Friday, October 14, 2016

Introducing Power BI Embedded talk at Cloud Summit

Hi All,

Earlier today, I presented an "Introducing Power BI Embedded" talk covering platform capabilities and tools at the Cloud Summit event at Microsoft's Chevy Chase office.

The session covered Power BI platform capabilities and tools, and Power BI Embedded as a PaaS option on the Microsoft Cloud platform.

I got a lot of questions about Power BI dataset refresh scheduling and about working with data, including DirectQuery vs. import options when authoring reports in Power BI Desktop. I also covered the need for the Power BI Gateway in hybrid scenarios.


Thursday, October 13, 2016

Fixing powerbi.d.ts missing modules errors in Visual Studio 2015


While working in Visual Studio 2015, I got these errors due to missing modules in the powerbi.d.ts file. I have an ASP.NET MVC project that uses Power BI Embedded, and I would like to get the application up and running, but I get these errors while building the app.

These errors are due to missing TypeScript tools for Visual Studio 2015. Once you install them, you will be able to build and run your app, and all these errors will disappear.

To fix this problem, follow these steps:

  • Open Tools | Extensions and Updates.
  • Select Online in the tree on the left.
  • Search for TypeScript using the search box in the upper right.
  • Select the most current available TypeScript version.
  • Download and install the package.
  • Build your project!

Hope this helps.

Monday, September 19, 2016

Extending Product Outreach with Outlook Connectors

Hi All,

Last Saturday I presented a talk titled "Extending Product Outreach with Outlook Connectors" at SharePoint Detroit, where I covered how to utilize Office 365 Groups to extend product outreach using Outlook group connectors, with demos.

Session Description:

Office 365 Connectors is a brand new experience that delivers relevant interactive content and updates from popular apps and services to Office 365 Groups. We are now bringing this experience to you, our Office 365 customers. Whether you are tracking a Twitter feed, managing a project with Trello or watching the latest news headlines with Bing—Office 365 Connectors surfaces all the information you care about in the Office 365 Groups shared inbox, so you can easily collaborate with others and interact with the updates as they happen. The session covers how to build your own Office 365 Connectors and how to work with Microsoft to help build one for your company.

Thursday, September 08, 2016

Build Intelligent Microservices Solutions using Azure

Hi All,

I had the pleasure last night to present at one of our local user groups to talk about building intelligent microservices in Azure.

The session covers in detail how to build intelligent microservices solutions using Cloud Services, including web and worker roles, Azure App Service features and Service Fabric in Azure. The session was demo driven: I demonstrated how to design and provision complete end-to-end solutions using cloud services with web roles, worker roles and Service Bus in Azure.
I also covered Azure App Service capabilities that help developers scale and monitor production applications, in addition to setting up continuous deployment.

Session objectives and takeaways:

  1. Benefits of creating microservices in the cloud
  2. End-To-End Use case for building cloud service with web & worker roles with service bus integration
  3. Azure App Service intelligent features including troubleshooting, CI, back up, routing, scheduling & other features
  4. Azure Service Fabric microservices platform

The presentation is posted below.

Wednesday, August 31, 2016

Building Big Data Solutions in Azure Data Platform @ Data Science MD

Hi All,

Yesterday I was at Johns Hopkins University in Laurel, MD, presenting how to build big data solutions in Azure. The presentation focused on the underlying technologies and tools that are needed to build end-to-end big data solutions in the cloud. I presented the capabilities that Azure offers out of the box, in addition to the cluster types and tiers that are available for ISVs and developers.

The session covers the following:

1) What the HDInsight cluster offers in the Hadoop ecosystem technology stack.
2) HDInsight cluster tiers and types.
3) HDInsight developer tools in Visual Studio 2015.
4) Working with HBase databases and Hive View, deploying Hive apps from Visual Studio.
5) Building, Debugging and Deploying Storm Apps into Storm clusters.
6) Working with Spark clusters using Jupyter, PySpark.

Session Title: Building Big Data Solutions in Azure Data Platform

Session Details:
The session covers how to get started building big data solutions in Azure. Azure provides different cluster types for the Hadoop ecosystem. The session covers the basic understanding of HDInsight clusters, including: Apache Hadoop HDFS, HBase, Storm and Spark. The session covers how to integrate with HDInsight in .NET using different Hadoop integration frameworks and libraries. The session is a jump start for engineers and DBAs with RDBMS experience who are looking to start working with and developing Hadoop solutions. The session is demo driven and will cover the basics of Hadoop open source products.

Friday, August 26, 2016

Study notes for exam 70-475: Designing and Implementing Big Data Analytics Solutions

Hi All,

Today I passed the "Designing and Implementing Big Data Analytics Solutions" Microsoft exam.

I have been preparing for this exam (70-475) for a couple of months and I have been using Hadoop ecosystem tools and platforms for awhile.

I wanted to master building big data analytics solutions using HDInsight clusters using Hadoop ecosystem which contains: Storm, Spark, HBase, Hive and HDFS. I worked to cover any gap in understanding I had in Azure Data Lake, ML, Python & R programming and Azure Machine Learning.

This exam primarily covers four main technologies (from most covered to least):

1) Hadoop ecosystem: Working with HDFS, HBase, Hive, Storm, Spark and understanding Lambda Architecture. If you want to know more about Lambda Architecture, read my blog post explaining it here.

2) Azure Machine Learning: building/training models, predictive models, classification vs. regression vs. clustering, recommender algorithms, building custom models, executing code in R and Python, and ingesting data from Azure Event Hubs with transformation in Stream Analytics.

3) Azure Data Lake: building pipelines, activities and linked services; moving, transforming and analyzing data; working with storage options in Azure (blob vs. block) and tools to transform data.

4) SQL Server and Azure SQL: security in transit and at rest, SQL Data Warehouse, and working with R in SQL Server 2016/Azure SQL.

My study notes while preparing to pass this test:

1) To protect data both in transit and at rest in Azure SQL Database: use "Always Encrypted" to make sure data in transit is encrypted, and use "Transparent Data Encryption" (TDE) to make sure that data at rest is encrypted. Read more about TDE here. Read more about the Always Encrypted feature here.
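As a quick illustration, turning on TDE for an Azure SQL database is a single T-SQL statement (the database name below is a made-up example):

```sql
-- Enable Transparent Data Encryption (encryption at rest) on a database
ALTER DATABASE [MySampleDb] SET ENCRYPTION ON;

-- Check the encryption state (encryption_state = 3 means encrypted)
SELECT database_id, encryption_state
FROM sys.dm_database_encryption_keys;
```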

2) When running an Azure ML experiment, if you are getting an "Out of memory" error, here is how to fix it:
   a) Increase the memory settings for the map and reduce operations in the import module.
   b) Use Hive query to limit the amount of data being processed in the import module.

3) The easiest way to manage Hadoop clusters in Azure is to assign every HDInsight cluster to a resource group and to apply tags to all related resources.

4) In Hadoop, when the data is row-based, self-describing with a schema, and needs compact binary serialization: it is recommended to use Avro.

5) Which Hadoop cluster type for query and analysis batch jobs:
     a) Spark: A cluster for In-memory processing, interactive queries, and micro-batch stream processing.
     b) Storm: Real-time event processing.
     c) HBase: NoSQL data storage for big data systems.

6) Tips for importing data using Python in Azure ML:
    a) Missing values are converted into NA for processing. NA will be converted back to missing values when converted back to datasets.
    b) Azure Dataset are converted to data frames in Pandas. Pandas module is used to work with data in Python.
    c) Numeric column names are not ignored; the str() function is applied to them.
    d) Duplicate column names are not ignored. The duplicate column names are modified to make sure they have unique names.

7) The only Hadoop file storage option that supports ACID transactions is Apache ORC.

8) You have three utilities you can use to move data from local storage to managed cluster blob storage. These tools are: Azure CLI, PowerShell & AzCopy.

9) How to improve Hive queries using static vs dynamic partitioning, read more here.
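To illustrate the difference: a static partition insert names the partition value explicitly, while a dynamic partition insert lets Hive derive the partitions from the data. The table and column names below are hypothetical:

```sql
-- Static partitioning: the partition value is hard-coded in the statement
INSERT OVERWRITE TABLE logs PARTITION (dt = '2016-08-26')
SELECT msg, severity FROM staging_logs WHERE log_date = '2016-08-26';

-- Dynamic partitioning: Hive creates partitions from the dt column values
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT OVERWRITE TABLE logs PARTITION (dt)
SELECT msg, severity, log_date AS dt FROM staging_logs;
```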

10) Understand when to use Filter-Based Feature Selection in Azure ML.

11) Azure ML requires Python to store visualizations as PNG files. To configure Matplotlib in Azure ML, configure it to use the AGG backend for rendering and save charts as PNG files.

12) To detect potential SQL injection attempts on an Azure SQL database: enable Threat Detection.

13) To create synthetic samples of a dataset for classes that are underrepresented: use the SMOTE module in Azure ML.

14) D14 v2 virtual machines in Azure support 100 GB of in-memory processing.

15) You can add multiple contributors to an Azure ML workspace as users.

16) Understand the minimum requirements for each cluster type in HDInsight:
       a) At least 1 data node for the Hadoop cluster type.
       b) At least 1 region server for the HBase cluster type.
       c) Two Nimbus nodes for the Storm cluster type.
       d) At least 1 worker node for the Spark cluster type.

17) If you want to store a file with a size greater than 1 TB, you need to use Azure Data Lake Store.

18) In Azure Data Factory (ADF), you can train, score and publish experiments to AzureML using:
      a) AzureML Batch execution: to train and score.
      b) AzureML Update resource activity: to update AzureML web services.

19) In Azure Data Factory (ADF), a pipeline is used to configure several activities; the sequence and timing of activities in a pipeline can be managed as a unit.

20) Working with R models in SQL Server 2016/AzureSQL: read more here.

21) Apache Spark in HDInsight can read files from Azure blob storage (WASB) but not SQL Server.

22) Always Encrypted protects data both in transit and at rest. This feature also allows you to store encryption keys on premises.

23) Transparent Data Encryption (TDE) secures data at rest; it does not protect data in transit, and the keys are stored in the cloud.

24) Distcp is a Hadoop tool to copy data between HDInsight cluster blob storage and Azure Data Lake Store.

25) AdlCopy is a command-line utility to copy data from Azure Blob storage into an Azure Data Lake Store account.

26) AzCopy is a tool to copy data to and from Azure Blob storage.
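The command shapes for the three tools look roughly as follows; the account names, containers and keys are placeholders, and exact flags may differ between tool versions, so check each tool's help output:

```
# Distcp: run on the cluster to copy from blob storage (WASB) to Data Lake Store
hadoop distcp wasb://mycontainer@myaccount.blob.core.windows.net/data adl://mydatalake.azuredatalakestore.net/data

# AdlCopy: copy from blob storage into a Data Lake Store account
AdlCopy /Source https://myaccount.blob.core.windows.net/mycontainer/data/ /Dest swebhdfs://mydatalake.azuredatalakestore.net/data/ /SourceKey <storage-key>

# AzCopy: copy a local folder up to blob storage
AzCopy /Source:C:\data /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:<storage-key> /S
```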

27) While working with large binary files, if you would like to optimize the speed of an Azure ML experiment, you can do the following:
      a) Developers should write data as block blobs.
      b) The blob format should be CSV or TSV.
      c) You should NOT turn off the cached results option.
      d) You cannot filter data using SQL; use the R language instead.

28) SQL DB contributor role allows monitoring and auditing of SQL databases without granting permissions to modify security or audit policies.

29) To process data in HDInsight clusters in Azure Data Factory (ADF):
      a) Add a new item to the pipeline in the solution explorer.
      b) Select Hive Transformation.
      c) Construct JSON to process the cluster data in an activity.

30) Understanding Tumbling vs Hopping vs Sliding Windows in Azure Stream Analytics. (link)
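For example, a tumbling-window query in Stream Analytics counts events in fixed, non-overlapping intervals; the input/output names and timestamp column below are hypothetical:

```sql
-- Count events per 10-second tumbling window (windows never overlap)
SELECT COUNT(*) AS EventCount, System.Timestamp AS WindowEnd
INTO [Output]
FROM [Input] TIMESTAMP BY EventTime
GROUP BY TumblingWindow(second, 10)

-- A hopping window overlaps:  HoppingWindow(second, 10, 5)
-- A sliding window moves with every event:  SlidingWindow(second, 10)
```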

Hope this helps you get ready to pass the test, and good luck everyone!
Let's get all certified, y'all data wranglers :-)

-- ME

1) Microsoft Exam 70-475 details, skills measured and more:

Thursday, August 25, 2016

GIT 101 in Visual Studio Team Services (VSTS)

Hi All,

I have been working with multiple developers on sharing project code using git. I found that git is new to a lot of developers who have been using Team Foundation Server (TFS), Visual Studio Online (VSO, now VSTS), or any other centralized source control system.

What is the difference between TFS/VSO/VSTS versus Git?

If you have been using TFS, VSTS or VSO, those all fall under Team Foundation Version Control (TFVC), which is a centralized source control system.

Git, on the other hand, is a distributed version control system (DVCS), which means you have local and remote code repositories. You can commit your code to your local repo without touching the remote repo (until you want to). You can also push your code to the remote repo so other team members can get your changes.
This is a fundamental concept to understand when working with git: git is distributed, has local and remote repos, works offline, and is a great way to enable collaboration among developers.

Popular git platforms: GitHub, VSTS, Bitbucket, GitLab, RhodeCode and others.

This article focuses on managing code across multiple developers working in a team, and the git best practices around that; this also applies to any other git platform. For the sake of simplicity, this article focuses on using Git in VSTS.

Basic terminology and keywords to know when working with Git:

1) Branch: In git, every developer should have their own branch. You write code and commit your changes to your local branch. To sync with other developers, get the latest from the master branch and merge it into yours, so you can make sure everything compiles and works before creating a new pull request to the master branch (merging your code back into master).

2) Fetch: Downloads changes from the remote branch into your local repo. Fetch downloads these commits and adds them to the local repo without updating your local branch. To update your local branch, execute a merge (or a pull) so it is up to date with its remote.

3) Pull: Gets updates from your remote branch into your local branch; basically, it keeps your local branch up to date with its remote one. Pull does a fetch and then a merge into your local branch.
So just use pull to update your local branch from its remote one.

4) Pull vs. Fetch: git pull does a git fetch, so if you use git pull you have also executed git fetch. Execute fetch if you want to get the updates but do not want to merge them into your local branch yet.

5) Push: Sends committed changes to the remote branch so they are shared with others.
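Because VSTS git repos are plain git, the fetch/pull/push distinction above can be demonstrated entirely on the command line, using a local bare repository to stand in for the remote; the paths and user names below are made up:

```shell
# A minimal local demo of clone/commit/push and fetch vs. pull (no server needed).
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/remote.git"
git --git-dir "$tmp/remote.git" symbolic-ref HEAD refs/heads/master

# Developer 1: clone, commit, push.
git clone -q "$tmp/remote.git" "$tmp/dev1"
cd "$tmp/dev1"
git config user.email dev1@example.com && git config user.name dev1
git symbolic-ref HEAD refs/heads/master
echo hello > readme.txt
git add readme.txt && git commit -q -m "initial commit"
git push -q origin master

# Developer 2: clone the shared repo.
git clone -q "$tmp/remote.git" "$tmp/dev2"
cd "$tmp/dev2"
git config user.email dev2@example.com && git config user.name dev2

# Meanwhile, developer 1 pushes another commit.
cd "$tmp/dev1"
echo update >> readme.txt && git commit -q -am "second commit" && git push -q origin master

# Developer 2: fetch downloads the new commit but does NOT move the local branch...
cd "$tmp/dev2"
git fetch -q origin
behind_after_fetch=$(git rev-list --count HEAD..origin/master)

# ...while pull (fetch + merge) brings the local branch up to date.
git pull -q origin master
behind_after_pull=$(git rev-list --count HEAD..origin/master)
echo "behind after fetch: $behind_after_fetch, after pull: $behind_after_pull"
```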

Basic rules for working with git in Visual Studio that everyone should be aware of before they start coding:
This section covers all the actions needed to work with git in the Visual Studio Team Explorer window.

1) You need to click Sync in Team Explorer to refresh the current branch from the remote branch, followed by Pull to get those changes merged into the current local branch. Sync just shows status; to actually merge those changes you need to click the Pull link.

2) You need to click Changes in Team Explorer every time you want to check in or get the latest updates for the current branch.

3) You need to click on Branches in team explorer every time you want to manage branches in Visual Studio.

4) You need to click on Pull Requests in team explorer every time you want to manage pull requests in Visual Studio.

A) Setup a project for your team using Git in VSTS:

1) Visit
2) Login to your account.
3) Click on New button to create a new git project.

4) Once you hit the Create project button, the project will be created in a few seconds, and then we will use Visual Studio for some necessary steps.

5) To start using Visual Studio with the created project, click on the Code tab for the newly created project "MicrosoftRocks".

6) Click on Clone in Visual Studio button.
7) This will open up VS and then open up Team Explorer window.
8) Click "Clone this repository"; this allows you to create a local repo of the remote repo we just created in VSTS.

9) Select a local repo folder and click on Clone.

10) Now, VS shows a message that we can create a new project or solution.

11) You can go ahead and create any project in VS; the only thing to note is to uncheck the "Create new Git repository" checkbox when creating a new project, since we have already created our local repo.

12) First things first: you need to exclude the bin and debug folders from getting checked into Git. So, click Settings in Team Explorer --> click the Repository Settings link under Git --> click the Add link to add a .gitignore file.

13) To edit the .gitignore file, click the edit link. Then, add the following at the bottom of the file:

# exclude bin and debug folders
[Bb]in/
[Oo]bj/

14) Build the project, and then we will do our first check-in to the master branch.

15) Click on Home icon in team explorer to go back to the home page to manage source control options.

16)  Click on Settings, Type a check in message and then click on Commit Staged.

17) The Commit Staged action checks in all our changes to our local repo. These changes have not been shared to the remote yet, so we need to sync to share them with others.
You will notice that VS shows you a sync link afterwards so you can sync changes immediately, or you can click Sync from Team Explorer and then click Push.

18) Now the project, with its .gitignore file, is ready in the master branch for everyone; next, each developer will create their own branch and start developing.

B) Create your own branch in Visual Studio:

1) Every developer in a team should create their own branch and get the latest from master to start developing in our project.

2) From Visual Studio, click on the master branch in the bottom bar and click New Branch.

3) Enter your branch name ("dev1"), select which branch to create yours from ("master"), and then click the Create Branch button. This step creates your own branch, gets the latest from master, and switches to your branch so you can start coding in it.

4) You will notice that the name of the current branch has changed from master to dev1 in Visual Studio. Now you can start working in your branch.

5) Once you are done coding a feature, or you are at a good point to check in some code, follow these steps to check in your changes:
  • Click Changes in Team Explorer, write a message, and then click the Commit All button.
  • You can also click Sync to push these changes to the remote branch in VSTS online.
  • Remember, these changes are still in your branch; no one else sees them until you submit them to the master branch.

6) Publish your branch: it is important to publish your branch to VSTS. Follow these steps:
  • From team explorer, click on branches.
  • Right click on your branch.
  • Click on Publish Branch.

C) How to submit your code to the master branch:

1) First, you need to make sure that your local master branch is up to date. To do that, switch to the master branch, click Sync, and then click Pull in the Team Explorer window.

2) Second, switch back to your branch "dev1" and then click Branches in Team Explorer.

3) Click on Merge link.

4) Select to merge from "master" into "dev1" and then click Merge. This step merges all the master changes into your branch, so your branch gets other people's work, and you can fix any conflicts (if any) before submitting all changes to master using a Pull Request (PR).

5) Now we need to submit all these changes to the master branch, after making sure there are no conflicts. Click Pull Requests in Team Explorer.

6) Click on New Pull Request link.

7) This will open the Visual Studio online webpage to submit a new pull request.
8) Click on New Pull Request button.

9) Submitted Pull Requests (PRs) will be either approved or rejected by the repository admins. If you are an admin, you can approve/reject and complete submitted PRs in a project, at which point the changes are committed/merged into the master branch.

10) Click the Complete button to complete the pull request. Visual Studio will prompt a popup window where you can add any notes; then click the Complete merge button. This is the last step to merge your changes into master after your PR has been approved.

11) Repeat the same steps every time you want to merge your changes into master using PRs.
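Under the hood, steps 1-4 of this section are a plain git merge. The same flow can be sketched on the command line with a throwaway repo; the file and branch names are illustrative:

```shell
# Merge master into a feature branch before raising the pull request.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email me@example.com && git config user.name me
git symbolic-ref HEAD refs/heads/master

# Base commit on master.
echo base > app.txt
git add . && git commit -q -m "base"

# Feature branch dev1 with its own commit.
git checkout -q -b dev1
echo feature > feature.txt
git add . && git commit -q -m "feature work"

# Meanwhile, master moves forward (someone else's PR was merged).
git checkout -q master
echo fix > fix.txt
git add . && git commit -q -m "fix on master"

# Step 4: merge master INTO dev1 before raising the PR,
# so conflicts are resolved in the feature branch, not in master.
git checkout -q dev1
git merge -q -m "merge master into dev1" master
ls    # dev1 now contains app.txt, feature.txt and fix.txt
```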

Hope this article has given a detailed walkthrough of how to work in a team using Git in Visual Studio Team Services, and how to manage your check-ins/check-outs, merges, branching and PRs in Git.


-- ME

Tuesday, August 23, 2016

How to create websites with MySQL database in Azure


Microsoft recently announced Azure App Service support for In-app MySQL Feature (Still in Preview).

What does "In-App MySQL" in App Service mean?

It means that a MySQL database is provisioned and shares resources with your web app. MySQL in-app enables developers to run the MySQL server side-by-side with their web application within the same environment, which makes it easier to develop and test PHP applications that use MySQL.

So you can have your MySQL in-app database along with your website in Azure App Service, with both sharing the same resources. No need to provision a separate VM for MySQL or purchase ClearDB for your websites under development. The feature is available for new or existing web apps in Azure.

We definitely recommend moving off the in-app MySQL database when moving to production, since the intention is to keep this for development and testing purposes only.

In-app MySQL is like hosting a SQL Server Express DB instance in your app before moving it to an actual SQL Server instance.

How to provision MySQL In-App to Azure App Service?

Create a new web app or select an existing web app, and you will find the "MySQL in App (Preview)" option. Switch MySQL in App to On and then save.

Current Limitation for MySQL In App Feature:
1) Auto Scaling feature is not supported.
2) Enabling Local Cache is not supported.
3) You can access your database only using the phpMyAdmin web tool or the KUDU debug console.
4) Web Apps and WordPress templates support MySQL In App when you provision it in Azure Portal. The team is working to expand this to other services in Azure portal.

Hope this helps.

1) MySQL in-app for web apps:

Monday, August 22, 2016

Avro vs Parquet vs ORCFile as Hadoop storage files

While developing big data applications and systems in Hadoop, every time we store data in a Hadoop cluster we think about the best way to store it. There are tons of challenges when storing petabytes of data, including how much storage is required and how to read your data faster!

In Hadoop, you can store your files in many formats. I would like to share some of these options and when to use each.

How to store data files in Hadoop and what are the available options:

1) Avro
Apache Avro™ is a data serialization system. Avro provides row-based data storage; the schema is encoded in the file, and it provides binary data serialization.

Use Case: Use Avro if you have a requirement to support binary data serialization for your data while maintaining a self-contained schema in row-based data files.

Read more about Avro:

2) Parquet
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.

Use Case: You want to store data in column-based files and save on storage. Parquet uses efficient encoding and compression data representations (schemas) on your Hadoop clusters.
It works with different processing frameworks and programming languages.

Read more about Apache Parquet:

3) ORCFile
Apache Orc is the smallest, fastest columnar storage for Hadoop workloads. ORC is a self-describing type-aware columnar file format designed for Hadoop workloads. It is optimized for large streaming reads, but with integrated support for finding required rows quickly. Storing data in a columnar format lets the reader read, decompress, and process only the values that are required for the current query. Because ORC files are type-aware, the writer chooses the most appropriate encoding for the type and builds an internal index as the file is written.

Use Case: Use ORC when you need to store your data in columnar storage in Hadoop for efficient and fast data retrieval. An ORC file contains its schema, which makes reading values very fast.

Read more about Apache ORC:

Hope this helps!

Wednesday, August 17, 2016

How to read images from a URL in ASP.NET Core


I was building an ASP.NET Core web API that was supposed to read images from an external URL. Even though I have done this dozens of times, I got stuck for a bit trying to get the same code that reads an image from a URL working in my ASP.NET Core project in Visual Studio 2015.

After a little bit of searching, I found out that before trying to read a static file such as an image from your controller, you first need to enable directory browsing and configure the routing path, so you are able to view the image in a browser by hitting its URL.

So, follow the steps below to be able to read images from a URL (in my case these images were part of the project):

1) Move the images folder (or any static files folder) under the wwwroot folder.
2) Open the startup.cs file and enable directory browsing.

C# code to enable directory browsing and serving static files:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // Serve static files from wwwroot/images under the /images request path
    app.UseStaticFiles(new StaticFileOptions()
    {
        FileProvider = new PhysicalFileProvider(
            Path.Combine(Directory.GetCurrentDirectory(), @"wwwroot\images")),
        RequestPath = new PathString("/images")
    });

    // Enable directory browsing for the same folder
    app.UseDirectoryBrowser(new DirectoryBrowserOptions()
    {
        FileProvider = new PhysicalFileProvider(
            Path.Combine(Directory.GetCurrentDirectory(), @"wwwroot\images")),
        RequestPath = new PathString("/images")
    });
}


3) Run your app and try to load an image from the browser, for example:

4) You will be able to view the image in the browser. Now, let's read the image in C# from a URL.

// read the image from the remote URL
using (HttpClient c = new HttpClient())
{
    using (Stream s = await c.GetStreamAsync(imgUrl))
    {
        // do any logic with the image stream: save it, store it, etc.
    }
}

If step #3 doesn't work (this is where I got stuck!), the GetStreamAsync method will throw an exception (404 not found) because the app hasn't been configured to serve static files.

Hope this helps!

1) Working with static files in ASP.NET Core:

Wednesday, August 03, 2016

Building Big Data Solutions using Hadoop in Azure

Hi All,

Today I am in New York City presenting how to build big data solutions in Azure. The presentation focuses on the underlying technologies and tools that are needed to build big data solutions.

The session also covers the following:

1) What the HDInsight cluster offers in the Hadoop ecosystem technology stack.
2) HDInsight cluster tiers and types.
3) HDInsight developer tools in Visual Studio 2015.
4) Working with HBase databases and Hive View.
5) Building, Debugging and Deploying Storm Apps.
6) Working with Spark clusters.

Session Title: Building Big Data Solutions in Azure.

Session Details:
The session covers how to get started building big data solutions in Azure. Azure provides different cluster types for the Hadoop ecosystem. The session covers the basic understanding of HDInsight clusters, including: Apache Hadoop HDFS, HBase, Storm and Spark. The session covers how to integrate with HDInsight in .NET using different Hadoop integration frameworks and libraries. The session is a jump start for engineers and DBAs with RDBMS experience who are looking to start working with and developing Hadoop solutions. The session is demo driven and will cover the basics of Hadoop open source products.

Event Url:

Hope this helps!

Tuesday, August 02, 2016

Working with Hive in HDInsight


While working on building big data solutions in Azure HDInsight clusters, I found some really nice new tools that have been added to HDP to help you easily work with Hive and HBase datastores.

In this blog post, I would like to share that you can manage your Hive databases and queries using Hive View in HDInsight clusters.

I have provisioned a Linux-based Spark cluster in HDInsight. Spark clusters come with preloaded tools, frameworks and services; the Hive service is preloaded and configured by default as well.

Follow these steps to work with Hive:

1) From Azure Portal, select your HDInsight cluster.
2) Click on Dashboard.
3) Enter your admin username and password.
4) This opens the Ambari homepage for your cluster.

5) From the top right corner, click on Hive View.

6) You will be able to write SQL statements in the Hive query editor as you are used to.

Hive View also contains other capabilities, such as defining UDFs and uploading tables to Hive.
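For a quick smoke test in Hive View, HDInsight clusters ship with a small sample table you can query right away (assuming the default sample data hasn't been removed from your cluster):

```sql
-- hivesampletable is preloaded on HDInsight clusters
SELECT clientid, devicemake, devicemodel
FROM hivesampletable
LIMIT 10;
```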

Hope this helps.

Wednesday, July 20, 2016

Easily construct Outlook Group Connector JSON messages in C#

Hi All,

If you are building an Outlook Group Connector, you are spending a lot of time writing the JSON message and specifying different schema elements and attributes to build a canvas that looks like the figure below. I've got good news for you!

I've got your back: I published an Outlook Group Connector SDK ver. 1.1 NuGet package that includes tons of extension methods and features that help you easily build your JSON payload message.
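For comparison, a hand-written payload for an incoming webhook looks roughly like this; the field names follow the connector card schema as I recall it, and the title, facts and URL are made-up examples, so treat this as a sketch rather than the authoritative schema:

```json
{
  "summary": "This is the subject for the sent message to an outlook group",
  "title": "Build 1.0.42 succeeded",
  "sections": [
    {
      "title": "Facts",
      "facts": [
        { "name": "Status", "value": "Succeeded" },
        { "name": "Duration", "value": "4 minutes" }
      ]
    }
  ],
  "potentialAction": [
    {
      "@context": "http://schema.org",
      "@type": "ViewAction",
      "name": "check details here",
      "target": ["https://example.com/build/42"]
    }
  ]
}
```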

How to send a message in C# to a group:

Message message = new Message()
{
    summary = "This is the subject for the sent message to an outlook group",
    title = msg
};
message.AddFacts("Facts", facts);
message.AddImages("Images", images);
message.AddAction("check details here", "");

var result = await message.Send(webhookUrl);

GitHub Code and Sample links:

1) GitHub Repo for SDK and Samples apps including console & web apps (link).
2) NuGet package published for use in your apps (link), or search for "Office365ConnectorSDK" in VS 2015.

Hope this helps.

Friday, July 15, 2016

Get started with Outlook Connectors with a sample showcase application

Hi All,
Office 365 Connectors provide a compelling extensibility solution for developers. Developers can build connectors through incoming webhooks to generate rich connector cards. Additionally, with the new "Connect to Office 365" button, developers can embed the button on their site and enable users to connect to Office 365 groups.

A sample showcase for outlook connectors integration
I have built an application that demonstrates an Outlook connector integration showcase, including integration of the "Connect to Office 365" button into a third-party website and how to send a detailed canvas message to a group.
How to Use it:
  • Outlook Connector landing page: Click the "Enterprise" menu item and install our connector into one of your Office 365 groups.
  • Send a message to any group: Click on "Send Message" menu item, set a title message and group name and click on Send button. Check your group and you will be notified with a full detailed canvas message.

Useful Resources: 

A general overview of what Office 365 Connectors are and how end-users interact with them.

Complete documentation for building Office 365 Connectors.

A sandbox environment for developer experimentation.

Create and manage outlook connector settings in this dashboard.


Friday, July 01, 2016

How to run web browsers in private mode using Visual Studio 2015

Hi All,

I'd like to share a cool tip for running web applications in Visual Studio: open your web browser in private mode if you are using Internet Explorer, or incognito mode if you are using Chrome.

This practice is especially useful if you use different logins in the same browser and want to avoid cached-login issues.

Follow these steps to add private mode browsers in Visual Studio 2015:

1) From any HTML or view page in Visual Studio, right-click and choose Browse With.

2) Click on the Add button to add IE in private mode.
3) Enter the following values for IE private mode and then click OK:
Program: C:\Program Files (x86)\Internet Explorer\iexplore.exe
Arguments: -private

For Chrome enter the following:
Program: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
Arguments: -incognito

Now you will be able to run web applications in private-mode browsers from Visual Studio 2015.

Hope this helps!

Monday, June 27, 2016

How to call cURL from Command Line

Hi All,

I installed the cURL tool on my Windows 10 machine to post JSON messages to an HTTP webhook endpoint. But when I open the command line and type curl, I get the following error:

'curl' is not recognized as an internal or external command, operable program or batch file.

By default, when you install cURL it gets installed on this default directory:
C:\Program Files\cURL

If I navigate to the installation path in a command-line window and type curl, the tool works. But because I don't want to remember this path every time I use the tool, I need to add it to the Path environment variable so the system knows where to look when I call it.

Here is how to accomplish this:

1) Open Control Panel.
2) Click on System and Security.
3) Click on Advanced system settings link on the left pane.
4) This will open system properties pop up window.
5) Click on the Environment Variables button on the Advanced tab.
6) In the System Variables section (bottom section), select the Path variable and click on the Edit button.
7) Append the cURL bin directory to the end of the existing value:

;C:\Program Files\cURL\bin

8) Click on the OK button.
9) Open a new command-line window and type curl.
10) Now you can use the curl tool from the command line without navigating to the tool's installation directory.
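What the Path edit accomplishes can be modeled in a few lines: the shell resolves a bare command name by scanning each directory on the search path, in order, for a matching executable. A simplified Python sketch (the function name is mine, not a real API):

```python
import os

def find_on_path(name, path_dirs):
    """Return the full path of the first file called `name` found in
    `path_dirs`, mimicking how the shell resolves a bare command name."""
    for directory in path_dirs:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None  # -> "'name' is not recognized as an internal or external command"
```

Appending C:\Program Files\cURL\bin to Path simply adds one more directory to that scan.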


Saturday, June 25, 2016

Azure Notification Hub unable to upload .p12 for APN

Hi All,

While I was setting up a Notification Hub for APNS, I reached the point of uploading the certificate I created in my Apple developer account to the Azure portal.

I was getting the following error every time I uploaded my exported .p12 certificate to Azure:

This is the error message: Error updating the Notification Hub

I wanted a more detailed error message, so I jumped into the old portal, which showed more detail:

Error Message: "SubCode=40000. Failed to validate credentials with APNS. Error is The credentials supplied to the package were not recognized"

After a couple of hours, I found out that the Azure documentation for setting up APNS was not precise about exactly what to export after installing the certificate on your machine.

Here is what you need to do to successfully upload .p12 file into Azure:

1) In the Keychain tool, select Keys from the left pane.
2) Expand the target key that contains the certificate, right-click on the certificate only, and click Export.
3) Set a password for the .p12 file and save it to your disk.
4) In the Azure portal, select the exported certificate (.p12) file, enter the same password you set in step #3, and click upload.

You will be able to successfully upload your certificate to Azure!


Monday, June 20, 2016

Python for .Net Developers

Hi All,

I have been working with Python recently, and I would like to share a quick and easy Python tutorial for developers with OOP experience in languages such as C# or Java.

This mini course gets you up to speed to start developing in Python with no prior knowledge, because it covers what every developer wants to know about the basics of the Python programming language.

Here are my takeaways from this tutorial:

1) Python uses indentation to delimit blocks of code; it doesn't use curly brackets to open and close functions, classes, etc. as in C# or Java. Spaces are preferred over tabs (PEP 8 recommends 4 spaces per indentation level), and the two must not be mixed.

2) You can use lists ([]), dictionaries ({key: value}), tuples (()), and sets (set([list])) for storing collections in code. Use the option appropriate to your case: set items have no duplicates, and tuples are immutable (once created, they can't be changed).
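The four collection types above can be tried directly in the interpreter; a small sketch (the variable names are illustrative):

```python
# The four built-in collection types mentioned above
languages = ["python", "c#", "java"]        # list: ordered, mutable
ages = {"alice": 30, "bob": 25}             # dict: key -> value lookup
point = (3, 4)                              # tuple: immutable once created
tags = set(["web", "data", "web"])          # set: duplicates removed

print(len(tags))       # 2 -- the duplicate "web" was dropped
print(ages["alice"])   # 30
```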

3) You can create a main function, as in Java or C# console apps, as the entry point for your program.

# Main function as the entry point of your program
if __name__ == '__main__':
    main()   # call your main() function here

4) You can include other files in your Python file with an import statement: import myotherfile. Note: do not include the .py extension in the import statement.

5) You can define classes and functions in Python. You can also create instance and static (class) variables for class members.

class Person:
    population = 0
    def __init__(self, myAge):
        self.age = myAge        # instance variable
        Person.population += 1  # static (class) variable
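A runnable sketch of the instance/static distinction (repeating the class so the example stands alone; the names alice and bob are illustrative):

```python
class Person:
    population = 0                 # static (class) variable, shared by all
    def __init__(self, my_age):
        self.age = my_age          # instance variable, one per object
        Person.population += 1

alice = Person(30)
bob = Person(25)
print(alice.age)          # 30 -- each instance has its own age
print(Person.population)  # 2  -- one shared counter for the whole class
```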

6) You can use Visual Studio Code as a development IDE for Python. Download and install VS Code for free, then install Python and the Python VS Code extension. Here are the steps:

a) Download Visual Studio Code for free:
b) Install Python on your machine:
c) Open VS Code, press F1, type install extension, hit Enter, and then type: python. VS Code provides IntelliSense and debugging capabilities for Python.


More useful links:

1) Python Tutorial: 

Thursday, June 16, 2016

Unleash the power of office add-ins with Office Development Patterns and Practices

Hi All,

I was pleased to speak at Cap Area .NET SharePoint Special Interest Group user group yesterday.
In this presentation I covered the underlying concepts of extending Office applications and how Microsoft supports this component architecture through the Web Extensibility Framework (WEF), the core runtime platform for building web-based extensions, or add-ins, to Office applications.

I covered the following topics:
1) Office Add-ins overview: add-in shapes/types, the runtime framework, and the anatomy of an Office add-in.
2) Building Office Add-ins using open source tools, such as the Yeoman tool, which provides a scaffolding platform for Office add-in templates. When creating an add-in with the Yeoman tool, you can use any text editor to develop your Office add-in.

3) VS tools for Office Add-ins: this covers the updated Office Developer Tools in Visual Studio 2015 Update 2, which include web add-in templates as well as VSTO templates.

Code Samples:
1) UPS Tracker Add-in on Github is here.
2) Using Yeoman Tool for building Office Add-ins:


Wednesday, June 15, 2016

Content was blocked because it was not signed by a valid security certificate when running Office Add-Ins

Hi All,

While I was developing an Office add-in and using IE to view a deployed Outlook add-in, I was getting the following error:

Content was blocked because it was not signed by a valid security certificate.

It turns out that this issue is related to the self-signed certificate used by my local web server via the gulp-webserver plugin, since I am using Visual Studio Code to develop my Office add-ins.

If you want to configure gulp server to use a certificate, open gulpfile.js in your project and add https to the configuration of gulp web server as explained here.

The other solution is to use Chrome instead of IE, since IE requires self-signed certificates to be added to the trusted certificate store on your machine. If you don't want to go through adding the self-signed certificate to the trusted store, just use Chrome and your add-in will load with no issues.

Hope this helps.

Monday, June 06, 2016

How to remotely connect to a linux based Spark Cluster in Azure

Hi All,

In this blog post I show how to connect remotely to a Linux-based Spark cluster in Azure.

Today, Microsoft announced Spark general availability in Azure; read the official announcement here. The technical announcement from the SQL Server team is here.

Spark GA in Azure

Once you provision a Linux-based Spark cluster, you will need to log in to it remotely using SSH to start executing Spark commands in the Spark shell.

Open the Azure portal and search for your cluster, or find Spark clusters under the HDInsight clusters tab (if you don't have that tab, add it as a favorite via the Browse button in the portal).

Click on the Secure Shell button; this opens a new blade with the host name we will use to sign in to the Spark cluster over SSH and start using the Spark shell.

Secure SSH to Spark from Azure Portal
Copy the host name (if you are on Windows) and run the PuTTY tool to connect to the Spark cluster using the host name, username, and password you set when you provisioned the cluster.

SSH host name config
Then open the PuTTY tool, enter the host name, and click on the Open button.

PuTTY will prompt you for your username and password; once you are logged in successfully, you are in the Spark shell and ready to start working with Spark!

Hope this helps.

Friday, June 03, 2016

Thoughts on Lambda Architecture

Hi All,

Recently I read "Big Data: Principles and best practices of scalable realtime data systems" by Nathan Marz and James Warren. The book is very informative in analyzing how to build scalable data systems using the Hadoop ecosystem.

Lambda Architecture

Regardless of which tools you use to implement it, the biggest takeaway of this book is its detailed description of the Lambda Architecture (LA). I am new to LA and to how this architecture is laid out for building highly scalable big data systems.

LA provides a separation of concerns for building large data systems, especially in separating the batch layer from the serving and speed layers.

The Lambda Architecture (LA) consists of three main layers:
1) Batch Layer: contains the original master dataset (immutable, append-only data) and precomputes functions over the master dataset.

Hadoop is the standard batch-processing system used in most high-throughput architectures, with MapReduce as its computation model for big data. Recently, developers have leaned toward Spark as a newer computation system for big data, thanks to its high performance and in-memory processing.
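The style of computation the batch layer runs can be sketched in plain Python: a map phase emits (key, value) pairs and a reduce phase aggregates them, here as a word count. This is only a toy model of what MapReduce or Spark do across a cluster:

```python
from collections import defaultdict

def map_phase(lines):
    # Emit a (word, 1) pair for every word in the input
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Sum the values for each key, producing the final counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data", "big systems"]
print(reduce_phase(map_phase(lines)))  # {'big': 2, 'data': 1, 'systems': 1}
```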

2) Serving Layer: contains batch views that serve the precomputed results with low-latency reads.
Examples of serving layer technologies: Apache Cassandra, Apache HBase, ElephantDB, and Cloudera Impala.

3) Speed Layer: contains real-time views that fill the latency gap by querying recently obtained data. The speed layer is responsible for any data not yet available in the serving layer.

You can use Apache Storm to perform realtime computation in the speed layer.

It is recommended to use Apache Cassandra or Apache HBase for speed layer output, and ElephantDB or Cloudera Impala for batch layer output.
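The way the three layers come together at query time can be sketched in a few lines: a query merges the precomputed batch view with the speed layer's real-time view covering data the batch layer hasn't absorbed yet. The view contents and names below are illustrative, not tied to any particular framework:

```python
# Page-view counts: what the batch layer precomputed into the serving layer
batch_view = {"page_a": 100, "page_b": 40}

# Recent events the speed layer has seen since the last batch run
realtime_view = {"page_a": 3, "page_c": 1}

def query(page):
    # A Lambda Architecture query merges both views:
    # total = precomputed batch result + not-yet-absorbed real-time delta
    return batch_view.get(page, 0) + realtime_view.get(page, 0)

print(query("page_a"))  # 103
```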

Hope this article helps you get started designing big data systems with high throughput and low latency.


a) Lambda Architecture website:

b) Cloudera Impala: