Ruby on Rails on Azure App Service (Web Sites) with Linux (and Ubuntu on Windows 10)

Running Ruby on Rails on Windows has historically sucked. Most of the Ruby/Rails folks are Mac and Linux users and haven’t focused on getting Rails to be usable for daily development on Windows. There have been some heroic efforts by a number of volunteers to get Rails working with projects like RailsInstaller, but native modules and dependencies almost always cause problems. Even more, when you go to deploy your Rails app you’re likely using a Linux host so you may run into differences between operating systems.

Fast forward to today and Windows 10 has the Ubuntu-based Windows Subsystem for Linux (WSL) and a native Bash shell, which means you can run real Linux ELF binaries on Windows natively without a Virtual Machine…so you should do your Windows-based Rails development in Bash on Windows.

Ruby on Rails development is great on Windows 10 because you have Windows 10 handling the “windows” UI part and Bash and Ubuntu handling the shell.

After I set it up I want to git deploy my app to Azure, easily.

Developing on Ruby on Rails on Windows 10 using WSL

Rails and Ruby folks can apt-get update and apt-get install ruby, or they can install rbenv or rvm as they prefer. These days rbenv is preferred.

Once you have Ubuntu on Windows 10 installed you can quickly install “rbenv” like this within Bash. Here I’m getting 2.3.0.

~$ git clone https://github.com/rbenv/rbenv.git ~/.rbenv
~$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
~$ echo 'eval "$(rbenv init -)"' >> ~/.bashrc
~$ exec $SHELL
~$ git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
~$ echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
~$ exec $SHELL
~$ rbenv install 2.3.0
~$ rbenv global 2.3.0
~$ ruby -v
~$ gem install bundler
~$ rbenv rehash

Here’s a screenshot mid-process on my SurfaceBook. This build/install step takes a while and hits the disk a lot, FYI.

Installing rbenv on Windows under Ubuntu

At this point I’ve got Ruby; now I need Rails, as well as Node.js for the Rails Asset Pipeline. You can change the versions as appropriate.

$ curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
$ sudo apt-get install -y nodejs
$ gem install rails -v 5.0.1

You will likely also want PostgreSQL, MySQL, or Mongo, or you can use a cloud DB like Azure DocumentDB.

When you’re developing on both Windows and Linux at the same time, you’ll likely want to keep your code in one place or the other, not both. I use the automatic mount point that WSL creates at /mnt/c so for this sample I’m at /mnt/c/Users/scott/Desktop/RailsonAzure which maps to a folder on my Windows desktop. You can be anywhere, just be aware of your CR/LF settings and stay in one world.
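If you keep the code on the Windows side of that mount, one suggestion of mine (not from the original post) is to tell Git to keep LF line endings so the Linux tooling stays happy:

# keep LF endings in the repo and working tree; useful when sharing a checkout
# between Windows editors and WSL tooling
git config --global core.autocrlf input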

I did a “rails new .” and got it running locally. Here you can see Visual Studio Code with the Ruby extensions and my project open next to Bash on Windows.


After I’ve got a Rails app running and I’m able to develop cleanly, jumping between Visual Studio Code on Windows and the Bash prompt within Ubuntu, I want to deploy the app to the web.

Since this is a simple “Hello World” default Rails app I can’t deploy it somewhere where the Rails environment is Production. There’s no route in routes.rb (the “Yay! You’re on Rails” message is development-time only) and there’s no SECRET_KEY_BASE environment variable set, which is used to verify signed cookies. I’ll need to add those two things. I’ll change routes.rb quickly to just use the default Welcome page for this demo, like this:

Rails.application.routes.draw do
  # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html
  get '/' => "rails/welcome#index"
end

And I’ll add the SECRET_KEY_BASE in as an App Setting/ENV var in the Azure portal when I make my backend, below.

Deploying Ruby on Rails App to Azure App Service on Linux

From the New menu in the Azure portal, choose Web App on Linux (in preview as of the time I wrote this) from the Web + Mobile option. This will make an App Service Plan that has an App within it. There are a bunch of application stacks you can use here including Node.js, PHP, .NET Core, and Ruby.

NOTE: A few glossary and definition points. Azure App Service is the Azure PaaS (Platform as a Service). You run Web Apps on Azure App Service. An Azure App Service Plan is the underlying Virtual Machine (small, medium, large, etc.) that hosts n number of App Services/Web Sites. I have 20 App Services/Web Sites running under an App Service Plan with a Small VM. By default this is Windows but it can run PHP, Python, Node, .NET, etc. In this blog post I’m using an App Service Plan that runs Linux and hosts Docker containers. My Rails app will live inside that App Service and you can find the Dockerfiles and other info here https://github.com/Azure-App-Service/ruby or use your own Docker image.

Here you can see my Azure App Service that I’ll now deploy to using Git. I could also FTP.

Ruby on Rails on Azure

I went into Deployment Options and set up a local (to Azure) git repo. Now I can see that under Overview.


On my local machine, in Bash, I add azure as a remote. This can be set up however your workflow is set up. In this case, Git is FTP for code.

$ git remote add azure https:[email protected]:443/RubyOnAzureAppService.git
$ git add .
$ git commit -m "initial"
$ git push azure master

This starts the deployment as the code is pushed to Azure.

Azure deploying the Rails app

IMPORTANT: I will also add RAILS_ENV=production and a SECRET_KEY_BASE value to my Azure Application Settings. You can make a new secret with “rake secret”.
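As a sketch of what that looks like end to end (the az command assumes the newer “az webapp” syntax and made-up resource names, so treat it as illustrative rather than the exact steps from this post):

# generate a new secret for SECRET_KEY_BASE
rake secret

# store it (and RAILS_ENV) as App Settings; resource group and app name are placeholders
az webapp config appsettings set --resource-group MyResourceGroup --name RubyOnAzureAppService --settings RAILS_ENV=production SECRET_KEY_BASE=<output-of-rake-secret>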

If I’m having trouble I can turn on Application Logging, Web Server Logging, and Detailed Error Messages under Diagnostic Logs then FTP into the App Service and look at the logs.

FTPing into Azure to look at logs

This is all in Preview so you’ll likely run into issues. They are updating the underlying systems very often. Some gotchas I hit:

  • Deploying/redeploying requires an explicit site restart, today. I hear that’ll be fixed soon.
  • I had to dig log files out via FTP. They are going to expose logs in the portal.
  • I used the Kudu “sidecar” site at mysite.scm.azurewebsites.net to get shell access to the Kudu container, but I’d like to be able to ssh into or get access to the actual running container from the Azure Portal one day.

That said, if you’d like more internal details on how this works, you can watch a session from Connect() last year with developer Nazim Lala. Thanks to James Christianson for his debugging help!


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now


© 2017 Scott Hanselman. All rights reserved.
     

Exploring the new DevOps – Azure Command Line Interface 2.0 (CLI)

Azure CLI 2.0

I’m a huge fan of the command line, and sometimes I feel like Windows people are missing out on the power of text mode. Fortunately, today Windows 10 has bash (via Ubuntu on Windows 10), PowerShell, and “classic” CMD. I use all three, myself.

Five years ago I started managing my Azure cloud web apps using the Azure CLI. I’ve been a huge fan of it ever since. It was written in node.js, it worked the same everywhere, and it got the job done.

Fast forward to today and the Azure team just announced a complete Azure CLI re-write, and now 2.0 is out, today. Initially I was concerned it had been re-written and didn’t understand the philosophy behind it. But I understand it now. While it works on Windows (my daily driver) it’s architecturally aligned with Mac and (mostly, IMHO) Linux users. It also supports new thinking around a modern command line with support for things like JMESPath, a query language for JSON. It works well and cleanly with the usual suspects of course, like grep, jq, cut, etc. It’s easily installed with pip, or you can just get Python 3.5.x and then “pip install --user azure-cli.”
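For instance, the install plus a quick sanity check looks roughly like this (assuming pip is already on your PATH):

pip install --user azure-cli
az --version   # prints the CLI and component versions to confirm the install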

Linux people (feel free to check the script) can just do this curl, but it’s also in apt-get, of course.

curl -L https://aka.ms/InstallAzureCli | bash

NOTE: Since I already have the older Azure CLI 1.0 on my machine, it’s useful to note that these two CLIs can live on the same machine. The new one is “az” and the older is “azure,” so no problems there.

Or, for those of you who run individual Docker containers for your tools (or if you’re just wanting to explore) you can

docker run -v ${HOME}:/root -it azuresdk/azure-cli-python:<version>

Then I just “az login” and I’m off! Here I’ll query my subscriptions:

C:\Users\scott\Desktop>  az account list --output table
Name                 CloudName   Sub  State    IsDefault
-------------------  ----------  ---  -------  ---------
3-Month Free Trial   AzureCloud  0f3  Enabled
Pay-As-You-Go        AzureCloud  34c  Enabled
Windows Azure MSDN   AzureCloud  ffb  Enabled  True

At this point, it’s already feeling familiar. It’s “az noun verb” and there’s an optional --output parameter. If I don’t include --output, by default I’ll get JSON…which I can then query with JMESPath if I’d like. (Those of us who are older may be having a little XML/XPath/XQuery déjà vu.)
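As a small illustration of that JMESPath support (my example, not from the original post), --query lets you shape the JSON before it’s formatted:

# show just the subscription names and whether each one is the default
az account list --query "[].{Name:name, IsDefault:isDefault}" --output table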

I can use JSON, TSV, tables, and even “colorized json” or JSONC.

C:\Users\scott\Desktop> az appservice plan list --output table   
AppServicePlanName   GeoRegion         Kind  Location          Status
-------------------  ----------------  ----  ----------------  ------
Default1             North Central US  app   North Central US  Ready
Default1             Southeast Asia    app   Southeast Asia    Ready
Default1             West Europe       app   West Europe       Ready
DefaultServerFarm    West US           app   West US           Ready
myEchoHostingPlan    North Central US  app   North Central US  Ready

I can make and manage basically anything. Here I’ll make a new App Service Plan and put two web apps in it, all managed in a group:

# Create a resource group (a location is required)
az group create -n MyResourceGroup -l westus

# Create an Azure App Service Plan that we can use to host multiple web apps
az appservice plan create -n MyAppServicePlan -g MyResourceGroup

# Create two web apps within the plan (note: name param must be a unique DNS entry)
az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan
az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan

You might be thinking this looks like PowerShell. Why not use PowerShell? Remember this isn’t for Windows primarily. There’s a ton of DevOps happening in Python on Linux/Mac and this fits very nicely into that. For those of us (myself included) who are PowerShell fans, PowerShell has massive and complete Azure Support. Of course, while the bash folks will need to use JMESPath to simulate passing objects around, PowerShell can keep on keeping on. There’s a command line for everyone.

It’s easy to get started with the CLI at http://aka.ms/CLI and learn about the command line with docs and samples. Check out topics like installing and updating the CLI, working with Virtual Machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and network, and deploying Azure Web Apps – and let them know what you think at [email protected]. Also, as always, the Azure CLI 2.0 is open source and on GitHub.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2016 Scott Hanselman. All rights reserved.
     


Azure App Service Secrets and Web Site Hidden Gems

I just discovered that you can see a preview (almost like a daily build) of the Azure Portal if you go to https://preview.portal.azure.com instead of https://portal.azure.com. Sometimes the changes are big, sometimes they are subtle. It feels faster to me.

Azure Preview Portal

A few days ago I blogged that I had found a number of things in Azure that I wasn’t previously aware of, like “Metrics per instance (App Service)”, which is DEEPLY useful if you run more than one Web App inside an App Service Plan. Remember, an App Service Plan is basically a VM and you can run as many Websites, Docker containers, Azure Functions, Mobile Apps, API Apps, Logic Apps, and whatever else you can fit in there. Density is the word of the day.

Azure App Service Secrets and Hidden Gems

A bunch of folks agreed that there were some real hidden gems worth exploring so I thought I’d take a moment and do just that. Here’s a few of the things that I’m continuously amazed are included for free with App Service.

Console

The Console option under Development Tools

There’s a web-based console that you can access from the Azure Portal to explore your apps!

Live HTML5 Console within the Azure Portal

This is basically an HTML 5 bash prompt. I find it useful to double check the contents of certain files in Production, and confirm environment variables are set. I also, for some reason, find it comforting to see that my “cloud web site” actually lives on Drive D:. It calms me to know the Cloud has a D Drive.

App Service Editor

App Service Editor

App Service Editor is the editor codenamed “Monaco” that powers Visual Studio Code. It’s amazing and few people know about it. I use it to make quick updates to production, although you do need to be aware that if you have Continuous Deployment enabled your changes will eventually get overwritten.

It's like a whole "IDE in the Cloud"

Testing in Production – (A/B Testing)

This is an amazing feature that not enough people know about. So, I’m assuming you are aware of Staging Slots? These are things like dev-, test-, or staging- that you can pull from a different branch during CI/CD, or just a separate but near-identical website that runs on the same hardware. The REAL magic is the Testing in Production feature.

Once you have a slot – I have one here for the Staging Site for BabySmash – you have the option to just “swap” between staging and production…OR…you can set a percentage of traffic you want to go to each slot!

Note that traffic is pinned to a slot for the life of a client session, so you don’t have to worry about folks bouncing around if you change the UI or something.

Why is this insanely powerful? You can even make – for example – a “beta” slot and have your customers opt in to a beta! And you don’t have to write any code to enable this! MyApp.com/?x-ms-routing-name=beta would get them there and MyApp.com/?x-ms-routing-name=self always points to Production.
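If you want to see the pinning in action, a quick check with curl (my own aside, with a placeholder host name) shows the response setting an x-ms-routing-name cookie that keeps the client on the chosen slot:

# request the beta slot and look for the x-ms-routing-name cookie in the response headers
curl -I "https://MyApp.com/?x-ms-routing-name=beta"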

Testing in Production 

You could also write a PowerShell script that would slowly move traffic in increments. That way you could ramp up traffic to staging from 5% to 100% – assuming you see no errors or issues.

# Name of the production site and the slot we want to ramp traffic into
$siteName = "yourProductionSiteName"

# Build a ramp-up rule that starts by sending 10% of traffic to the slot
$rule1 = New-Object Microsoft.WindowsAzure.Commands.Utilities.Websites.Services.WebEntities.RampUpRule
$rule1.ActionHostName = "yourSlotSiteName"
$rule1.ReroutePercentage = 10;
$rule1.Name = "stage"

# Every 10 minutes, move traffic in 5% steps, staying between 5% and 50%
$rule1.ChangeIntervalInMinutes = 10;
$rule1.ChangeStep = 5;
$rule1.MinReroutePercentage = 5;
$rule1.MaxReroutePercentage = 50;
$rule1.ChangeDecisionCallbackUrl = "callBackUrlOfYourChoice-OptionalThatDecidesIfYouShouldKeepGoing"

# Apply the routing rule to the production slot
Set-AzureWebsite $siteName -Slot Production -RoutingRules $rule1

All this stuff is built in to the Standard Azure App Service Plan.

Easy and Cheap Databases

A number of folks in the comments of my last post asked about the 20 websites I have running on my single App Service Plan. Some felt I may have been disingenuous about the pricing and assumed I have a bunch of SQL Server databases behind my sites, or that a site can’t be useful without a SQL Server.

There’s a few things there to answer. My sites are many different techs, Node.js, Ruby, C# and ASP.NET MVC, and static sites. For example:

  • Running the Ruby Middleman Static Site Generator on Microsoft Azure runs in the cloud when I check code into GitHub but deploys a static site.
  • The Hanselminutes Podcast uses WebMatrix and ASP.NET Web Pages’ “SQL Compact Edition.” This database runs out of a single file that’s stored locally.
  • One of my node.js sites uses SQLite for its data.
  • One ASP.NET application uses “Azure MySQL in-app” that is also included in Azure App Service. You get a single modest MySQL database that runs in the context of your App Service. It’s not super fast and meant for development, but with a little caching it’s very workable.
  • One node.js app thinks it is talking MongoDB but actually it’s talking via MongoDB protocol support in Azure DocumentDB. You can create an Azure noSQL DocumentDB and point any app that speaks Mongo to it and it Just Works.

There’s a number of options, including Easy Tables for your Mobile Apps. Check out http://mobile.azure.com to learn more about how you can get a VERY quick and easy backend for mobile (or web) apps.

Azure App Service Extensions

If you have used Git deploy to an Azure App Service, you likely noticed a “Sidecar” website that your app has. I have babysmash.com which is actually babysmash.azurewebsites.net, right? There’s also babysmash.scm.azurewebsites.net that you can’t access. That sidecar site (when I’m authenticated) has a ton of easy REST GET APIs I can call to get my process list, files, deployments, and lots more. This is all powered by Kudu, which is open source by the way.
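For example, with your deployment credentials you can curl the Kudu REST endpoints directly. The /api/processes and /api/deployments routes below are part of the public Kudu API; the site name and user are placeholders:

# list the processes running in the App Service sandbox
curl -u $DEPLOY_USER https://yoursite.scm.azurewebsites.net/api/processes

# list recent deployments
curl -u $DEPLOY_USER https://yoursite.scm.azurewebsites.net/api/deployments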

The Azure Kudu sidecar site

Kudu’s sidecar site is a “site extension.” You can not only write your own Azure Site Extension (they are just NuGet packages!) but it turns out there are a TON of useful, already vetted and published extensions you can add to your site today. Those extensions live at http://www.siteextensions.net but you add them directly from the Azure Portal. There are 84 at the time of this blog post.

Azure Site Extensions include:

  • phpMyAdmin – for Admin of MySQL over the web
  • Azure Let’s Encrypt – Easy install of Let’s Encrypt SSL certs!
  • Image Optimizer – Automatic squishing of your site’s JPGs and PNGs because you know you forgot!
  • GoLang Support – Azure doesn’t officially support Go in Azure Web Apps…but with this extension it works fine!
  • Jekyll – Easy static site generation in Azure
  • Brotli HTTP Compression

You get the idea.

Diagnostics

I just discovered this “uptime” blade within my Web Apps in the Azure Portal. It tells me my app’s uptime and if it’s not 100%, it tells me why not and when!

Azure Diagnostics and Uptime

Again, none of this stuff costs extra. You can add Site Extensions or explore your apps to the limit of the underlying App Service Plan. I’m doing all this on a single Standard 1 (S1) App Service Plan.


Sponsor: Excited about the future in ASP.NET? The folks at Progress held an awesome webinar which gives a 360⁰ view of the new ASP.NET Core and how it compares to WebForms and MVC. Watch it now on demand!


© 2016 Scott Hanselman. All rights reserved.
     


Penny Pinching in the Cloud: Running and Managing LOTS of Web Apps on a single Azure App Service

I’ve blogged before about “penny pinching in the cloud.” I’ll update that series for 2017 soon, but the underlying concepts still apply. Many of you are still using bigger virtual machines than you need when doing IaaS (Infrastructure as a Service), and when doing PaaS (Platform as a Service) many folks are running “one website per App Service Plan.” That’s super expensive.

Remember that you can fit as many web applications as memory and CPU will allow into an Azure App Service Plan. An “App Service Plan” in Azure is effectively the Virtual Machine under your Web Apps. You don’t need to think about it as it’s totally managed and hidden – but – if you choose to think about it you’ll be able to squeeze more out of it and you’ll pay less.

For example, I have 20 web applications running in a plan I named “DefaultServerFarm.” It’s a Small Standard Plan (S1) and I pay about $70 a month. Some folks use a Basic (B1) plan if they don’t need to scale out and that’s about $50 a month. Both B1 and S1 support “unlimited” web apps within them, to the limits of memory. That’s what allows me to run 20 modest (but real) sites on the one plan and that’s what makes it a good deal from a pricing perspective for me.
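As an aside, you can see that density from the command line too. This assumes the current “az webapp” syntax (the preview CLI of the time used “az appservice web”) and a placeholder resource group:

# list every web app in the resource group that holds the plan
az webapp list --resource-group MyResourceGroup --output table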

I logged in to the Azure Portal recently and noticed the CPU percentage on my plan was higher than usual and higher than I’d like.

Why is that web app using so much CPU?

That’s the CPU of the machine “under” my 20 sites. I can click here on my App Service Plan’s “blade” to see the underlying sites, or just click “Apps” in the blade menu.

Running 20 apps in a Single Azure App Service

However, when I’m looking at an app that lives within my plan, there are two super powerful menu items to check out. One is called “Metrics per instance (Apps)” and the other is “Metrics per instance (App Service).” Click the latter option. For many of you it’s going to become your favorite area in the Azure Portal. It was a game changer for me as it gave me the internal insight I needed to make sure I can get maximum density in my plan (thereby saving the most money).

Metrics per Instance - App Service Plan

I click here and see “Sites in App Service Plan.”

20 sites in a single plan

I can see that over the last few days my CPU has been going up and up…

The CPU is going up and up over a few days

I can see by site:

A graph showing ALL 20 sites and their CPU

So now I can filter by site and I see that it’s ONE site that’s going nuts.

One site is using all the CPU

I can then dig in, go to the main CPU chart, and see exactly when it started:

The site is using 2.12 days of CPU

I can change the scale

It started on Feb 11th

I had a WebJob stuck in a loop. I restarted it and will be monitoring, but for now I’m in a much better place for this one app.

Now it's calming down

Now if I check the App Service Plan itself, I can see everything has calmed down.

Things have calmed down after the one rogue site was restarted

The point here is that even though it’s “Platform as a Service” and we want a layer of abstraction, at no point are things HIDDEN from us. If you want to see the hardware, you can. If you want to see the process tree, you can. A good reminder.


Sponsor: Excited about the future in ASP.NET? The folks at Progress held an awesome webinar which gives a 360⁰ view of the new ASP.NET Core and how it compares to WebForms and MVC. Watch it now on demand!


© 2016 Scott Hanselman. All rights reserved.
     


Connecting my Particle Photon Internet of Things device to the Azure IoT Hub

Particle Photon connected to the cloud

My vacation continues. Yesterday I had shoulder surgery (adhesive capsulitis release) so today I’m messing around with Azure IoT Hub. I had some devices on my desk – some of which I had never really gotten around to exploring – and I thought I’d see if I could accomplish something.

I’ve got a Particle Photon here, as well as a Tessel 2, a LattePanda, Funduino, and Onion Omega. A few days ago I was able to get the Onion Omega to show my blood sugar on a small OLED screen, which was cool. Tonight I’m going to try to hook the Particle Photon up to the Azure IoT hub for monitoring.

The Photon is a tiny little device with Wi-Fi built in. It’s super easy to set up and it has a cloud-based IDE with tons of examples written in C and Node.js for you to use. The Particle Photon also has a Node.js-based command line. From there you can list out your Photons, see their available functions, and even call functions over the internet! A hacker’s delight, to be sure.

Here’s a standard “blink an LED” Hello world on a Photon. This one creates a cloud function called “led” and binds it to the “ledToggle” method. Those cloud methods take a string, so there’s no enum for the on/off command.

int led1 = D0;
int led2 = D7;

void setup() {
    pinMode(led1, OUTPUT);
    pinMode(led2, OUTPUT);
    Spark.function("led", ledToggle);
    digitalWrite(led1, LOW);
    digitalWrite(led2, LOW);
}

void loop() {
}

int ledToggle(String command) {
    if (command == "on") {
        digitalWrite(led1, HIGH);
        digitalWrite(led2, HIGH);
        return 1;
    }
    else if (command == "off") {
        digitalWrite(led1, LOW);
        digitalWrite(led2, LOW);
        return 0;
    }
    else {
        return -1;
    }
}

From the command line I can use the Particle command line interface (CLI) to enumerate my devices:

C:\Users\scott>particle list
hansel_photon [390039000647xxxxxxxxxxx] (Photon) is online
Functions:
int led(String args)

See how it doesn’t just enumerate devices, but also cloud methods that hang off devices? LOVE THIS.

I can get a secret API Key from the Particle Photon’s cloud based Console. Then using my Device ID and auth token I can call the method…with an HTTP request! How much easier could this be?

C:\Users\scott\>curl https://api.particle.io/v1/devices/390039000647xxxxxxxxx/led -d access_token=31fa2e6f --insecure -d arg="on"
{
  "id": "390039000647xxxxxxxxx",
  "last_app": "",
  "connected": true,
  "return_value": 1
}

At this moment the LED on the Particle Photon turns on. I’m going to change the code a little and add some telemetry using the Particle’s online code editor.

Editing Particle Photon Code online

They’ve got a great online code editor, but I could also edit and compile the code locally:

C:\Users\scott\Desktop>particle compile photon webconnected.ino

Compiling code for photon

Including:
webconnected.ino
attempting to compile firmware
downloading binary from: /v1/binaries/5858b74667ddf87fb2a2df8f
saving to: photon_firmware_1482209089877.bin
Memory use:
text data bss dec hex filename
6156 12 1488 7656 1de8
Compile succeeded.
Saved firmware to: C:\Users\scott\Desktop\photon_firmware_1482209089877.bin

I’ll change the code to announce an “Event” when I turn on the LED.

if (command == "on") {
    digitalWrite(led1, HIGH);
    digitalWrite(led2, HIGH);

    String data = "Amazing! Some Data would be here! The light is on.";
    Particle.publish("ledBlinked", data);

    return 1;
}

I can head back over to the http://console.particle.io and see these events live on the web:
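You can watch the same event stream from the Particle CLI as well; “particle subscribe” tails published events by name (my own aside, not from the post):

# stream the ledBlinked events published by the firmware above
particle subscribe ledBlinked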

Particle Photons have great online charts

Particle also supports integration with Google Cloud and Azure IoT Hub. Azure IoT Hub allows you to manage billions of devices and all their many billions of events. I just have a few, but we all have to start somewhere. 😉

I created a free Azure IoT Hub in my Azure Account…

Azure IoT Hub has charts and graphs built in

And made a shared access policy for my Particle Devices.

Be sure to set all the Access Policy Permissions you need

Then I told Particle about Azure in their Integrations system.

Particle has Azure IoT Hub integration built in

The Azure IoT SDKs on GitHub at https://github.com/Azure/azure-iot-sdks/releases have both a Windows-based Azure IoT Explorer and a command-line one called IoT Hub Explorer.

I logged in to the IoT Hub Explorer using the connection string from the Azure Portal:

iothub-explorer login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=rdWUVMXs="

Then I’ll run “iothub-explorer monitor-events” passing in the device ID and the connection string for the shared access policy. Monitor-events is cool because it’ll hang and just output the events as they’re flowing through the whole system.
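The invocation looks roughly like this; the device ID is a placeholder and the connection string is the same one used for the login above:

# tail device-to-cloud events for a single device as they arrive in the hub
iothub-explorer monitor-events myPhotonDevice --login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=..."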

IoTHub-Explorer monitor-events command line

So I’m able to call methods on the Particle using their cloud, and monitor events from within Azure IoT Hub. I can explore diagnostics data and query huge amounts of device-to-cloud data that would potentially flow in from my hardware devices.

The IoT Hub Limits are very generous for free/hobbyist users as we learn to develop. I haven’t paid anything yet. However, it can scale to thousands of messages a second per unit! That means millions of messages a second if you need it.

I can definitely see how the value of an IoT Hub solution like this would add up quickly after you’ve got more than one device. Text files don’t really scale. Even if I just IoT’ed up my house, it would be nice to have all that data flowing into a single hub I could manage and query securely.


Sponsor: Big thanks to Telerik! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is the top tech to know. Check it out!


© 2016 Scott Hanselman. All rights reserved.
     


NoSQL .NET Core development using a local Azure DocumentDB Emulator

I was hanging out with Miguel de Icaza in New York a few weeks ago and he was sharing with me his ongoing love affair with a NoSQL database called Azure DocumentDB. I’ve looked at it a few times over the last year or so and thought it was cool, but I didn’t feel like using it for a few reasons:

  • Can’t develop locally – I’m often in low-bandwidth or airplane situations
  • No MongoDB support – I have existing apps written in Node that use Mongo
  • No .NET Core support – I’m doing mostly cross-platform .NET Core apps

Miguel told me to take a closer look. Looks like things have changed! DocumentDB now has:

  • Free local DocumentDB Emulator – I asked and this is the SAME code that runs in Azure with just changes like using the local file system for persistence, etc. It’s an “emulator” but it’s really essentially the same core engine code. There is no cost and no sign-in for the local DocumentDB emulator.
  • MongoDB protocol support – This is amazing. I literally took an existing Node app, downloaded MongoChef and copied my collection over into Azure using a standard MongoDB connection string, then pointed my app at DocumentDB and it just worked. It’s using DocumentDB for storage though, which gives me
    • Better Latency
    • Turnkey global geo-replication (like literally a few clicks)
    • A performance SLA with <10ms read and <15ms write (Service Level Agreement)
    • Metrics and Resource Management like every Azure Service
  • DocumentDB .NET Core Preview SDK that has feature parity with the .NET Framework SDK.

There are also Node, .NET, Python, Java, and C++ SDKs for DocumentDB, so it’s nice for gaming on Unity, Web Apps, or any .NET app…including Xamarin mobile apps on iOS and Android, which is why Miguel is so hyped on it.

Azure DocumentDB Local Quick Start

I wanted to see how quickly I could get started. I spoke with the PM for the project on Azure Friday and downloaded and installed the local emulator. The lead on the project said it’s Windows for now but they are looking for cross-platform solutions. After it was installed it popped up my web browser with a local web page – I wish more development tools would have such clean Quick Starts. There’s also a nice quick start on using DocumentDB with ASP.NET MVC.

NOTE: This is a 0.1.0 release. Definitely Alpha level. For example, the sample included looks like it had the package name changed at some point so it didn’t line up. I had to change “Microsoft.Azure.Documents.Client”: “0.1.0” to “Microsoft.Azure.DocumentDB.Core”: “0.1.0-preview” so a little attention to detail issue there. I believe the intent is for stuff to Just Work. 😉

Nice DocumentDB Quick Start

The sample app is a pretty standard “ToDo” app:

ASP.NET MVC ToDo App using Azure Document DB local emulator

The local Emulator also includes a web-based local Data Explorer:

image

A Todo Item is really just a POCO (Plain Old CLR Object) like this:

namespace todo.Models
{
    using Newtonsoft.Json;
    public class Item
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }
        [JsonProperty(PropertyName = "name")]
        public string Name { get; set; }
        [JsonProperty(PropertyName = "description")]
        public string Description { get; set; }
        [JsonProperty(PropertyName = "isComplete")]
        public bool Completed { get; set; }
    }
}

The MVC Controller in the sample uses an underlying repository pattern so the code is super simple at that layer – as an example:

[ActionName("Index")]
public async Task<IActionResult> Index()
{
    var items = await DocumentDBRepository<Item>.GetItemsAsync(d => !d.Completed);
    return View(items);
}

[HttpPost]
[ActionName("Create")]
[ValidateAntiForgeryToken]
public async Task<ActionResult> CreateAsync([Bind("Id,Name,Description,Completed")] Item item)
{
    if (ModelState.IsValid)
    {
        await DocumentDBRepository<Item>.CreateItemAsync(item);
        return RedirectToAction("Index");
    }

    return View(item);
}

The Repository that’s abstracting away the complexities is itself not that complex. It’s about 120 lines of code, and really more like 60 when you remove whitespace and curly braces. And half of that is just initialization and setup. It’s also DocumentDBRepository<T>, so it’s a generic you can change to meet your tastes and use however you’d like.

The only thing that stands out to me in this sample is the loop in GetItemsAsync that’s hiding potential paging/chunking. It’s nice that you can pass in a predicate, but I’ll want to go and put in some paging logic for large collections.

public static async Task<T> GetItemAsync(string id)
{
    try
    {
        Document document = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
        return (T)(dynamic)document;
    }
    catch (DocumentClientException e)
    {
        if (e.StatusCode == System.Net.HttpStatusCode.NotFound){
            return null;
        }
        else {
            throw;
        }
    }
}
public static async Task<IEnumerable<T>> GetItemsAsync(Expression<Func<T, bool>> predicate)
{
    IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
        new FeedOptions { MaxItemCount = -1 })
        .Where(predicate)
        .AsDocumentQuery();
    List<T> results = new List<T>();
    while (query.HasMoreResults){
        results.AddRange(await query.ExecuteNextAsync<T>());
    }
    return results;
}
public static async Task<Document> CreateItemAsync(T item)
{
    return await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), item);
}
public static async Task<Document> UpdateItemAsync(string id, T item)
{
    return await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id), item);
}
public static async Task DeleteItemAsync(string id)
{
    await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
}
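
As a rough sketch of that paging logic (assuming the FeedOptions.RequestContinuation and FeedResponse.ResponseContinuation members from the full .NET Framework client carry over to this .NET Core preview SDK), a page-at-a-time variant of GetItemsAsync in the same repository class might look like this:

public static async Task<FeedResponse<T>> GetItemsPageAsync(Expression<Func<T, bool>> predicate, int pageSize, string continuationToken = null)
{
    var options = new FeedOptions
    {
        MaxItemCount = pageSize,                 // cap each round trip instead of -1 (unbounded)
        RequestContinuation = continuationToken  // resume where the previous page left off
    };

    IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), options)
        .Where(predicate)
        .AsDocumentQuery();

    // One ExecuteNextAsync call is one page; FeedResponse.ResponseContinuation carries
    // the token for the next page and is null once everything has been read.
    return await query.ExecuteNextAsync<T>();
}

The caller hands ResponseContinuation back in on the next call, so an MVC action could surface it as a query string parameter for simple “next page” links.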

I’m going to keep playing with this, but so far I’m pretty happy I can get this far while on an airplane. It’s really easy (given I’m preferring NoSQL over SQL lately) to just throw objects at it and store them.

In another post I’m going to look at RavenDB, another great NoSQL Document Database that works on .NET Core and is also Open Source.


Sponsor: Big thanks to Octopus Deploy! Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release


© 2016 Scott Hanselman. All rights reserved.
     


Publishing ASP.NET Core 1.1 applications to Azure using git deploy

I took an ASP.NET Core 1.0 application and updated its package references to ASP.NET Core 1.1 today. I also updated my SDK to the Current release (not the LTS – Long Term Support – version). Everything worked great building and running locally, but then I made a new Web App in Azure and did a quick git deploy.

Deploying ASP.NET Core 1.1 to Azure

Basically:

c:\aspnetcore11app> azure site create "aspnetcore11test" --location "West US" --git
c:\aspnetcore11app> git add .
c:\aspnetcore11app> git commit -m "initial"
c:\aspnetcore11app> git push azure master

Then I watched the logs go by. You can watch them at the command line or from within the Azure Portal under “Log Stream.”
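
Since I created the site with the classic “azure” xplat CLI above, tailing that log stream from the command line looks roughly like this (assuming that same CLI):

c:\aspnetcore11app> azure site log tail aspnetcore11test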

The deployment failed when Azure was building the app and got this error:

Build started 11/25/2016 6:51:54 AM.
2016-11-25T06:51:55         1>Project "D:\home\site\repository\aspnet1.1.xproj" on node 1 (Publish target(s)).
2016-11-25T06:51:55         1>D:\home\site\repository\aspnet1.1.xproj(7,3): error MSB4019: The imported project "D:\Program Files (x86)\dotnet\sdk\1.0.0-preview3-004056\Extensions\Microsoft\VisualStudio\v14.0\DotNet\Microsoft.DotNet.Props" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
2016-11-25T06:51:55         1>Done Building Project "D:\home\site\repository\aspnet1.1.xproj" (Publish target(s)) -- FAILED.
2016-11-25T06:51:55    

See where it says “1.0.0-preview3-004056” in there? Looks like Azure Web Apps has an SDK build that isn’t the one I have. Theirs has the new csproj/msbuild stuff and I’m staying a few steps back waiting for things to bake.

Remember that unless you specify the SDK version, .NET Core will use whatever is the latest one on the box.

Well, what do I have locally?

C:\aspnetcore11app>dotnet --version
1.0.0-preview2-1-003177

OK, then that’s what I need to use so I’ll make a global.json at the root of my project, then move my code into a folder under “src” for example.

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}

Here I’m “pinning” the same SDK version. This will tell Azure (or you, if I gave you the code) to use this SDK version when building.

Now I’ll redeploy with a git add, commit, and push. It works.

I think this should be easier, but I’m not sure how it should be easier. Does this mean that everyone should have a global.json with a preferred version? If you have no preferred version should Azure give a smarter error if it has an incompatible newer one?

The learning here for me is that not having a global.json basically says “*.*” to any cloud build servers and you’ll get whatever latest SDK they have. It could stop working any day.


Sponsor: Big thanks to Octopus Deploy! Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release!


© 2016 Scott Hanselman. All rights reserved.
     


Exploring Application Insights for disconnected or connected deep telemetry in ASP.NET Apps

Today on the ASP.NET Community Standup we learned about how you can use Application Insights in a disconnected scenario to get some cool – ahem – insights into your application.

Typically when someone sees Application Insights in the File | New Project dialog they assume it’s a feature that only works in Azure, that will create some account for you and send your data to the cloud. While App Insights totally does do a lot of cool stuff when you have a cloud-hosted app and it does add a lot of value, it also supports a very useful “SDK only” mode that’s totally offline.

Click “Add Application Insights to project” and then under “Send telemetry to” you click “Install SDK only” and no data gets sent to the cloud.

Application Insights dropdown - Install SDK only

Once you make your new project, you can learn more about AppInsights here.

For ASP.NET Core apps, Application Insights will include this package in your project.json – “Microsoft.ApplicationInsights.AspNetCore”: “1.0.0” and it’ll add itself into your middleware pipeline and register services in your Startup.cs. Remember, nothing is hidden in ASP.NET Core, so you can modify all this to your heart’s content.

if (env.IsDevelopment())
{
    // This will push telemetry data through Application Insights pipeline faster, allowing you to view results immediately.
    builder.AddApplicationInsightsSettings(developerMode: true);
}

Request telemetry and Exception telemetry are added separately, as you like.
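
As a rough sketch of what that wiring looks like in an ASP.NET Core 1.x Startup.cs (method names as of the 1.0.0 Microsoft.ApplicationInsights.AspNetCore package; later versions may differ):

public void ConfigureServices(IServiceCollection services)
{
    // Registers the TelemetryClient and picks up the instrumentation key (if any) from configuration
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Request telemetry goes early so it times the whole middleware pipeline
    app.UseApplicationInsightsRequestTelemetry();

    // Exception telemetry is registered separately, as mentioned above
    app.UseApplicationInsightsExceptionTelemetry();

    app.UseMvcWithDefaultRoute();
}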

Make sure you show the Application Insights Toolbar by right-clicking your toolbars and ensuring it’s checked.

Application Insights Dropdown Menu

The button it adds is actually pretty useful.

Application Insights Dropdown Menu

Run your app and click the Application Insights button.

NOTE: I’m using Visual Studio Community. That’s the free version of VS you can get at http://visualstudio.com/free. I use it exclusively and I think it’s pretty cool that this feature works just great on VS Community.

You’ll see the Search window open up in VS. You can keep it running while you debug and it’ll fill with Requests, Traces, Exceptions, etc.

image

I added an exception-throwing case to /about, and here’s what I see:

Searching for the last hour’s traces

I can dig into each issue, filter, search, and explore deeper:

Unhandled Exception

And once I’ve found something interesting, I can explore around it with the full details of the HTTP Request. I find the “telemetry 5 minutes before and after” query to be very powerful.

Track Operations

Notice where it says “dependencies for this operation?” That’s not dependencies like “Dependency Injection” – that’s larger system-wide dependencies like “my app depends on this web service.”

You can custom-instrument your application with the TrackDependency API if you like, and that will cause your system’s dependencies to light up in AppInsights charts and reports. Here’s a dumb example where I pretend that putting data in ViewData is a dependency. It should be calling a WebAPI or a Database or something.

var telemetry = new TelemetryClient();

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    ViewData["Message"] = "Your application description page.";
    success = true; // mark the pretend dependency call as successful so it isn't charted as a failure
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("ViewDataAsDependancy", "CallSomeStuff", startTime, timer.Elapsed, success);
}

Once I’m Tracking external Dependencies I can search for outliers, long durations, categorize them, and they’ll affect the generated charts and graphs if/when you do connect your App Insights to the cloud. Here’s what I see, after making this one code change. I could build this kind of stuff into all my external calls AND instrument the JavaScript as well. (note Client and Server in this chart.)

Application Insight Maps

And once it’s all there, I can query all the insights like this:

Querying Data Live

To be clear, though, you don’t have to host your app in the cloud. You can just send the telemetry to the cloud for analysis. Your existing on-premises IIS servers can run a “Status Monitor” app for instrumentation.

Application Insights Charts

There’s a TON of good data here and it’s REALLY easy to get started in any of these modes:

  • Totally offline (no cloud) and just query within Visual Studio
  • Somewhat online – Host your app locally and send telemetry to the cloud
  • Totally online – Host your app and telemetry in the cloud

All in all, I am pretty impressed. There are SDKs for Java, Node, Docker, and ASP.NET – there’s a LOT here. I’m going to dig deeper.


Sponsor: Big thanks to Telerik! 60+ ASP.NET Core controls for every need. The most complete UI toolset for x-platform responsive web and cloud development. Try now 30 days for free!


© 2016 Scott Hanselman. All rights reserved.
     


What is Serverless Computing? Exploring Azure Functions

There are a lot of confusing terms in the Cloud space. And that’s not counting the term “Cloud.” 😉

  • IaaS (Infrastructure as a Service) – Virtual Machines and stuff on demand.
  • PaaS (Platform as a Service) – You deploy your apps but try not to think about the Virtual Machines underneath. They exist, but we pretend they don’t until forced.
  • SaaS (Software as a Service) – Stuff like Office 365 and Gmail. You pay a subscription and you get email/whatever as a service. It Just Works.

“Serverless Computing” doesn’t really mean there’s no server. Serverless means there’s no server you need to worry about. That might sound like PaaS, but it’s higher level than that.

Serverless Computing is like this – Your code, a slider bar, and your credit card. You just have your function out there and it will scale as long as you can pay for it. It’s as close to “cloudy” as The Cloud can get.

Serverless Computing is like this. Your code, a slider bar, and your credit card.

With Platform as a Service, you might make a Node or C# app, check it into Git, deploy it to a Web Site/Application, and then you’ve got an endpoint. You might scale it up (get more CPU/Memory/Disk) or out (have 1, 2, n instances of the Web App) but it’s not seamless. It’s totally cool, to be clear, but you’re always aware of the servers.

New cloud systems like Amazon Lambda and Azure Functions have you upload some code and it’s running seconds later. You can have continuous jobs, functions that run on a triggered event, or make Web APIs or Webhooks that are just a function with a URL.

I’m going to see how quickly I can make a Web API with Serverless Computing.

I’ll go to http://functions.azure.com and make a new function. If you don’t have an account you can sign up free.

Getting started with Azure Functions

You can make a function in JavaScript or C#.

Getting started with Azure Functions - Create This Function

Once you’re into the Azure Function Editor, click “New Function” and you’ve got dozens of templates and code examples for things like:

  • Find a face in an image and store the rectangle of where the face is.
  • Run a function and comment on a GitHub issue when a GitHub webhook is triggered
  • Update a storage blob when an HTTP Request comes in
  • Load entities from a database or storage table

I figured I’d change the first example. It is a trigger that sees an image in storage, calls a cognitive services API to get the location of the face, then stores the data. I wanted to change it to:

  • Take an image as input from an HTTP Post
  • Draw a rectangle around the face
  • Return the new image

You can do this work from Git/GitHub but for easy stuff I’m literally doing it all in the browser. Here’s what it looks like.

Azure Functions can be done in the browser

I code and iterate and save and fail fast, fail often. Here’s the starter code I based it on. Remember that this is a starter function that runs on a triggered event, so note its Run()…I’m going to change this.

#r "Microsoft.WindowsAzure.Storage"
#r "Newtonsoft.Json"
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json;
using Microsoft.WindowsAzure.Storage.Table;
using System.IO; 
public static async Task Run(Stream image, string name, IAsyncCollector<FaceRectangle> outTable, TraceWriter log)
{
    string result = await CallVisionAPI(image); //STREAM
    log.Info(result); 
    if (String.IsNullOrEmpty(result))
    {
        return;
    }
    ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
    foreach (Face face in imageData.Faces)
    {
        var faceRectangle = face.FaceRectangle;
        faceRectangle.RowKey = Guid.NewGuid().ToString();
        faceRectangle.PartitionKey = "Functions";
        faceRectangle.ImageFile = name + ".jpg";
        await outTable.AddAsync(faceRectangle); 
    }
}
static async Task<string> CallVisionAPI(Stream image)
{
    using (var client = new HttpClient())
    {
        var content = new StreamContent(image);
        var url = "https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Faces";
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Environment.GetEnvironmentVariable("Vision_API_Subscription_Key"));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        var httpResponse = await client.PostAsync(url, content);
        if (httpResponse.StatusCode == HttpStatusCode.OK){
            return await httpResponse.Content.ReadAsStringAsync();
        }
    }
    return null;
}
public class ImageData {
    public List<Face> Faces { get; set; }
}
public class Face {
    public int Age { get; set; }
    public string Gender { get; set; }
    public FaceRectangle FaceRectangle { get; set; }
}
public class FaceRectangle : TableEntity {
    public string ImageFile { get; set; }
    public int Left { get; set; }
    public int Top { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

GOAL: I’ll change this Run() and make it listen for an HTTP request that contains an image, read the image that’s POSTed in (ya, I know, no validation), draw rectangles around detected faces, then return a new image.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log) {
var image = await req.Content.ReadAsStreamAsync();

As for the body of this function, I’m 20% sure I’m using too many MemoryStreams, but they are getting disposed, so take this code as an initial proof of concept. However, I DO need at least the two I have. Regardless, I’m happy to chat with those who know more, but it’s more subtle than even I thought. That said, basically call out to the API, get back some face data that looks like this:

2016-08-26T23:59:26.741 {"requestId":"8be222ff-98cc-4019-8038-c22eeffa63ed","metadata":{"width":2808,"height":1872,"format":"Jpeg"},"faces":[{"age":41,"gender":"Male","faceRectangle":{"left":1059,"top":671,"width":466,"height":466}},{"age":41,"gender":"Male","faceRectangle":{"left":1916,"top":702,"width":448,"height":448}}]}

Then take that data and DRAW a Rectangle over the faces detected.

// Note: this version also needs #r "System.Drawing" plus using System.Drawing and
// using System.Drawing.Imaging at the top of the function file for Image, Graphics, Pen, and ImageFormat.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    var image = await req.Content.ReadAsStreamAsync();
    MemoryStream mem = new MemoryStream();
    image.CopyTo(mem); //make a copy since one gets destroyed in the other API. Lame, I know.
    image.Position = 0;
    mem.Position = 0;
    
    string result = await CallVisionAPI(image); 
    log.Info(result); 
    if (String.IsNullOrEmpty(result)) {
        return req.CreateResponse(HttpStatusCode.BadRequest);
    }
    
    ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
    MemoryStream outputStream = new MemoryStream();
    using(Image maybeFace = Image.FromStream(mem, true))
    {
        using (Graphics g = Graphics.FromImage(maybeFace))
        {
            Pen yellowPen = new Pen(Color.Yellow, 4);
            foreach (Face face in imageData.Faces)
            {
                var faceRectangle = face.FaceRectangle;
                g.DrawRectangle(yellowPen, 
                    faceRectangle.Left, faceRectangle.Top, 
                    faceRectangle.Width, faceRectangle.Height);
            }
        }
        maybeFace.Save(outputStream, ImageFormat.Jpeg);
    }
    
    var response = new HttpResponseMessage()
    {
        Content = new ByteArrayContent(outputStream.ToArray()),
        StatusCode = HttpStatusCode.OK,
    };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
    return response;
}

 

Now I go into Postman and POST an image to my new Azure Function endpoint. Here I uploaded a flattering picture of me and an unflattering picture of The Oatmeal. He’s pretty in real life, just NOT HERE. 😉

Image Recognition with Azure Functions
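
If you’d rather skip Postman, a plain curl POST against the function URL does the same thing. This is just a sketch; substitute your own function app name, function name, and key (the code query string value is the function’s access key from the portal):

curl -X POST "https://<yourfunctionapp>.azurewebsites.net/api/<yourfunction>?code=<functionkey>" \
     -H "Content-Type: application/octet-stream" \
     --data-binary @face.jpg \
     -o face-with-rectangles.jpg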

So in just about 15 min, with no idea what I was doing when I started and armed with just my browser, Postman (also my browser), Google/StackOverflow, and Azure Functions, I’ve got a backend proof of concept.

Azure Functions supports Node.js, C#, F#, Python, PHP *and* Batch, Bash, and PowerShell, which really opens it up to basically anyone. You can use them for anything when you just want a function (or more) out there on the web. Send stuff to Slack, automate your house, update GitHub issues, act as a Webhook, etc. There’s some great 3rd party Azure Functions sample code in this GitHub repo as well. Inputs can come from basically anywhere and outputs can go basically anywhere. If those anywheres are also cloud services like Tables or Storage, you’ve got a “serverless backend” that is easy to scale.

I’m still learning, but I can see when I’d want a VM (total control) vs a Web App (near total control) vs a “Serverless” Azure Function (less control but I didn’t need it anyway, just wanted a function in the cloud.)


Sponsor: Aspose makes programming APIs for working with files, like: DOC, XLS, PPT, PDF and countless more.  Developers can use their products to create, convert, modify, or manage files in almost any way.  Aspose is a good company and they offer solid products.  Check them out, and download a free evaluation.


© 2016 Scott Hanselman. All rights reserved.
     

    foreach (Face face in imageData.Faces)
    {
        var faceRectangle = face.FaceRectangle;
        faceRectangle.RowKey = Guid.NewGuid().ToString();
        faceRectangle.PartitionKey = "Functions";
        faceRectangle.ImageFile = name + ".jpg";
        await outTable.AddAsync(faceRectangle); 
    }
    return req.CreateResponse(HttpStatusCode.OK, "Nice Job");  
}
static async Task<string> CallVisionAPI(Stream image)
{
    using (var client = new HttpClient())
    {
        var content = new StreamContent(image);
        var url = "https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Faces";
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Environment.GetEnvironmentVariable("Vision_API_Subscription_Key"));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        var httpResponse = await client.PostAsync(url, content);
        if (httpResponse.StatusCode == HttpStatusCode.OK){
            return await httpResponse.Content.ReadAsStringAsync();
        }
    }
    return null;
}
public class ImageData {
    public List<Face> Faces { get; set; }
}
public class Face {
    public int Age { get; set; }
    public string Gender { get; set; }
    public FaceRectangle FaceRectangle { get; set; }
}
public class FaceRectangle : TableEntity {
    public string ImageFile { get; set; }
    public int Left { get; set; }
    public int Top { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

GOAL: I'll change this Run() and make this listen for an HTTP request that contains an image, read the image that's POSTed in (ya, I know, no validation), draw rectangle around detected faces, then return a new image.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log) {

var image = await req.Content.ReadAsStreamAsync();

As for the body of this function, I'm 20% sure I'm using too many MemoryStreams but they are getting disposed so take this code as a initial proof of concept. However, I DO need at least the two I have. Regardless, happy to chat with those who know more, but it's more subtle than even I thought. That said, basically call out to the API, get back some face data that looks like this:

2016-08-26T23:59:26.741 {"requestId":"8be222ff-98cc-4019-8038-c22eeffa63ed","metadata":{"width":2808,"height":1872,"format":"Jpeg"},"faces":[{"age":41,"gender":"Male","faceRectangle":{"left":1059,"top":671,"width":466,"height":466}},{"age":41,"gender":"Male","faceRectangle":{"left":1916,"top":702,"width":448,"height":448}}]}

Then take that data and DRAW a Rectangle over the faces detected.

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    var image = await req.Content.ReadAsStreamAsync();
    MemoryStream mem = new MemoryStream();
    image.CopyTo(mem); //make a copy since one gets destroy in the other API. Lame, I know.
    image.Position = 0;
    mem.Position = 0;
    
    string result = await CallVisionAPI(image); 
    log.Info(result); 
    if (String.IsNullOrEmpty(result)) {
        return req.CreateResponse(HttpStatusCode.BadRequest);
    }
    
    ImageData imageData = JsonConvert.DeserializeObject<ImageData>(result);
    MemoryStream outputStream = new MemoryStream();
    using(Image maybeFace = Image.FromStream(mem, true))
    {
        using (Graphics g = Graphics.FromImage(maybeFace))
        {
            Pen yellowPen = new Pen(Color.Yellow, 4);
            foreach (Face face in imageData.Faces)
            {
                var faceRectangle = face.FaceRectangle;
                g.DrawRectangle(yellowPen, 
                    faceRectangle.Left, faceRectangle.Top, 
                    faceRectangle.Width, faceRectangle.Height);
            }
        }
        maybeFace.Save(outputStream, ImageFormat.Jpeg);
    }
    
    var response = new HttpResponseMessage()
    {
        Content = new ByteArrayContent(outputStream.ToArray()),
        StatusCode = HttpStatusCode.OK,
    };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
    return response;
}

 

Now I go into Postman and POST an image to my new Azure Function endpoint. Here I uploaded a flattering picture of me and unflattering picture of The Oatmeal. He's pretty in real life just NOT HERE. ;)

Image Recognition with Azure Functions

So in just about 15 min with no idea and armed with just my browser, Postman (also my browser), Google/StackOverflow, and Azure Functions I've got a backend proof of concept.

Azure Functions supports Node.js, C#, F#, Python, PHP *and* Batch, Bash, and PowerShell, which really opens it up to basically anyone. You can use them for anything when you just want a function (or more) out there on the web. Send stuff to Slack, automate your house, update GitHub issues, act as a Webhook, etc. There's some great 3d party Azure Functions sample code in this GitHub repo as well. Inputs can be from basically anywhere and outputs can be basically anywhere. If those anywheres are also cloud services like Tables or Storage, you've got a "serverless backed" that is easy to scale.

I'm still learning, but I can see when I'd want a VM (total control) vs a Web App (near total control) vs a "Serverless" Azure Function (less control but I didn't need it anyway, just wanted a function in the cloud.)


Sponsor: Aspose makes programming APIs for working with files, like: DOC, XLS, PPT, PDF and countless more.  Developers can use their products to create, convert, modify, or manage files in almost any way.  Aspose is a good company and they offer solid products.  Check them out, and download a free evaluation.


© 2016 Scott Hanselman. All rights reserved.