Exploring the preconfigured browser-based Linux Cloud Shell built into the Azure Portal

At BUILD a few weeks ago I did a demo of the Azure Cloud Shell, now in preview. It’s pretty fab and it’s built into the Azure Portal and lives in your browser. You don’t have to do anything, it’s just there whenever you need it. I’m trying to convince them to enable “Quake Mode” so it would pop up when you press ~ but they never listen to me. 😉

Animated Gif of the Azure Cloud Shell

Click the >_ shell icon in the top toolbar at http://portal.azure.com. The very first time you launch the Azure Cloud Shell it will ask you where you want your $home directory files to be persisted. They will live in your own Storage Account. Don’t worry about cost; remember that Azure Storage is like pennies a gig, so assuming you’re storing script files, figure it’s thousandths of pennies – a non-issue.

Where do you want your account files persisted to?

It’s pretty genius how it works, actually. Since you can set up an Azure Storage Account as a regular File Share (sharing to Mac, Linux, or Windows) it will just make a file share and mount it. The data you save in ~/clouddrive is persistent between sessions; the sessions themselves disappear if you don’t use them.
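
Under the hood it’s a standard SMB file share, so you can even mount that same share from your own Linux box. Here’s a minimal sketch; the storage account name, share name, and $STORAGE_KEY are placeholders you’d swap for your own values:

# Install the SMB/CIFS client tools (Ubuntu/Debian)
sudo apt-get install -y cifs-utils

# Mount the Azure Files share (hypothetical account/share names)
sudo mkdir -p /mnt/clouddrive
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/clouddrive \
  -o vers=3.0,username=mystorageaccount,password=$STORAGE_KEY,dir_mode=0777,file_mode=0777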

Now my Azure Cloud Shell Files are available anywhere

Today it’s got bash inside a real container. Here’s what lsb_release -a says:

[email protected]:~/clouddrive$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:        16.04
Codename:       xenial

Looks like Ubuntu xenial inside a container, all managed by an orchestrator within Azure Container Service. The shell is using xterm.js to make it all possible inside the browser. That means you can run vim, top, whatever makes you happy. Cloud shells include vim, emacs, npm, make, maven, pip, as well as docker, kubectl, sqlcmd, postgres, mysql, IPython, and even .NET Core’s command line SDK.

NOTE: Ctrl-v and Ctrl-c do not function as copy/paste on Windows machines [in the Portal using xterm.js]; please use Ctrl-Insert and Shift-Insert to copy/paste. Right-click copy/paste options are also available; however, this is subject to browser-specific clipboard access.

When you’re in there, of course the best part is that you can ssh into your Linux VMs. They say PowerShell is coming soon to the Cloud Shell, so I assume you’ll be able to remote PowerShell into Windows boxes.
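
For example (the VM name, resource group, and username here are hypothetical), I can look up a VM’s public IP with the built-in CLI and ssh straight in:

# Find the public IP of a VM
az vm list-ip-addresses -n MyLinuxVM -g MyResourceGroup -o table

# Then ssh in as usual - any keys saved under ~/clouddrive persist between sessions
ssh scott@<public-ip>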

The Cloud Shell has the Azure CLI (command line interface) built in and pre-configured and logged in. So I can hit the shell then (for example) get a list of my web apps, and restart one. Here I’m getting the names of my sites and their resource groups, then restarting my son’s hamster blog.

[email protected]:~/clouddrive$ az webapp list -o table
ResourceGroup               Location          State    DefaultHostName                             AppServicePlan     Name
--------------------------  ----------------  -------  ------------------------------------------  -----------------  ------------------------
Default-Web-WestUS          West US           Running  thisdeveloperslife.azurewebsites.net        DefaultServerFarm  thisdeveloperslife
Default-Web-WestUS          West US           Running  hanselmanlyncrelay.azurewebsites.net        DefaultServerFarm  hanselmanlyncrelay
Default-Web-WestUS          West US           Running  myhamsterblog.azurewebsites.net             DefaultServerFarm  myhamsterblog

[email protected]:~/clouddrive$ az webapp restart -n myhamsterblog -g "Default-Web-WestUS"

Pretty cool. I’m going to keep exploring. I like the way the Azure Portal is going from a GUI and DevOps dashboard perspective, and it’s also nice to have a CLI preconfigured whenever I need it.


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now!


© 2017 Scott Hanselman. All rights reserved.

Penny Pinching in the Cloud: Lift and Shift vs App Services – When a VM in the Cloud isn’t what you want

I got an interesting question today. This is actually an extremely common one so I thought I’d take a bit to explore it. It’s worth noting that I don’t know the result of this blog post. That is, I don’t know if I’ll be right or not, and I’m not going to edit it. Let’s see how this goes!

The individual emailed, said they were new to Azure, and asked:

Question for you.  (and we may have made a mistake – some opinions and help needed)
A month or so ago, we setup a full up Win2016 server on Azure, with the idea that it would host a SQL server as well two IIS web sites

Long story short, they were mired in the setup of IIS on Win2016, messing with ports, yada yada yada.

All they wanted was:

  • The ability to right-click publish from Visual Studio for two sites.
  • Management of a SQL Database from SQL Management Studio.

This is a classic “lift and shift” story. Someone has a VM locally or under their desk or in hosting, so they figure they’ll move it to the cloud. They LIFT the site as a Virtual Machine and SHIFT it to the cloud.

For many, this is a totally reasonable and logical thing to do. If you did this and things work for you, fab, and congrats. However, if, at this point, you’re finding the whole “Cloud” thing to be underwhelming, it’s likely because you’re not really using the cloud, you’ve just moved a VM into a giant host. You still have to feed and water the VM and deal with its incessant needs. This is likely NOT what you wanted to do. You just want your app running.

Making a VM to do Everything

If I go into Azure and make a new Virtual Machine (Linux or Windows) it’s important to remember that I’m now responsible for giving that VM a loving home and a place to poop. Just making sure you’re still reading.

NOTE: If you’re making a Windows VM and you already have a Windows license you can save like 40%, so be aware of that, but I’ll assume they didn’t have a license.

You can check out the Pricing Calculator if you like, but I’ll just go and actually set up the VM and see what the Azure Portal says. Note that it’s going to need to be beefy enough for two websites AND a SQL Server, per the requirements from before.

Pricing for VMs in Azure

For a SQL Server and two sites I might want the second or third choice here, which isn’t too bad given they have SSDs and lots of RAM. But again, you’re responsible for them. Not to mention you have ONE VM, so your web server and SQL Server database are living on that one machine. If anything fails, it’s over. You’re also possibly giving up perf as you’re sharing resources.

App Service Plans with Web Sites/Apps and SQL Azure Server

An “App Service Plan” on Azure is a fancy word for “a VM you don’t need to worry about.” You can host as many Web Apps, Mobile Apps/Backends, Logic Apps and stuff in one as you like, barring perf or memory issues. I have between 19 and 20 small websites in one Small App Service Plan. So, to be clear, you put as many App Services as you’d like into one App Service Plan.

When you check out the pricing tier for an App Service Plan, be sure to View All and really explore and think about your options. Some include support for custom domains and SSL, others have 50 backups a day, or support BizTalk Services, etc. They start at Free, go to Shared, and then Basic, Standard, etc. Best part is that you can scale these up and down. If I go from a Small to a Medium App Service Plan, every App on the Plan gets better.

However, we don’t need this plan to host a SQL Server, remember? This is going to be a plan that we’ll use to host those two websites. AND we can use the same App Service Plan for staging slots (dev/test/staging/production) if we like. So just get the plan that works for your sites, today. Unlike a VM, you can change it whenever.
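
That “change it whenever” is scriptable, too. Here’s a hedged sketch with the Azure CLI (the plan and group names are made up):

# Scale the whole plan from Small to Medium - every app in it gets the bump
az appservice plan update -n MyAppServicePlan -g MyResourceGroup --sku S2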

App Service Plan pricing

SQL Server on Azure is similar. You make a SQL Server Database that is hosted on a SQL Server that supports the number of Database Throughput Units (DTUs) you need. Again, because it’s the capital-C Cloud, I can change the size anytime. I can even script it and turn it up and down on the weekends. Whatever saves me money!

SQL Azure Pricing

I can scale the SQL Server from $5 a month to bajillions, and everything in between.
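
As a sketch of that weekend penny-pinching (the server, database, and tier names are placeholders):

# Scale down to Basic on Friday night...
az sql db update -g MyResourceGroup -s myserver -n mydb --service-objective Basic
# ...and back up to Standard S2 on Monday morning
az sql db update -g MyResourceGroup -s myserver -n mydb --service-objective S2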

What’s the difference here?

First, we started here:

  • VM in the Cloud: At the start we had “A VM in the Cloud.” I have total control over the Virtual Machine, which is good, but I have total control over the Virtual Machine, which is bad. I can scale up or out, but just as one Unit, unless I split things up into three VMs.

Now we’ve got:

  • IIS/Web Server in the Cloud: I don’t have to think about the underlying OS or keeping it patched. I can use Linux or Windows if I like, and I can run PHP, Ruby, Java, .NET, and on and on in an Azure App Service. I can put lots of sites in one Plan, and the publishing endpoint for Visual Studio is automatically configured. I can use Git for deployment as well.
  • SQL Server in the Cloud: The SQL Server is managed, backed up, and independently scalable.

This is a slightly more “cloudy” way of doing things. It’s not microservices and independently scalable containers, but it does give you:

  • Independently scalable tiers (pricing and CPU and Memory and disk)
  • Lots of automatic benefits – backups, custom domains, SSL certs, Git deploy, App Service Extensions, and on and on. Dozens of features.
  • Control over pricing that is scriptable. You could write scripts to really pinch pennies by scaling your units up and down based on time of day or month.

What are your thoughts on Lift and Shift to IaaS (Infrastructure as a Service) vs using PaaS (Platform as a Service)? What did I forget? (I’m sure lots!)


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.

Ruby on Rails on Azure App Service (Web Sites) with Linux (and Ubuntu on Windows 10)

Running Ruby on Rails on Windows has historically sucked. Most of the Ruby/Rails folks are Mac and Linux users and haven’t focused on getting Rails to be usable for daily development on Windows. There have been some heroic efforts by a number of volunteers to get Rails working with projects like RailsInstaller, but native modules and dependencies almost always cause problems. Even more, when you go to deploy your Rails app you’re likely using a Linux host so you may run into differences between operating systems.

Fast forward to today and Windows 10 has the Ubuntu-based “Windows Subsystem for Linux” (WSL) and the native bash shell, which means you can run real Linux ELF binaries on Windows natively without a Virtual Machine…so you should do your Windows-based Rails development in Bash on Windows.

Ruby on Rails development is great on Windows 10 because you’ve got Windows 10 handling the “windows” UI part and bash and Ubuntu handling the shell.

After I set it up I want to git deploy my app to Azure, easily.

Developing on Ruby on Rails on Windows 10 using WSL

Rails and Ruby folks can apt-get update and apt-get install ruby, or they can install rbenv or rvm as they like. These days rbenv is preferred.

Once you have Ubuntu on Windows 10 installed you can quickly install “rbenv” like this within Bash. Here I’m getting 2.3.0.

~$ git clone https://github.com/rbenv/rbenv.git ~/.rbenv
~$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
~$ echo 'eval "$(rbenv init -)"' >> ~/.bashrc
~$ exec $SHELL
~$ git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
~$ echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
~$ exec $SHELL
~$ rbenv install 2.3.0
~$ rbenv global 2.3.0
~$ ruby -v
~$ gem install bundler
~$ rbenv rehash

Here’s a screenshot mid-process on my SurfaceBook. This build/install step takes a while and hits the disk a lot, FYI.

Installing rbenv on Windows under Ubuntu

At this point I’ve got Ruby; now I need Rails, as well as Node.js for the Rails Asset Pipeline. You can change the versions as appropriate.

$ curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
$ sudo apt-get install -y nodejs
$ gem install rails -v 5.0.1

You will likely also want either PostgreSQL or MySQL or Mongo, or you can use a Cloud DB like Azure DocumentDB.

When you’re developing on both Windows and Linux at the same time, you’ll likely want to keep your code in one place or the other, not both. I use the automatic mount point that WSL creates at /mnt/c so for this sample I’m at /mnt/c/Users/scott/Desktop/RailsonAzure which maps to a folder on my Windows desktop. You can be anywhere, just be aware of your CR/LF settings and stay in one world.
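
If you do cross between worlds, a hedged way to keep Git from fighting you over line endings is to normalize them on checkin:

# Keep LF in the repo; don't convert on checkout (run inside Bash/Ubuntu)
git config --global core.autocrlf input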

I did a “rails new .” and got it running locally. Here you can see Visual Studio Code with Ruby Extensions and my project open next to Bash on Windows.


After I’ve got a Rails app running and I’m able to develop cleanly, jumping between Visual Studio Code on Windows and the Bash prompt within Ubuntu, I want to deploy the app to the web.

Since this is a simple “Hello World” default Rails app, I can’t deploy it somewhere where the Rails Environment is Production. There’s no route in routes.rb (the “Yay! You’re on Rails” message is development-time only) and there’s no SECRET_KEY_BASE environment variable set, which is used to verify signed cookies. I’ll need to add those two things. I’ll change routes.rb quickly to just use the default Welcome page for this demo, like this:

Rails.application.routes.draw do
  # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html
    get '/' => "rails/welcome#index"
end

And I’ll add the SECRET_KEY_BASE in as an App Setting/ENV var in the Azure portal when I make my backend, below.

Deploying Ruby on Rails App to Azure App Service on Linux

From the New menu in the Azure portal, choose Web App on Linux (in preview as of the time I wrote this) from the Web + Mobile option. This will make an App Service Plan that has an App within it. There are a bunch of application stacks you can use here including node.js, PHP, .NET Core, and Ruby.

NOTE: A few glossary and definition points. Azure App Service is the Azure PaaS (Platform as a Service). You run Web Apps on Azure App Service. An Azure App Service Plan is the underlying Virtual Machine (small, medium, large, etc.) that hosts n number of App Services/Web Sites. I have 20 App Services/Web Sites running under an App Service Plan with a Small VM. By default this is Windows, but it can run PHP, Python, Node, .NET, etc. In this blog post I’m using an App Service Plan that runs Linux and hosts Docker containers. My Rails app will live inside that App Service and you can find the Dockerfiles and other info here https://github.com/Azure-App-Service/ruby or use your own Docker image.

Here you can see my Azure App Service that I’ll now deploy to using Git. I could also FTP.

Ruby on Rails on Azure

I went into Deployment Options and set up a local (to Azure) git repo. Now I can see that under Overview.


On my local bash I add azure as a remote. This can be set up however your workflow is set up. In this case, Git is FTP for code.

$ git remote add azure https:[email protected]:443/RubyOnAzureAppService.git
$ git add .
$ git commit -m "initial"
$ git push azure master

This starts the deployment as the code is pushed to Azure.

Azure deploying the Rails app

IMPORTANT: I will also add RAILS_ENV=production and a SECRET_KEY_BASE to my Azure Application Settings. You can make a new secret with “rake secret”.
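
If you’d rather script that than click through the portal, here’s a rough sketch with the Azure CLI (the app and group names are hypothetical, and the exact command shape may differ while this is all in preview):

# Generate a secret locally, then push both settings to the App Service
SECRET=$(rake secret)
az webapp config appsettings set -n RubyOnAzureAppService -g MyResourceGroup \
  --settings RAILS_ENV=production SECRET_KEY_BASE=$SECRET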

If I’m having trouble I can turn on Application Logging, Web Server Logging, and Detailed Error Messages under Diagnostic Logs then FTP into the App Service and look at the logs.

FTPing into Azure to look at logs

This is all in Preview so you’ll likely run into issues. They are updating the underlying systems very often. Some gotchas I hit:

  • Deploying/redeploying requires an explicit site restart, today. I hear that’ll be fixed soon.
  • I had to dig log files out via FTP. They are going to expose logs in the portal.
  • I used the Kudu “sidecar” site at mysite.scm.azurewebsites.net to get shell access to the Kudu container, but I’d like to be able to ssh into or get access to the actual running container from the Azure Portal one day.

That said, if you’d like more internal details on how this works, you can watch a session from Connect() last year with developer Nazim Lala. Thanks to James Christianson for his debugging help!


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now


© 2017 Scott Hanselman. All rights reserved.

Exploring the new DevOps – Azure Command Line Interface 2.0 (CLI)

Azure CLI 2.0

I’m a huge fan of the command line, and sometimes I feel like Windows people are missing out on the power of text mode. Fortunately, today Windows 10 has bash (via Ubuntu on Windows 10), PowerShell, and “classic” CMD. I use all three, myself.

Five years ago I started managing my Azure cloud web apps using the Azure CLI. I’ve been a huge fan of it ever since. It was written in node.js, it worked the same everywhere, and it got the job done.

Fast forward to today and the Azure team just announced a complete Azure CLI re-write, and now 2.0 is out today. Initially I was concerned that it had been re-written, and I didn’t understand the philosophy behind it. But I understand it now. While it works on Windows (my daily driver) it’s architecturally aligned with Mac and (mostly, IMHO) Linux users. It also supports new thinking around a modern command line, with support for things like JMESPath, a query language for JSON. It works well and clearly with the usual suspects of course, like grep, jq, cut, etc. It’s easily installed with pip: just get Python 3.5.x and then “pip install --user azure-cli”.

Linux people (feel free to check the script) can just do this curl, but it’s also in apt-get, of course.

curl -L https://aka.ms/InstallAzureCli | bash

NOTE: Since I already have the older Azure CLI 1.0 on my machine, it’s useful to note that these two CLIs can live on the same machine. The new one is “az” and the older is “azure,” so no problems there.

Or, for those of you who run individual Docker containers for your tools (or if you’re just wanting to explore), you can:

docker run -v ${HOME}:/root -it azuresdk/azure-cli-python:<version>

Then I just “az login” and I’m off! Here I’ll query my subscriptions:

C:\Users\scott\Desktop> az account list --output table
Name                 CloudName   Sub  State    IsDefault
-------------------  ----------  ---  -------  ---------
3-Month Free Trial   AzureCloud  0f3  Enabled
Pay-As-You-Go        AzureCloud  34c  Enabled
Windows Azure MSDN   AzureCloud  ffb  Enabled  True

At this point, it’s already feeling familiar. It’s “az noun verb” and there’s an optional --output parameter. If I don’t include --output, I’ll get JSON by default…which I can then query with JMESPath if I’d like. (Those of us who are older may be having a little XML/XPath/XQuery déjà vu.)

I can use JSON, TSV, tables, and even “colorized json” or JSONC.

C:\Users\scott\Desktop> az appservice plan list --output table
AppServicePlanName  GeoRegion         Kind  Location          Status
------------------  ----------------  ----  ----------------  ------
Default1            North Central US  app   North Central US  Ready
Default1            Southeast Asia    app   Southeast Asia    Ready
Default1            West Europe       app   West Europe       Ready
DefaultServerFarm   West US           app   West US           Ready
myEchoHostingPlan   North Central US  app   North Central US  Ready
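
And since --query is a global parameter, JMESPath combines with any of those output formats. A hedged one-liner (the property names are as I understand them in this preview) that reshapes the JSON into just the columns I care about:

az appservice web list --query "[].{Name:name, Group:resourceGroup}" --output table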

I can make and manage basically anything. Here I’ll make a new App Service Plan and put two web apps in it, all managed in a group:

# Create a resource group first (a location is required; pick your region)
az group create -n MyResourceGroup -l westus
# Create an Azure AppService that we can use to host multiple web apps 
az appservice plan create -n MyAppServicePlan -g MyResourceGroup

# Create two web apps within the appservice (note: name param must be a unique DNS entry)
az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan
az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan

You might be thinking this looks like PowerShell. Why not use PowerShell? Remember this isn’t for Windows primarily. There’s a ton of DevOps happening in Python on Linux/Mac and this fits very nicely into that. For those of us (myself included) who are PowerShell fans, PowerShell has massive and complete Azure Support. Of course, while the bash folks will need to use JMESPath to simulate passing objects around, PowerShell can keep on keeping on. There’s a command line for everyone.

It’s easy to get started with the CLI at http://aka.ms/CLI and learn about the command line with docs and samples. Check out topics like installing and updating the CLI, working with Virtual Machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and network, and deploying Azure Web Apps – and let them know what you think at [email protected]. Also, as always, the Azure CLI 2.0 is open source and on GitHub.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2016 Scott Hanselman. All rights reserved.

Azure App Service Secrets and Web Site Hidden Gems

I just discovered that you can see a preview (almost like a daily build) of the Azure Portal if you go to https://preview.portal.azure.com instead of https://portal.azure.com. Sometimes the changes are big, sometimes they are subtle. It feels faster to me.

Azure Preview Portal

A few days ago I blogged that I had found a number of things in Azure that I wasn’t previously aware of, like “Metrics per instance (App Service),” which is DEEPLY useful if you run more than one Web App inside an App Service Plan. Remember, an App Service Plan is basically a VM, and you can run as many Websites, Docker containers, Azure Functions, Mobile Apps, API Apps, and Logic Apps as you can fit in there. Density is the word of the day.

Azure App Service Secrets and Hidden Gems

A bunch of folks agreed that there were some real hidden gems worth exploring, so I thought I’d take a moment and do just that. Here are a few of the things that I’m continuously amazed are included for free with App Service.

Console

The Console option under Development Tools

There’s a web-based console that you can access from the Azure Portal to explore your apps!

Live HTML5 Console within the Azure Portal

This is basically an HTML 5 bash prompt. I find it useful to double check the contents of certain files in Production, and confirm environment variables are set. I also, for some reason, find it comforting to see that my “cloud web site” actually lives on Drive D:. It calms me to know the Cloud has a D Drive.

App Service Editor

App Service Editor

App Service Editor is the editor codenamed “Monaco” that powers Visual Studio Code. It’s amazing and few people know about it. I use it to make quick updates to production, although you do need to be aware that if you have Continuous Deployment enabled, your changes will eventually get overwritten.

It's like a whole "IDE in the Cloud"

Testing in Production – (A/B Testing)

This is an amazing feature that not enough people know about. So, I’m assuming you are aware of Staging Slots? These are things like dev-, test-, or staging- that you can pull from a different branch during CI/CD, or just a separate but near-identical website that runs on the same hardware. The REAL magic is the Testing in Production feature.

Once you have a slot – I have one here for the Staging Site for BabySmash – you have the option to just “swap” between staging and production…OR…you can set a percentage of traffic you want to go to each slot!

Note that traffic is pinned to a slot for the life of a client session, so you don’t have to worry about folks bouncing around if you change the UI or something.

Why is this insanely powerful? You can even make – for example – a “beta” slot and have your customers opt-in to a beta! And you don’t have to write any code to enable this! MyApp.com/?x-ms-routing-name=beta would get them there and MyApp.com/?x-ms-routing-name=self always points to Production.

Testing in Production 

You could also write a PowerShell script that would slowly move traffic in increments. That way you could ramp up traffic to staging from 5% to 100% – assuming you see no errors or issues.

$siteName = "yourProductionSiteName"
$rule1 = New-Object Microsoft.WindowsAzure.Commands.Utilities.Websites.Services.WebEntities.RampUpRule
$rule1.ActionHostName = "yourSlotSiteName"
$rule1.ReroutePercentage = 10;
$rule1.Name = "stage"

$rule1.ChangeIntervalInMinutes = 10;
$rule1.ChangeStep = 5;
$rule1.MinReroutePercentage = 5;
$rule1.MaxReroutePercentage = 50;
$rule1.ChangeDecisionCallbackUrl = "callBackUrlOfYourChoice-OptionalThatDecidesIfYouShouldKeepGoing"

Set-AzureWebsite $siteName -Slot Production -RoutingRules $rule1
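
The same ramp-up knob is exposed in the newer cross-platform Azure CLI too; a hedged sketch (the site, group, and slot names are placeholders):

# Send 10% of production traffic to the "stage" slot
az webapp traffic-routing set -n yourProductionSiteName -g MyResourceGroup --distribution stage=10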

All this stuff is built into the Standard Azure App Service Plan.

Easy and Cheap Databases

A number of folks in the comments of my last post asked about the 20 websites I have running on my single App Service Plan. Some felt I may have been disingenuous about the pricing and assumed I have a bunch of SQL Server databases behind my sites, or that a site can’t be useful without a SQL Server.

There are a few things there to answer. My sites use many different techs: Node.js, Ruby, C# and ASP.NET MVC, and static sites. For example:

  • Running the Ruby Middleman Static Site Generator on Microsoft Azure runs in the cloud when I check code into GitHub, but deploys a static site.
  • The Hanselminutes Podcast uses WebMatrix and ASP.NET Web Pages’ “SQL Compact Edition.” This database runs out of a single file that’s stored locally.
  • One of my node.js sites uses SQLite for its data.
  • One ASP.NET application uses “Azure MySQL in-app,” which is also included in Azure App Service. You get a single modest MySQL database that runs in the context of your App Service. It’s not super fast and it’s meant for development, but with a little caching it’s very workable.
  • One node.js app thinks it is talking MongoDB but actually it’s talking via MongoDB protocol support in Azure DocumentDB. You can create an Azure noSQL DocumentDB and point any app that speaks Mongo to it and it Just Works.

There are a number of options, including Easy Tables for your Mobile Apps. Check out http://mobile.azure.com to learn more about how you can get a VERY quick and easy backend for mobile (or web) apps.

Azure App Service Extensions

If you have used Git deploy to an Azure App Service, you likely noticed a “sidecar” website that your app has. I have babysmash.com, which is actually babysmash.azurewebsites.net, right? There’s also babysmash.scm.azurewebsites.net, which you can’t access anonymously. That sidecar site (when I’m authenticated) has a ton of easy REST GET APIs I can call to get my process list, files, deployments, and lots more. This is all powered by Kudu, which is open source by the way.

The Azure Kudu sidecar site

Kudu’s sidecar site is a “site extension.” You can not only write your own Azure Site Extension (they are just NuGet packages!) but it turns out there are a TON of useful, already vetted and published extensions you can add to your site today. Those extensions live at http://www.siteextensions.net but you add them directly from the Azure Portal. There are 84 at the time of this blog post.

Azure Site Extensions include:

  • phpMyAdmin – for Admin of MySQL over the web
  • Azure Let’s Encrypt – Easy install of Let’s Encrypt SSL certs!
  • Image Optimizer – Automatic squishing of your site’s JPGs and PNGs because you know you forgot!
  • GoLang Support – Azure doesn’t officially support Go in Azure Web Apps…but with this extension it works fine!
  • Jekyll – Easy static site generation in Azure
  • Brotli HTTP Compression

You get the idea.

Diagnostics

I just discovered this “uptime” blade within my Web Apps in the Azure Portal. It tells me my app’s uptime, and if it’s not 100%, it tells me why not and when!

Azure Diagnostics and Uptime

Again, none of this stuff costs extra. You can add Site Extensions or explore your apps to the limit of the underlying App Service Plan. I’m doing all this on a single Standard 1 (S1) App Service Plan.


Sponsor: Excited about the future in ASP.NET? The folks at Progress held an awesome webinar which gives a 360⁰ view of the new ASP.NET Core and how it compares to WebForms and MVC. Watch it now on demand!


© 2016 Scott Hanselman. All rights reserved.
     

I just discovered that you can see a preview (almost like a daily build) of the Azure Portal if you go to https://preview.portal.azure.com instead of https://portal.azure.com. Sometimes the changes are big, sometimes they are subtle. It feels faster to me.

Azure Preview Portal

A few days ago I blogged that I had found a number of things in Azure that I wasn't previously aware of like "Metrics per instance (App Service)" which is DEEPLY useful if you run more than one Web App inside an App Service Plan. Remember, an App Service Plan is basically a VM and you can run as many Websites, docker containers, Azure Functions, Mobile Apps, Api Apps, Logic apps, and whatever you can fit in there. Density is the word of the day.

Azure App Service Secrets and Hidden Gems

A bunch of folks agreed that there were some real hidden gems worth exploring so I thought I'd take a moment and do just that. Here's a few of the things that I'm continuously amazed are included for free with App Service.

Console

The Console option under Development Tools

There's a web-based console that you can access from the Azure Portal to explore your apps!

Live HTML5 Console within the Azure Portal

This is basically an HTML 5 bash prompt. I find it useful to double check the contents of certain files in Production, and confirm environment variables are set. I also, for some reason, find it comforting to see that my "cloud web site" actually lives on Drive D:. It calms me to know the Cloud has a D Drive.

App Service Editor

App Service Editor

App Service Editor is the editor that's codenamed "Monaco" that powers Visual Studio Code. It's amazing and few people know about it. I use it to make quick updates to production, although you do need to be aware if you have Continuous Deployment enabled that your changes will get eventually overwritten.

It's like a whole "IDE in the Cloud"

Testing in Production - (A/B Testing)

This is an amazing feature that not enough people know about. So, I'm assuming you are aware of Staging Slots? These are things like dev-, test-, or staging- that you can pull from a different branch during CI/CD, or just a separate but near-identical website that runs on the same hardware. The REAL magic is the Testing in Production feature.

Once you have a slot - I have one here for the Staging Site for BabySmash - you have the option to just "swap" between staging and production...OR...you can set a percentage of traffic you want to go to each slot!

Note that traffic is pinned to a slot for the life of a client session, so you don't have to worry about folks bouncing around if you change the UI or something.

Why is this insanely powerful? You can even make - for example - a "beta" slot and have your customers opt-in to a beta! And you don't have to write any code to enable this! MyApp.com/?x-ms-routing-name=beta would get them there and MyApp.com?x-ms-routing-name=self always points to Production.

Testing in Production 

You could also write a PowerShell script that would slowly move traffic in increments. That way you could ramp up traffic to staging from 5% to 100% - assuming you see no errors or issues.

$siteName = "yourProductionSiteName"

$rule1 = New-Object Microsoft.WindowsAzure.Commands.Utilities.Websites.Services.WebEntities.RampUpRule
$rule1.ActionHostName = "yourSlotSiteName"
$rule1.ReroutePercentage = 10;
$rule1.Name = "stage"

$rule1.ChangeIntervalInMinutes = 10;
$rule1.ChangeStep = 5;
$rule1.MinReroutePercentage = 5;
$rule1.MaxReroutePercentage = 50;
$rule1.ChangeDecisionCallbackUrl = "callBackUrlOfyourChoice-OptionalThatDecidesIfYouShoudlKeepGoing"

Set-AzureWebsite $siteName -Slot Production -RoutingRules $rule1

All this stuff is built-in to the Standard Azure AppServicePlan.

Easy and Cheap Databases

A number of folks in the comments of my last post asked about the 20 websites I have running on my single App Service Plan. Some felt I may have been disingenuous about the pricing and assumed I have a bunch of SQL Server databases behind my sites, or that a site can't be useful without a SQL Server.

There's a few things there to answer. My sites are many different techs, Node.js, Ruby, C# and ASP.NET MVC, and static sites. For example:

  • Running the Ruby Middleman Static Site Generator on Microsoft Azure runs in the cloud when I check code into GitHub but deploys a static site.
  • The Hanselminutes Podcast uses WebMatrix and ASP.NET WebPage's "SQL Compact Edition." This database runs out of a single file that's stored locally.
  • One of my node.js sites uses SQL Lite for its data.
  • One ASP.NET application uses "Azure MySQL in-app" that is also included in Azure App Service. You get a single modest MySQL database that runs in the context of your App Service. It's not super fast and meant for development, but with a little caching it's very workable.
  • One node.js app thinks it is talking MongoDB but actually it's talking via MongoDB protocol support in Azure DocumentDB. You can create an Azure noSQL DocumentDB and point any app that speaks Mongo to it and it Just Works.

There's a number of options, including Easy Tables for your Mobile Apps. Check out http://mobile.azure.com to learn more about how you can get a VERY quick and easy backend for mobile (or web) apps.

Azure App Service Extensions

If you have used Git deploy to an Azure App Service, you likely noticed a "Sidecar" website that your app has. I have babysmash.com which is actually babysmash.azurewebsites.net, right? There's also babysmash.scm.azurewebsites.net that you can't access. That sidecar site (when I'm authenticated) has a ton of easy REST GET APIs I can call to get my process list, files, deployments, and lots more. This is all powered by Kudu, which is open source by the way.

The Azure Kudu sidecar site

Kudu's sidecar site is a "site extension." You can not only write your own Azure Site Extension (they are just NuGet packages!) but it turns out there are a TON of useful already vetted and published extensions you can add to your site today. Those extensions live at http://www.siteextensions.net but you add them directly from the Azure Portal. There's 84 at the time of this blog post.

Azure Site Extensions include:

  • phpMyAdmin - for Admin of MySQL over the web
  • Azure Let's Encrypt - Easy install of Let's Encrypt SSL certs!
  • Image Optimizer - Automatic squishing of your site's JPGs and PNGs because you know you forgot!
  • GoLang Support - Azure doesn't officially support Go in Azure Web Apps...but with this extension it works fine!
  • Jekyll - Easy static site generation in Azure
  • Brotli HTTP Compression

You get the idea.

Diagnostics

I just discovered this "uptime" blade within my Web Apps in the Azure Portal. It tells me my app's uptime and, if it's not 100%, it tells me why not and when!

Azure Diagnostics and Uptime

Again, none of this stuff costs extra. You can add Site Extensions or explore your apps to the limit of the underlying App Service Plan. I'm doing all this on a single Standard 1 (S1) App Service Plan.


Sponsor: Excited about the future in ASP.NET? The folks at Progress held an awesome webinar which gives a 360⁰ view of the new ASP.NET Core and how it compares to WebForms and MVC. Watch it now on demand!


© 2016 Scott Hanselman. All rights reserved.
     

Penny Pinching in the Cloud: Running and Managing LOTS of Web Apps on a single Azure App Service

I’ve blogged before about “penny pinching in the cloud.” I’ll update that series for 2017 soon, but the underlying concepts still apply. Many of you are still using bigger virtual machines than you need when doing IaaS (Infrastructure as a Service), and when doing PaaS (Platform as a Service) many folks are running “one website per App Service Plan.” That’s super expensive.

Remember that you can fit as many web applications as memory and CPU will allow into an Azure App Service Plan. An “App Service Plan” in Azure is effectively the Virtual Machine under your Web Apps. You don’t need to think about it as it’s totally managed and hidden – but – if you choose to think about it you’ll be able to squeeze more out of it and you’ll pay less.

For example, I have 20 web applications running in a plan I named “DefaultServerFarm.” It’s a Small Standard Plan (S1) and I pay about $70 a month. Some folks use a Basic (B1) plan if they don’t need to scale out and that’s about $50 a month. Both B1 and S1 support “unlimited” web apps within them, to the limits of memory. That’s what allows me to run 20 modest (but real) sites on the one plan and that’s what makes it a good deal from a pricing perspective for me.
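
As an aside, you can sanity-check plan density from the command line, too. Here’s a hedged sketch with the Azure CLI – I believe numberOfSites is the property name on the plan resource, but verify against your CLI version:

az appservice plan list --query "[].{Plan:name, Sku:sku.name, Sites:numberOfSites}" -o table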

I logged in to the Azure Portal recently and noticed the CPU percentage on my plan was higher than usual and higher than I’d like.

Why is that web app using so much CPU?

That’s the CPU of the machine “under” my 20 sites. I can click here on my App Service Plan’s “blade” to see the underlying sites, or just click “Apps” in the blade menu.

Running 20 apps in a Single Azure App Service

However, when I’m looking at an app that lives within my plan, there are two super powerful menu items to check out. One is called “Metrics per instance (Apps)” and the other is “Metrics per instance (App Service).” Click the latter. For many of you it’s going to become your favorite area in the Azure Portal. It was a game changer for me, as it gave me the internal insight I needed to make sure I get maximum density in my plan (thereby saving the most money).

Metrics per Instance - App Service Plan

I click here and see “Sites in App Service Plan.”

20 sites in a single plan

I can see that over the last few days my CPU has been going up and up…

The CPU is going up and up over a few days

I can see it broken down by site:

A graph showing ALL 20 sites and their CPU

So now I can filter by site and I see that it’s ONE site that’s going nuts.

One site is using all the CPU

I can then dig in, go to the main CPU chart, and see exactly when it started:

The site is using 2.12 days of CPU

I can change the scale

It started on Feb 11th

I had a Web Job stuck in a loop. I restarted it and will be monitoring, but for now I’m in a much better place for this one app.

Now it's calming down

Now if I check the App Service Plan itself, I can see everything has calmed down.

Things have calmed down after the one rogue site was restarted

The point here is that even though it’s “Platform as a Service” and we want a layer of abstraction, at no point are things HIDDEN from us. If you want to see the hardware, you can. If you want to see the process tree, you can. A good reminder.


Sponsor: Excited about the future in ASP.NET? The folks at Progress held an awesome webinar which gives a 360⁰ view of the new ASP.NET Core and how it compares to WebForms and MVC. Watch it now on demand!


© 2016 Scott Hanselman. All rights reserved.

Connecting my Particle Photon Internet of Things device to the Azure IoT Hub

Particle Photon connected to the cloud

My vacation continues. Yesterday I had shoulder surgery (adhesive capsulitis release), so today I’m messing around with Azure IoT Hub. I had some devices on my desk – some of which I had never really gotten around to exploring – and I thought I’d see if I could accomplish something.

I’ve got a Particle Photon here, as well as a Tessel 2, a LattePanda, a Funduino, and an Onion Omega. A few days ago I was able to get the Onion Omega to show my blood sugar on a small OLED screen, which was cool. Tonight I’m going to try to hook the Particle Photon up to the Azure IoT Hub for monitoring.

The Photon is a tiny little device with Wi-Fi built in. It’s super easy to set up and it has a cloud-based IDE with tons of examples written in C and Node.js for you to use. The Particle Photon also has a Node.js-based command line. From there you can list out your Photons, see their available functions, and even call those functions over the internet! A hacker’s delight, to be sure.

Here’s a standard “blink an LED” Hello world on a Photon. This one creates a cloud function called “led” and binds it to the “ledToggle” method. Those cloud methods take a string, so there’s no enum for the on/off command.

int led1 = D0;
int led2 = D7;

void setup() {
    pinMode(led1, OUTPUT);
    pinMode(led2, OUTPUT);
    Spark.function("led", ledToggle);
    digitalWrite(led1, LOW);
    digitalWrite(led2, LOW);
}

void loop() {
}

int ledToggle(String command) {
    if (command == "on") {
        digitalWrite(led1, HIGH);
        digitalWrite(led2, HIGH);
        return 1;
    }
    else if (command == "off") {
        digitalWrite(led1, LOW);
        digitalWrite(led2, LOW);
        return 0;
    }
    else {
        return -1;
    }
}

From the command line I can use the Particle command line interface (CLI) to enumerate my devices:

C:\Users\scott>particle list
hansel_photon [390039000647xxxxxxxxxxx] (Photon) is online
Functions:
int led(String args)

See how it doesn’t just enumerate devices, but also cloud methods that hang off devices? LOVE THIS.

I can get a secret API key from the Particle Photon’s cloud-based Console. Then, using my Device ID and auth token, I can call the method…with an HTTP request! How much easier could this be?

C:\Users\scott\>curl https://api.particle.io/v1/devices/390039000647xxxxxxxxx/led -d access_token=31fa2e6f --insecure -d arg="on"
{
"id": "390039000647xxxxxxxxx",
"last_app": "",
"connected": true,
"return_value": 1
}

At this moment the LED on the Particle Photon turns on. I’m going to change the code a little and add some telemetry using the Particle’s online code editor.

Editing Particle Photon Code online

They’ve got a great online code editor, but I could also edit and compile the code locally:

C:\Users\scott\Desktop>particle compile photon webconnected.ino

Compiling code for photon

Including:
webconnected.ino
attempting to compile firmware
downloading binary from: /v1/binaries/5858b74667ddf87fb2a2df8f
saving to: photon_firmware_1482209089877.bin
Memory use:
text data bss dec hex filename
6156 12 1488 7656 1de8
Compile succeeded.
Saved firmware to: C:\Users\scott\Desktop\photon_firmware_1482209089877.bin

I’ll change the code to announce an “Event” when I turn on the LED.

if (command=="on") {
digitalWrite(led1,HIGH);
digitalWrite(led2,HIGH);

String data = "Amazing! Some Data would be here! The light is on.";
Particle.publish("ledBlinked", data);

return 1;
}

I can head back over to http://console.particle.io and see these events live on the web:

Particle Photons have great online charts

Particle also supports integration with Google Cloud and Azure IoT Hub. Azure IoT Hub allows you to manage billions of devices and all their many billions of events. I just have a few, but we all have to start somewhere. 😉

I created a free Azure IoT Hub in my Azure Account…

Azure IoT Hub has charts and graphs built in

And made a shared access policy for my Particle Devices.

Be sure to set all the Access Policy Permissions you need

Then I told Particle about Azure in their Integrations system.

Particle has Azure IoT Hub integration built in

The Azure IoT SDKs on GitHub at https://github.com/Azure/azure-iot-sdks/releases include both a Windows-based Azure IoT Explorer and a command-line one called IoT Hub Explorer.

I logged in to the IoT Hub Explorer using the connection string from the Azure Portal:

iothub-explorer login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=rdWUVMXs="

Then I’ll run “iothub-explorer monitor-events”, passing in the device ID and the connection string for the shared access policy. monitor-events is cool because it’ll hang and just output the events as they flow through the whole system.
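
The invocation looks something like this (a hedged sketch – “myPhotonDevice” is a hypothetical device ID, and the key is truncated just like above):

iothub-explorer monitor-events myPhotonDevice --login "HostName=HanselIoT.azure-devices.net;SharedAccessKeyName=particle-iot-hub;SharedAccessKey=rdWUVMXs="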

IoTHub-Explorer monitor-events command line

So I’m able to call methods on the Particle using their cloud, and monitor events from within Azure IoT Hub. I can explore diagnostics data and query huge amounts of device-to-cloud data that would potentially flow in from my hardware devices.

The IoT Hub Limits are very generous for free/hobbyist users as we learn to develop. I haven’t paid anything yet. However, it can scale to thousands of messages a second per unit! That means millions of messages a second if you need it.

I can definitely see how the value of an IoT Hub solution like this would add up quickly after you’ve got more than one device. Text files don’t really scale. Even if I just IoT’ed up my house, it would be nice to have all that data flowing into a single hub I could manage and query securely.


Sponsor: Big thanks to Telerik! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is the top tech to know. Check it out!


© 2016 Scott Hanselman. All rights reserved.

NoSQL .NET Core development using a local Azure DocumentDB Emulator

I was hanging out with Miguel de Icaza in New York a few weeks ago and he was sharing with me his ongoing love affair with a NoSQL database called Azure DocumentDB. I’ve looked at it a few times over the last year or so and thought it was cool, but I didn’t feel like using it for a few reasons:

  • Can’t develop locally – I’m often in low-bandwidth or airplane situations
  • No MongoDB support – I have existing apps written in Node that use Mongo
  • No .NET Core support – I’m doing mostly cross-platform .NET Core apps

Miguel told me to take a closer look. Looks like things have changed! DocumentDB now has:

  • Free local DocumentDB Emulator – I asked, and this is the SAME code that runs in Azure, with just changes like using the local file system for persistence. It’s an “emulator” but it’s essentially the same core engine code. There is no cost and no sign-in for the local DocumentDB emulator.
  • MongoDB protocol support – This is amazing. I literally took an existing Node app, downloaded MongoChef and copied my collection over into Azure using a standard MongoDB connection string, then pointed my app at DocumentDB and it just worked. It’s using DocumentDB for storage though, which gives me
    • Better Latency
    • Turnkey global geo-replication (like literally a few clicks)
    • A performance SLA with <10ms read and <15ms write (Service Level Agreement)
    • Metrics and Resource Management like every Azure Service
  • DocumentDB .NET Core Preview SDK that has feature parity with the .NET Framework SDK.

There are also Node, .NET, Python, Java, and C++ SDKs for DocumentDB, so it’s nice for gaming on Unity, Web Apps, or any .NET app…including Xamarin mobile apps on iOS and Android, which is why Miguel is so hyped about it.

Azure DocumentDB Local Quick Start

I wanted to see how quickly I could get started. I spoke with the PM for the project on Azure Friday, then downloaded and installed the local emulator. The lead on the project said it’s Windows-only for now, but they are looking at cross-platform solutions. After it was installed, it popped up my web browser with a local web page – I wish more development tools had such clean Quick Starts. There’s also a nice quick start on using DocumentDB with ASP.NET MVC.
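
Connecting from code looks just like connecting to the real service, only pointed at localhost. Here’s a minimal C# sketch assuming the emulator’s published defaults – the https://localhost:8081 endpoint and its fixed, well-known (non-secret) auth key; check the emulator’s Quick Start page for the current values:

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class Program
{
    static void Main()
    {
        // The emulator listens locally and uses the same well-known key for everyone.
        var client = new DocumentClient(
            new Uri("https://localhost:8081"),
            "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==");

        // First run only - these throw if the database/collection already exist.
        client.CreateDatabaseAsync(new Database { Id = "todos" }).Wait();
        client.CreateDocumentCollectionAsync(
            UriFactory.CreateDatabaseUri("todos"),
            new DocumentCollection { Id = "items" }).Wait();

        // Throw a document at it.
        client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("todos", "items"),
            new { id = "1", name = "Try the emulator", isComplete = false }).Wait();
    }
}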

NOTE: This is a 0.1.0 release. Definitely Alpha level. For example, the sample included looks like it had the package name changed at some point, so it didn’t line up. I had to change “Microsoft.Azure.Documents.Client”: “0.1.0” to “Microsoft.Azure.DocumentDB.Core”: “0.1.0-preview” – a little attention-to-detail issue there. I believe the intent is for stuff to Just Work. 😉

Nice DocumentDB Quick Start

The sample app is a pretty standard “ToDo” app:

ASP.NET MVC ToDo App using Azure Document DB local emulator

The local Emulator also includes a web-based local Data Explorer:

image

A Todo Item is really just a POCO (Plain Old CLR Object) like this:

namespace todo.Models
{
    using Newtonsoft.Json;
    public class Item
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }
        [JsonProperty(PropertyName = "name")]
        public string Name { get; set; }
        [JsonProperty(PropertyName = "description")]
        public string Description { get; set; }
        [JsonProperty(PropertyName = "isComplete")]
        public bool Completed { get; set; }
    }
}

The MVC Controller in the sample uses an underlying repository pattern so the code is super simple at that layer – as an example:

[ActionName("Index")]
public async Task<IActionResult> Index()
{
var items = await DocumentDBRepository<Item>.GetItemsAsync(d => !d.Completed);
return View(items);
}

[HttpPost]
[ActionName("Create")]
[ValidateAntiForgeryToken]
public async Task<ActionResult> CreateAsync([Bind("Id,Name,Description,Completed")] Item item)
{
if (ModelState.IsValid)
{
await DocumentDBRepository<Item>.CreateItemAsync(item);
return RedirectToAction("Index");
}

return View(item);
}

The Repository that’s abstracting away the complexities is itself not that complex. It’s like 120 lines of code, and really more like 60 when you remove whitespace and curly braces – and half of that is just initialization and setup. It’s also DocumentDBRepository<T>, so it’s a generic you can change to meet your tastes and use however you’d like.

The only thing that stands out to me in this sample is the loop in GetItemsAsync that’s hiding potential paging/chunking. It’s nice that you can pass in a predicate, but I’ll want to go in and put some paging logic in for large collections (a hedged sketch of that follows the repository code below).

public static async Task<T> GetItemAsync(string id)
{
    try
    {
        Document document = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
        return (T)(dynamic)document;
    }
    catch (DocumentClientException e)
    {
        if (e.StatusCode == System.Net.HttpStatusCode.NotFound){
            return null;
        }
        else {
            throw;
        }
    }
}
public static async Task<IEnumerable<T>> GetItemsAsync(Expression<Func<T, bool>> predicate)
{
    IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
        new FeedOptions { MaxItemCount = -1 })
        .Where(predicate)
        .AsDocumentQuery();
    List<T> results = new List<T>();
    while (query.HasMoreResults){
        results.AddRange(await query.ExecuteNextAsync<T>());
    }
    return results;
}
public static async Task<Document> CreateItemAsync(T item)
{
    return await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), item);
}
public static async Task<Document> UpdateItemAsync(string id, T item)
{
    return await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id), item);
}
public static async Task DeleteItemAsync(string id)
{
    await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
}
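
Here’s what a paged variant might look like – my sketch, not part of the sample. GetItemsPageAsync is a hypothetical name; it leans on FeedOptions.RequestContinuation and FeedResponse<T>.ResponseContinuation from the SDK:

public static async Task<FeedResponse<T>> GetItemsPageAsync(
    Expression<Func<T, bool>> predicate, int pageSize, string continuationToken = null)
{
    IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
        new FeedOptions { MaxItemCount = pageSize, RequestContinuation = continuationToken })
        .Where(predicate)
        .AsDocumentQuery();

    // One round-trip per call; the caller passes response.ResponseContinuation
    // back in to fetch the next page, and it comes back null when there are no more.
    return await query.ExecuteNextAsync<T>();
}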

I’m going to keep playing with this, but so far I’m pretty happy I can get this far while on an airplane. It’s really easy (given I’m preferring NoSQL over SQL lately) to just throw objects at it and store them.

In another post I’m going to look at RavenDB, another great NoSQL document database that works on .NET Core and is also open source.


Sponsor: Big thanks to Octopus Deploy! Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release


© 2016 Scott Hanselman. All rights reserved.

Publishing ASP.NET Core 1.1 applications to Azure using git deploy

I took an ASP.NET Core 1.0 application and updated its package references to ASP.NET Core 1.1 today. I also updated my SDK to the Current version (not the LTS – Long Term Support – version). Everything worked great building and running locally, but then I made a new Web App in Azure and did a quick git deploy.

Deploying ASP.NET Core 1.1 to Azure

Basically:

c:\aspnetcore11app> azure site create "aspnetcore11test" --location "West US" --git
c:\aspnetcore11app> git add .
c:\aspnetcore11app> git commit -m "initial"
c:\aspnetcore11app> git push azure master

Then I watched the logs go by. You can watch them at the command line or from within the Azure Portal under “Log Stream.”

The deployment failed when Azure was building the app and got this error:

Build started 11/25/2016 6:51:54 AM.
2016-11-25T06:51:55         1>Project "D:\home\site\repository\aspnet1.1.xproj" on node 1 (Publish target(s)).
2016-11-25T06:51:55         1>D:\home\site\repository\aspnet1.1.xproj(7,3): error MSB4019: The imported project "D:\Program Files (x86)\dotnet\sdk\1.0.0-preview3-004056\Extensions\Microsoft\VisualStudio\v14.0\DotNet\Microsoft.DotNet.Props" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
2016-11-25T06:51:55         1>Done Building Project "D:\home\site\repository\aspnet1.1.xproj" (Publish target(s)) -- FAILED.
2016-11-25T06:51:55    

See where it says “1.0.0-preview3-004056” in there? Looks like Azure Web Apps has some build that isn’t the one that I have. Theirs has the new csproj/msbuild stuff and I’m staying a few steps back waiting for things to bake.

Remember that unless you specify the SDK version, .NET Core will use whatever is the latest one on the box.

Well, what do I have locally?

C:\aspnetcore11app>dotnet --version
1.0.0-preview2-1-003177

OK, then that’s what I need to use, so I’ll make a global.json at the root of my project and move my code into a folder under “src”, for example.

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}

Here I’m “pinning” the same SDK version. This will tell Azure (or you, if I gave you the code) to use this SDK version when building.

Now I’ll redeploy with a git add, commit, and push. It works.

I think this should be easier, but I’m not sure how it should be easier. Does this mean that everyone should have a global.json with a preferred version? If you have no preferred version should Azure give a smarter error if it has an incompatible newer one?

The learning here for me is that not having a global.json basically says “*.*” to any cloud build server, and you’ll get whatever latest SDK it has. It could stop working any day.


Sponsor: Big thanks to Octopus Deploy! Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release!


© 2016 Scott Hanselman. All rights reserved.

Exploring Application Insights for disconnected or connected deep telemetry in ASP.NET Apps

Today on the ASP.NET Community Standup we learned how you can use Application Insights in a disconnected scenario to get some cool – ahem – insights into your application.

Typically when someone sees Application Insights in the File | New Project dialog they assume it’s a feature that only works in Azure, one that will create some account for you and send your data to the cloud. While App Insights does do a lot of cool stuff when you have a cloud-hosted app, and it does add a lot of value there, it also supports a very useful “SDK only” mode that’s totally offline.

Click “Add Application Insights to project” and then under “Send telemetry to” you click “Install SDK only” and no data gets sent to the cloud.

Application Insights dropdown - Install SDK only

Once you make your new project, you can learn more about AppInsights here.

For ASP.NET Core apps, Application Insights will include this package in your project.json – “Microsoft.ApplicationInsights.AspNetCore”: “1.0.0” and it’ll add itself into your middleware pipeline and register services in your Startup.cs. Remember, nothing is hidden in ASP.NET Core, so you can modify all this to your heart’s content.

if (env.IsDevelopment())
{
    // This pushes telemetry through the Application Insights pipeline faster,
    // allowing you to view results immediately while developing.
    builder.AddApplicationInsightsSettings(developerMode: true);
}

Request telemetry and Exception telemetry are added separately, as you like.
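
Here’s roughly what that wiring looks like in a 1.0.0-era Startup.cs – an abbreviated, hedged sketch, but these method names are from the Microsoft.ApplicationInsights.AspNetCore package of that vintage:

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry(Configuration); // register the telemetry services
    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    app.UseApplicationInsightsRequestTelemetry();   // request telemetry, early in the pipeline
    // ...your error handling middleware goes between the two...
    app.UseApplicationInsightsExceptionTelemetry(); // exception telemetry
    app.UseMvc();
}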

Make sure you show the Application Insights Toolbar by right-clicking your toolbars and ensuring it’s checked.

Application Insights Dropdown Menu

The button it adds is actually pretty useful.

Application Insights Dropdown Menu

Run your app and click the Application Insights button.

NOTE: I’m using Visual Studio Community. That’s the free version of VS you can get at http://visualstudio.com/free. I use it exclusively and I think it’s pretty cool that this feature works just great on VS Community.

You’ll see the Search window open up in VS. You can keep it running while you debug and it’ll fill with Requests, Traces, Exceptions, etc.

image

I added an exception-throwing case to /about, and here’s what I see:

Searching for the last hour’s traces

I can dig into each issue, filter, search, and explore deeper:

Unhandled Exception

And once I’ve found something interesting, I can explore around it with the full details of the HTTP Request. I find the “telemetry 5 minutes before and after” query to be very powerful.

Track Operations

Notice where it says “dependencies for this operation”? That’s not dependencies as in “Dependency Injection” – that’s larger, system-wide dependencies like “my app depends on this web service.”

You can custom-instrument your application with the TrackDependency API if you like, and that will cause your system’s dependencies to light up in AppInsights charts and reports. Here’s a dumb example, as I pretend that putting data in ViewData is a dependency. It should really be calling a Web API or a database or something.

var telemetry = new TelemetryClient();

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    ViewData["Message"] = "Your application description page.";
    success = true; // mark the pretend "call" successful so it isn't always reported as failed
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("ViewDataAsDependancy", "CallSomeStuff", startTime, timer.Elapsed, success);
}

Once I’m Tracking external Dependencies I can search for outliers, long durations, categorize them, and they’ll affect the generated charts and graphs if/when you do connect your App Insights to the cloud. Here’s what I see, after making this one code change. I could build this kind of stuff into all my external calls AND instrument the JavaScript as well. (note Client and Server in this chart.)

Application Insight Maps

And once it’s all there, I can query all the insights like this:

Querying Data Live

To be clear, though, you don’t have to host your app in the cloud. You can just send the telemetry to the cloud for analysis. Your existing on-premises IIS servers can run a “Status Monitor” app for instrumentation.

Application Insights Charts

There’s a TON of good data here and it’s REALLY easy to get started:

  • Totally offline (no cloud) and just query within Visual Studio
  • Somewhat online – Host your app locally and send telemetry to the cloud
  • Totally online – Host your app and telemetry in the cloud

All in all, I am pretty impressed. There are SDKs for Java, Node, Docker, and ASP.NET – there’s a LOT here. I’m going to dig deeper.


Sponsor: Big thanks to Telerik! 60+ ASP.NET Core controls for every need. The most complete UI toolset for x-platform responsive web and cloud development. Try now 30 days for free!


© 2016 Scott Hanselman. All rights reserved.