Penny Pinching in the Cloud: Deploying Containers cheaply to Azure

I saw a tweet from a person on Twitter who wanted to know the easiest and cheapest way to get a Web Application that’s in a Docker Container up to Azure. There’s a few ways and it depends on your use case. Some apps aren’t web apps at all, of course, …

I saw a tweet from a person on Twitter who wanted to know the easiest and cheapest way to get a Web Application that's in a Docker Container up to Azure. There's a few ways and it depends on your use case.

Some apps aren't web apps at all, of course, and just start up in a stateless container, do some work, then exit. For a container like that, you'll want to use Azure Container Instances. I did a show and demo on this for Azure Friday.

Azure Container Instances

Using the latest Azure CLI  (command line interface - it works on any platform), I just do these commands to start up a container quickly. Billing is per-second. Shut it down and you stop paying. Get in, get out.

Tip: If you don't want to install anything, just go to https://shell.azure.com to get a bash shell and you can do these commands there, even on a Chromebook.

I'll make a "resource group" (just a label to hold stuff, so I can delete it en masse later). Then "az container create" with the image. Note that that's a public image from Docker Hub, but I can also use a private Container Registry or a private one in Azure. More on that in a second.

Anyway, make a group (or use an existing one), create a container, and then either hit the IP I get back or I can query for (or guess) the full name. It's usually [dns-name-label].[location].azurecontainer.io.

> az group create --name someContainers --location westus
Location    Name
----------  --------------
westus      someContainers

> az container create --resource-group someContainers --name fancypantscontainer --image microsoft/aci-helloworld --dns-name-label fancy-container-demo --ports 80
Name                 ResourceGroup    ProvisioningState    Image                     IP:ports          CPU/Memory       OsType    Location
-------------------  ---------------  -------------------  ------------------------  ----------------  ---------------  --------  ----------
fancypantscontainer  someContainers   Pending              microsoft/aci-helloworld  40.112.167.31:80  1.0 core/1.5 gb  Linux     westus

> az container show --resource-group someContainers --name fancypantscontainer --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
FQDN                                           ProvisioningState
---------------------------------------------  -------------------
fancy-container-demo.westus.azurecontainer.io  Succeeded

Boom, container in the cloud, visible externally (if I want) and per-second billing. Since I made and named a resource group, I can delete everything in that group (and stop billing) easily:

> az group delete -g someContainers 

This is cool because I can basically run Linux or Windows Containers in a "serverless" way. Meaning I don't have to think about VMs and I can get automatic, elastic scale if I like.

Azure Web Apps for Containers

ACI is great for spinning up lots of containers quickly and bringing them up and down, but I like my long-running web apps in Azure Web Apps for Containers. I run 19 Azure Web Apps today via things like Git/GitHub Deploy, publish from VS, or CI/CD from VSTS.

Azure Web Apps for Containers is the same idea, except I'm deploying containers directly. I can do a Single Container easily or use Docker Compose for multiple.
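
If you'd rather script that first container deployment than click through the portal, here's a rough sketch of the single-container path with the Azure CLI. The resource group, plan, and site names below are placeholders I made up; the image is the same public Docker Hub sample from the ACI demo above.

# All names here are made up - swap in your own.
az group create --name cheapContainers --location westus

# A Linux App Service Plan; B1 is the cheapest tier I talk about below.
az appservice plan create --name cheapPlan --resource-group cheapContainers --is-linux --sku B1

# Point a new Web App at a public Docker Hub image.
az webapp create --name fancy-container-web --resource-group cheapContainers --plan cheapPlan --deployment-container-image-name microsoft/aci-helloworld

For the multi-container case there's a Docker Compose flavor of the same create command, but single container is the simplest place to start.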

I wanted to show how easy it was to set this up so I did a video (cold, one take, no rehearsal, real accounts, real app) and put it on YouTube. It explains "How to Deploy Containers cheaply to Azure" in 21 minutes. It could have been shorter, but I also wanted to show how you can deploy from both Docker Hub (public) and your own private Azure Container Registry.

I did all the work from the command line using Docker commands where I just pushed to my internal registry!

> docker login hanselregistry.azurecr.io

> docker build -t hanselregistry.azurecr.io/podcast .
> docker push hanselregistry.azurecr.io/podcast
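
If you don't have that private registry yet, a couple of CLI commands will stand up an Azure Container Registry and hand you the credentials that docker login wants. This is a sketch from memory - the registry name matches the one in the commands above, but the resource group is just an example:

# Make a registry (Basic SKU is the cheapest) with the admin user turned on.
az acr create --resource-group someContainers --name hanselregistry --sku Basic --admin-enabled true

# Grab the username/password to feed to "docker login hanselregistry.azurecr.io"
az acr credential show --name hanselregistry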

Took minutes to get my podcast site running on Azure in Web Apps for Containers. And again - this is the penny pinching part - keep control of the App Service Plan (the VM underneath the App Service) and use the smallest one you can and pack the containers in tight.

Watch the video, and note when I get to the part where I create an "App Service Plan." Again, that's the VM under a Web App/App Service. I have 19 smallish websites inside a Small (sometimes a Medium, I can scale it whenever) App Service Plan. You should be able to fit 3-4 decent sites in small ones depending on memory and CPU characteristics of the site.

Click Pricing Plan and you'll get here:

Recommended Pricing tiers have many choices

Be sure to explore the Dev/Test tab on the left as well. When you're making a non-container-based App Service you'll see F1 and D1 for Free and Shared. Both are fine for small websites, demos, hosting your github projects, etc.

Free, Shared, or Basic Infrastructure

If you back up and select Docker as the "OS"...

Windows, Linux, or Docker

Again check out Dev/Test for less demanding workloads and note B1 - Basic.

B1 is $32.74

The first B1 is free for 30 days! Good to kick the tires. Then as of the timing of this post it's US$32.74 (Check pricing for different regions and currencies) but has nearly 2 gigs of RAM. I can run several containers in there.

Just watch your memory and CPU and pack them in. Again, more money means better perf, but the original ask here was how to save money.
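
Since the App Service Plan's size is just another setting, it's scriptable too - scale up when you need the headroom and back down when you don't. A quick sketch (the plan and group names are placeholders):

# Bump the plan up for a busy stretch...
az appservice plan update --name myPennyPlan --resource-group myPennyGroup --sku S1
# ...and back down to Basic when things quiet down.
az appservice plan update --name myPennyPlan --resource-group myPennyGroup --sku B1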

Low CPU and 40% memory

To sum up, ACI is great for per-second billing, spinning up n containers programmatically, and getting out fast, while App Service for Containers is an easy way to deploy your Dockerized apps. Hope this helps.


Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.



© 2018 Scott Hanselman. All rights reserved.
     

Securing an Azure App Service Website under SSL in minutes with Let’s Encrypt

Let’s Encrypt is a free, automated, and open Certificate Authority. That means you can get free SSL certs and change your sites from http:// to https://. What’s the catch? The SSL Certificates only last 90 days – not a year or years. They do this to en…

A screenshot that says “Your connection to this site is not secure.”

Let's Encrypt is a free, automated, and open Certificate Authority. That means you can get free SSL certs and change your sites from http:// to https://. What's the catch? The SSL Certificates only last 90 days - not a year or years. They do this to encourage automation. If you set this up, you'll want to have some scripts or background process to automatically renew and install the certificates.

I run nearly two dozen websites (some small, some significant) on Azure. Given that Chrome 68+ is going to call out non-HTTPS sites explicitly as "Not secure" in July, now's as good a time as any for us to get our sites - large and small - encrypted. I have some small static "brochure-ware" sites like http://babysmash.com that just aren't worth the money for a cert. Now it's free, so let's do it.

In some theoretical future, I hope that Azure and Clouds like it will have a single "encrypt it" button and handle the details for us, but as of the date of this blog post, there's some manual initial setup and some great work from the community.

There's a protocol for getting certificates called "ACME" - Automated Certificate Management Environment - and the EFF has a tool called Certbot that helps you request and deploy certs. There is a whole ecosystem around it, and if you are running Windows/IIS you can use a great simple ACME client called "Win-ACME." There are also UI's like Certify SSL Manager and PowerShell commands for ACME and systems like "GetSSL - Azure Automation," so you can feel free to roll your own script in an afternoon. Again, if you have a Windows VM and IIS, it's pretty straightforward and getting easier every day.

I'm currently using Simon J.K. Pedersen's lovely (and volunteer and unsupported, so be nice) Azure Let's Encrypt Web App Site Extension. I followed the instructions here but hit a few snags and a few things that aren't totally obvious. Many kudos and thanks to Simon for his hard work on this, as he's saving us all many hours of trouble!

Securing an Azure Web App with Let's Encrypt and the (unofficial) SJKP Let's Encrypt Site Extension

I'll go and secure BabySmash.com right now. Make a text file and keep track of these few things.

What's our checklist?

  • Azure Storage connection string - You'll need one for the extension to store state.
  • App Service Hosting Plan and App Service Resource Group Name - Ideally your "plan" (the VM your site runs on) and your site are in the same Resource Group (a resource group is just a name for a pile of stuff)
  • Service Principal Client/Application ID - This is like an account that the Site Extension will run as to do its job. It's an "on behalf of" delegate that will automate the changes to your site. You might see "client id" or "application id," they are the same thing.
  • Service Principal Client Secret - You'll make a new Key in your Service Principal. I called mine "login" but it doesn't matter, then some value like a generated password (also doesn't matter) and then hit Save. You'll then get a long hashed value - THAT is your Client Secret. Save it, you'll never see it again and you can't get it back.

Cool. Let's do it. Again, following along with the wiki, I'll make an App under Active Directory | App Registrations in the Azure Portal at https://portal.azure.com

Add a new App Registration in the Azure Portal

Make a new app...

Creating a new App Registration

Now grab the Application ID, aka Client ID and save that in your scratch space/notepad/sticky note/smart brain/don't lose it.

Copying the App Registration ClientID

Now click Settings, Keys, make a new one called "login" with a password and click Save. COPY THAT VALUE. You'll never see it again.

Adding a Key to the App Registration

Now, go to the Resource Group for your App Service and App Service Plan. Ideally it'll be the same one, but if it's not, go to each one and keep track of the names. I went there with the search box at the top of the Azure Portal.

Going to the Resource Group

The Portal changes sometimes, and this next step didn't line up to the Wiki instructions exactly. Click add, then make your new App Registration from above a "Contributor" to your Resource Group.

Adding the App as a Contributor to the Resource Group
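
If you prefer the command line to clicking around the portal, the Azure CLI can do the App Registration and the Contributor assignment in one shot. This is a sketch, not the extension's official instructions - the name is made up, and you'd substitute your own subscription and resource group - but the output gives you the appId (Client ID), password (Client Secret), and tenant for your notepad:

# Creates the app registration + service principal and makes it a Contributor on the resource group.
az ad sp create-for-rbac --name lets-encrypt-helper --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<your-resource-group>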

Now head over to your actual App Service, and click Extensions.

App Service Extensions

I picked Azure Let's Encrypt to have this run as a Web Job in the background.

Adding the Let's Encrypt App Service Extension

Now, while you're at your Web App/Site, go to Settings and make sure you've set the following two Connection Strings: AzureWebJobsDashboard and AzureWebJobsStorage. Don't forget this step or it'll all work once but fail in 3 months during the renewal.

Both of these should be set to your Azure Storage Account connection string, e.g. DefaultEndpointsProtocol=https;AccountName=[myaccount];AccountKey=[mykey];

Remember the Web Job needs this storage so it can renew the certs every 3 months. Add them as "Custom."
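
You can set both connection strings from the CLI as well, if that's easier to script. A rough sketch - the site and group names are placeholders, and the value is the same storage connection string from above:

az webapp config connection-string set --resource-group myResourceGroup --name mySiteName \
  --connection-string-type Custom \
  --settings AzureWebJobsStorage="DefaultEndpointsProtocol=https;AccountName=[myaccount];AccountKey=[mykey];" \
             AzureWebJobsDashboard="DefaultEndpointsProtocol=https;AccountName=[myaccount];AccountKey=[mykey];"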

Connection Strings in App Settings

Next, the instructions say to "configure the Site Extension." That can be confusing until you realize a Site Extension is really a "Side car web site." It is its own little website, running off to the side of your site. It will be at http://YOURSITENAME.scm.azurewebsites.net/LetsEncrypt so mine is at http://babysmash.scm.azurewebsites.net/LetsEncrypt.

You'll then want to fill this form out. Your "Tenant ID" is your Azure Active Directory URL. You'll find your SubscriptionId in the "Overview" tab.

Configuring the Let's Encrypt Extension

Click Next, Next, and then hold down CTRL (as this is a multi-selection dialog) and pick the sites you want a certificate for. Note that www.yourdomain and yourdomain (the naked domain) are two different certs.

Requesting two SSL Certs

You'll want to confirm you see "Certificate successfully installed."

Certificate successfully installed.

Then head back over to the Azure Portal and turn on HTTPS Only if you'd like Azure itself (versus your code) to ensure and redirect all non-secure links to https://. Also confirm your SSL Bindings are correct. They should have been set up automatically.

HTTPS Only in the Azure Portal
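
That "HTTPS Only" toggle is scriptable too if you're doing a bunch of sites - something like this one-liner (placeholder names again; double-check the flag against your CLI version):

# Redirect all http:// traffic to https:// at the App Service level.
az webapp update --resource-group myResourceGroup --name mySiteName --https-only true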

Now I'll go hit https://babysmash.com and...

A screenshot that says "Your connection to this site is not secure."

It's not secure! Ah, now my site is in "mixed mode." That means that some of the resources like gifs or css were fetched with non-ssl (HTTP://) links. I'll update my site and all its external resources like YouTube embeds and fonts with https:// so that everything is secure. Since I'm using Git Deploy with Azure Web Apps (Azure App Service) I'll just make the changes and push the site again. You can also look at the elements as they load in F12 Browser Tools if you are having trouble finding out which image, css, or js file came in over http://

I'll redeploy and after a few tries, boom.

https://www.babysmash.com

And there's the cert. Note its expiration date. If the Site Extension does its job it will renew the cert before it expires!

A Let's Encrypt SSL Cert

Once I knew what I was doing, it took about 10 minutes per site. Thanks Simon for your work, and while there are multiple ways to do this, I found Simon's App Service Extension the easiest. I hope the Azure team comes up with a "One Click Solution" to this.

What do you think?


Sponsor: Learn how .NET in 2018 addresses the challenges developers are working on with future-focused technology. Get the new whitepaper on "The State of .NET in 2018" by the Progress Telerik team!



© 2018 Scott Hanselman. All rights reserved.
     

Setting up Application Insights took 10 minutes. It created two days of work for me.

I’ve been upgrading my podcast site from a 10 year old WebMatrix site to modern, open source ASP.NET Core with Razor Pages. It’s off IIS and now running cross-platform in Azure. I added Application Insights to the site in about 10 min just a few days a…

I've been upgrading my podcast site from a 10 year old WebMatrix site to modern, open source ASP.NET Core with Razor Pages. It's off IIS and now running cross-platform in Azure.

I added Application Insights to the site in about 10 min just a few days ago. It was super easy to set up and basically automatic in Visual Studio 2017 Community. I left the defaults, installed a bit of script on the client, and enabled the server side, and AppInsights already found a few interesting things.

It took 10 minutes to set up App Insights. It took two days (and work continues) to fix what it found. I love it. This tool has already given me a deeper insight into how my code runs and how it's behaving - and I'm just scratching the surface. I'll need to do some videos and/or more blog posts to dig deeper. Truly, you need to try it.

Slow performance in other countries

I could fill this blog post with dozens of awesome screenshots of the useful charts, graphs, and filters that I got by just turning on AppInsights. But the most interesting part is that I turned it on really expecting nothing. I figured I'd get some "Google Analytics"-type behavior.

Then I got this email:

Browser Time is slow in Bangladesh

Huh. I had set up the Azure CDN at images.hanselminutes.com to handle all the faces for each episode. I then added lazy loading so that the website only loads the images that enter the browser's viewport. I figured I was pretty much done.

However I didn't really think about the page itself as it loads for folks from around the world - given that it's hosted on Azure in the West US.

18.4 secs to load the page in Bangladesh

Ideally I'd want the site to load in less than a second, but this is my archives page with 600 shows so it's pretty heavy.

That's some long load times

Yuck. I have a few options. I could pay and load up another copy of the site in South Asia and then do some global load balancing. However, I'm hosting this on a single small App Service Plan (along with a dozen other sites) so I don't really want to pay much to fix this.

I ended up signing up for a free account at CloudFlare and set up caching for my HTML. The images stay the same, served by the Azure CDN.

Lots of requests from Cloudflare

Fixing Random and regular Server 500 errors

I left the site up for a while and came back later to a warning. You can see my site availability is just 93%. Note that there's "2 Servers?" That's because one is my local machine! Very cool that AppInsights also (optionally) tracks your local development server as well.

1 Alert!

When I dig in I see a VERY interesting sawtooth pattern.

Pro Tip - Recognizing that a Sawtooth Pattern is a Bad Thing (tm) is an important DevOps thing. Why is this happening regularly? Is it exactly regularly (like every 4 hours on a schedule?) or somewhat regularly (like a garbage collection issue?)

What do these operations have in common? Look closely.

scarygraph

It's not a GET it's a HEAD. Remember that HTTP Verbs are more than GET, POST, PUT, DELETE. There's also HEAD. It literally is a HEADer call. Like a GET, but no body.

HTTP HEAD - The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.

I installed HTTPie - which is like curl or wget for humans - and issued a HEAD command from my local machine while the site was running under the debugger.

C:\>http --verify=no HEAD https://localhost:5001

HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=utf-8
Date: Tue, 13 Mar 2018 03:41:51 GMT
Server: Kestrel

Ok that is bad. See the 500? I check out AppInsights and see it has the full call stack. See it's getting a NullReferenceException as it tries to Render() the Razor page?

Null Reference Exception

It turns out since I'm using Razor Pages, I have implemented "OnGet" where I do my database work then pass a model to the pages to generate HTML. However, if someone issues a HEAD, then the pages still run but the local data work never happened (I have no OnHead() call). I have a few options here. I could handle HEAD myself. I could no-op it, but that'd be a lie.

THOUGHT: I think this behavior is sub-optimal. While GET and POST are distinct and it makes sense to require an OnGet() and OnPost(), I think that HEAD is special. It's basically a GET with a "don't return the body" flag set. So why not have Razor Pages automatically delegate OnHead to OnGet, unless there's an explicit OnHead() declared? I'll file an issue on GitHub because I don't like this behavior and I find it counter-intuitive. I could also register a global IPageFilter to make this work for all my site's pages.

The simplest thing to do is just to delegate the OnHead to the OnGet handler.

public Task OnHeadAsync(int? id, string path) => OnGetAsync(id, path);

Then double check and test it with HTTPie:

C:\>http --verify=no HEAD https://localhost:5001

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Tue, 13 Mar 2018 03:53:55 GMT
Request-Context: appId=cid-v1:e310025f-88e9-4133-bc15-e775513c67ac
Server: Kestrel

Bonus - Application Map

Since I have AppInsights enabled on both the client and the server, I can see this cool live Application Map. I'll check again in a few days to see if I have fewer errors. You can see where my Podcast Site calls into the backend data service at Simplecast.

An application map that shows all the components, both client and server

I saw a few failures in my call to SimpleCast's API as I was failing to consistently set my API key. Everything in this map can be drilled down into.

Bonus - Web Performance Testing

I figured while I was in the Azure Portal I would also take advantage of the free performance testing. I did a simulated aggressive 250 users beating on the site. Average response time is 1.22 seconds and I was doing over 600 req/second.

38097 successful calls

I am learning a ton of stuff. I have more things to fix, more improvements to make, and more insights to dig into. I LOVE that it's creating all this work for me because it's giving me a better application/website!

You can get a free Azure account at http://azure.com/free or check out Azure for Startups https://azure.microsoft.com/overview/startups/ and get a bunch of free Azure time. AppInsights works with Node, Docker, Java, ASP.NET, ASP.NET Core, and other platforms. It even supports telemetry in Electron or Windows Apps.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Upgrading my podcast site to ASP.NET Core 2.1 in Azure plus some Best Practices

I am continuing to upgrade my podcast’s site. Today I upgraded it to .NET Core 2.1, keeping the work going from my upgrade from “Web Matrix WebPages” last week. I upgraded to actually running ASP.NET Core 2.1’s preview in Azure by following this b…

I am continuing to upgrade my podcast's site. Today I upgraded it to .NET Core 2.1, keeping the work going from my upgrade from "Web Matrix WebPages" last week. I upgraded to actually running ASP.NET Core 2.1's preview in Azure by following this blog post.

Pro Tip: Be aware, you can get up to 10x faster local builds with 2.1 but still keep your site's runtime at 2.0 to lower risk. So there's little reason to not download the .NET Core 2.1 Preview and test your build speeds.

At this point the podcast site is live in Azure at https://hanselminutes.com. Now that I've moved off of the (very old) site I've quickly set up some best practices in just a few hours. I should have taken the time to upgrade this site - and its "devops" - a long time ago.

Here's a few things I was able to get done just this evening while the boys did homework. Each of these tasks took between 5 and 15 min. So not a big investment, but they represented real value I'd been wanting to add to the site.

Git Deploy for Production

The podcast site's code now lives in GitHub and deployment to production is a git push to master.

Deploying from GitHub

A "deployment slot" for staging

Some people like to have the master branch be Production, then they make a branch called Staging for a secondary staging site. Since Azure App Services (WebSites) has "deployment slots" I choose to do it differently. I deploy to Production from GitHub, sure, but I prefer to push manually to staging rather than litter my commits (and clean them up or squash commits later - it's just my preference) with little stuff.

I hooked up Git Deployment but the git repo is in Azure and just for deploy. Then "git remote add azure ..." so when I want to deploy to staging it's:

git push staging

I use it for testing, so ya, it could have been test/dev, etc, but you get the idea. Plus the Deployment Slot/Staging Site is free as it's on the same Azure App Service Plan.

A more sophisticated - but just as easy - plan would be to push to staging, get it perfect then do a "hot swap" with a single button click.
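
That swap is scriptable as well. A quick sketch of the slot workflow with the Azure CLI - the site and group names here are placeholders, not my real ones:

# Make a "staging" slot on the same App Service Plan.
az webapp deployment slot create --resource-group myResourceGroup --name mysite --slot staging

# Later, when staging looks good, hot-swap it with production.
az webapp deployment slot swap --resource-group myResourceGroup --name mysite --slot staging --target-slot production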

Deployment Slots can have their own independent settings if you click "Slot Setting." Here I've set that this ASPNETCORE_ENVIRONMENT is "Staging" while the main one is "Production."

Staging Slots in Azure

The ASP.NET Core runtime picks up that environment variable and I can conditionally run code based on Environment. I run as "Development" on my local machine. For example:

if (env.IsDevelopment()){
    app.UseDeveloperExceptionPage();
}
else{
    app.UseExceptionHandler("/Error");
}

Don't let Google Index the Staging Site - No Robots

You should be careful to not let Google/Bing/DuckDuckGo index your staging site if it's public. Since I have an environment set on my slot, I can just add this Meta Robots element to the site's main layout. Note also that I use minified CSS when I'm not in Development.

<environment include="Development">
    <link rel="stylesheet" href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~~/css/site.css" />
</environment>
<environment exclude="Development">
    <link rel="stylesheet" href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~~/css/site.min.css" />
</environment>
<environment include="Staging">
    <meta name="robots" content="noindex, follow">
</environment>

Require SSL

Making the whole ASP.NET Core site use SSL has been on my list as well. I added my SSL Certs in the Azure Portal, then added RequireHttps in my Startup.cs pretty easily.

I could have also added it to the existing IISRewriteUrls.xml legacy file, but this was easier and faster.

var options = new RewriteOptions().AddRedirectToHttps();

Here's how I'd do it via the IIS Rewrite middleware, FYI:

<rule name="HTTP to HTTPS redirect" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
       <add input="{HTTPS}" pattern="off" ignoreCase="true" />
    </conditions>
    <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
redirectType="Permanent" />
</rule>

Application Insights for ASP.NET Core

Next post I'll talk about Application Insights. I was able to set it up both client- and server-side and get a TON of info in about 15 minutes.

Application Insights

How are you?


Sponsor: Unleash a faster Python! Supercharge your applications performance on future forward Intel platforms with The Intel Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python Now!


© 2017 Scott Hanselman. All rights reserved.
     

az webapp new – Azure CLI extension to create and deploy a .NET Core or nodejs site in one command

The Azure CLI 2.0 (Command line interface) is a clean little command line tool to query the Azure back-end APIs (which are JSON). It’s easy to install and cross-platform: Install on Windows Install on macOS Install on Linux or Windows Subsystem for …

The Azure CLI 2.0 (Command line interface) is a clean little command line tool to query the Azure back-end APIs (which are JSON). It's easy to install and cross-platform - there are installers for Windows, macOS, Linux, and the Windows Subsystem for Linux.

Once you've got it installed, run "az login" and get authenticated. Also note that the most important switch (IMHO) is --output:

usage: az [-h] [--output {json,tsv,table,jsonc}] [--verbose] [--debug]

You can get json (or the more condensed jsonc), tables (for humans), or tsv (tab-separated values) for your awks and seds.
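
For example, table output is nice for humans, while tsv plus a JMESPath --query is handy for piping into other tools:

az webapp list --output table
az webapp list --query "[].defaultHostName" --output tsv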

A nice first command after "az login" is "az configure" which will walk you through a bunch of questions interactively to set up defaults.

Then I can "az noun verb" like "az webapp list" or "az vm list" and see things like this:

128→ C:\Users\scott> az webapp list

Name                     Location          State    ResourceGroup               DefaultHostName
-----------------------  ----------------  -------  --------------------------  ------------------------------------------
Hanselminutes            North Central US  Running  Default-Web-NorthCentralUS  Hanselminutes.azurewebsites.net
HanselmanBandData        North Central US  Running  Default-Web-NorthCentralUS  hanselmanbanddata.azurewebsites.net
myEchoHub-WestEurope     West Europe       Running  Default-Web-WestEurope      myechohub-westeurope.azurewebsites.net
myEchoHub-SouthEastAsia  Southeast Asia    Stopped  Default-Web-SoutheastAsia   myechohub-southeastasia.azurewebsites.net

The Azure CLI supports extensions (plugins) that you can easily add, and the Azure CLI team is experimenting with a few ideas that they are implementing as extensions. "az webapp new" is one of them so I thought I'd take a look. All of this is open source and on GitHub at https://github.com/Azure/azure-cli and is discussed in the GitHub issues for azure-cli-extensions.

You can install the webapp extension with:

az extension add --name webapp

The new command "new" (I'm not sure about that name...maybe deploy? or createAndDeploy?) is basically:

az webapp new --name [app name] --location [optional Azure region name] --dryrun

Now, from a directory, I can make a little node/express app or a little .NET Core app (with "dotnet new razor" and "dotnet build") then it'll make a resource group, web app, and zip up the current folder and just deploy it. The idea being to "JUST DO IT."

128→ C:\Users\scott\desktop\somewebapp> az webapp new  --name somewebappforme

Resource group 'appsvc_rg_Windows_CentralUS' already exists.
App service plan 'appsvc_asp_Windows_CentralUS' already exists.
App 'somewebappforme' already exists
Updating app settings to enable build after deployment
Creating zip with contents of dir C:\Users\scott\desktop\somewebapp ...
Deploying and building contents to app.This operation can take some time to finish...
All done. {
  "location": "Central US",
  "name": "somewebappforme",
  "os": "Windows",
  "resourcegroup": "appsvc_rg_Windows_CentralUS ",
  "serverfarm": "appsvc_asp_Windows_CentralUS",
  "sku": "FREE",
  "src_path": "C:\\Users\\scott\\desktop\\somewebapp ",
  "version_detected": "2.0",
  "version_to_create": "dotnetcore|2.0"
}

I'd even like it to make up a name so I could maybe "az webapp up" or even just "az up." For now it'll make a Free site by default, so you can try it without worrying about paying. If you want to upgrade or change it, upgrade either with the az command or in the Azure portal. Also the site ends up at <name>.azurewebsites.net!

DO NOTE that these extensions are living things, so you can update after installing with

az extension update --name webapp

like I just did!

Again, it's super beta/alpha, but it's an interesting experiment. Go discuss on their GitHub issues.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Exploring the Azure IoT Arduino Cloud DevKit

Someone gave me an Azure IoT DevKit, and it was lovely timing as I’m continuing to learn about IoT. As you may know, I’ve done a number of Arduino and Raspberry Pi projects, and plugged them into various and sundry clouds, including AWS, Azure, as well…

Someone gave me an Azure IoT DevKit, and it was lovely timing as I'm continuing to learn about IoT. As you may know, I've done a number of Arduino and Raspberry Pi projects, and plugged them into various and sundry clouds, including AWS, Azure, as well as higher-level hobbyist systems like AdaFruit IO (which is super fun, BTW. Love them.)

The Azure IoT DevKit is brilliant for a number of reasons, but one of the coolest things is that you don't need a physical one...they have an online simulator! Which is very Inception. You can try out the simulator at https://aka.ms/iot-devkit-simulator. You can literally edit your .ino Arduino files in the browser, connect them to your Azure account, and then deploy them to a virtual DevKit (seen on the right). All the code and how-tos are on GitHub as well.

When you hit Deploy it'll create a Free Azure IoT Hub. Be aware that if you already have a free one you may want to delete it (as you can only have a certain number) or change the template as appropriate. When you're done playing, just delete the entire Resource Group and everything within it will go away.

The Azure IoT DevKit in the browser is amazing

Right off the bat you'll have the code to connect to Azure, get tweets from Twitter, and display them on the tiny screen! (Did I mention there's a tiny screen?) You can also "shake" the virtual IoT kit, and exercise the various sensors. It wouldn't be IoT if it didn't have sensors!

It's a tiny Arduino device with a screen!

This is just the simulator, but it's exactly like the real MXChip IoT DevKit. (Get one here) They are less than US$50 and include WiFi, Humidity & Temperature, Gyroscope & Accelerometer, Air Pressure, Magnetometer, Microphone, and IrDA, which is a ton for a small dev board. It's also got a tiny 128x64 OLED color screen! Finally, the board also can go into AP mode which lets you easily put it online in minutes.

I love these well-designed elegant little devices. It also shows up as an attached disk and it's easy to upgrade the firmware.

Temp and Humidity on the Azure IoT DevKit

You can then dev against the real device with free VS Code if you like. You'll need:

  • Node.js and Yarn: Runtime for the setup script and automated tasks.
  • Azure CLI 2.0 MSI - Cross-platform command-line experience for managing Azure resources. The MSI contains dependent Python and pip.
  • Visual Studio Code (VS Code): Lightweight code editor for DevKit development.
  • Visual Studio Code extension for Arduino: Extension that enables Arduino development in Visual Studio Code.
  • Arduino IDE: The extension for Arduino relies on this tool.
  • DevKit Board Package: Tool chains, libraries, and projects for the DevKit
  • ST-Link Utility: Essential tools and drivers.

But this Zip file sets it all up for you on Windows, and head over here for Homebrew/Mac instructions and more details.

I was very impressed with the Arduino extension for VS Code. No disrespect to the Arduino IDE but you'll likely outgrow it quickly. This free add-on to VS Code gives you IntelliSense and integrated Arduino debugging.

Once you have the basics done, you can graduate to the larger list of projects at https://microsoft.github.io/azure-iot-developer-kit/docs/projects/ that include lots of cool stuff to try out like a cloud based Translator, Door Monitor, and Air Traffic Control Simulator.

All in all, I was super impressed with the polish of it all. There's a LOT to learn, to be clear, but this was a very enjoyable weekend of play.


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2017 Scott Hanselman. All rights reserved.
     

Setting up a managed container cluster with AKS and Kubernetes in the Azure Cloud running .NET Core in minutes

After building a Raspberry Pi Kubernetes Cluster, I wanted to see how quickly I could get up to speed on Kubernetes in Azure. I installed the Azure CLI (Command Line Interface) in a few minutes – works on Windows, Mac or Linux. I also remembered tha…

After building a Raspberry Pi Kubernetes Cluster, I wanted to see how quickly I could get up to speed on Kubernetes in Azure.

  • I installed the Azure CLI (Command Line Interface) in a few minutes - works on Windows, Mac or Linux.
    • I also remembered that I don't really need to install anything locally. I could just use the Azure Cloud Shell directly from within VS Code. I'd get a bash shell, Azure CLI, and automatically logged in without doing anything manual.
    • Anyway, while needlessly installing the Azure CLI locally, I read up on the Azure Container Service (AKS) here. There's a walkthrough for creating an AKS Cluster here. You can actually run through the whole tutorial in the browser with an in-browser shell.
  • After logging in with "az login" I made a new resource group to hold everything with "az group create -l centralus -n aks-hanselman." It's in the centralus region and it's named aks-hanselman.
  • Then I created a managed container service like this:
    C:\Users\scott\Source>az aks create -g aks-hanselman -n hanselkube --generate-ssh-keys
    / Running ...
  • This runs for a few minutes while creating, then when it's done, I can get ahold of the credentials I need with
    C:\Users\scott\Source>az aks get-credentials --resource-group aks-hanselman --name hanselkube
    Merged "hanselkube" as current context in C:\Users\scott\.kube\config
  • I can install the Kubernetes CLI "kubectl" easily with "az aks install-cli"
    Then list out the nodes that are ready to go!
    C:\Users\scott\Source>kubectl get nodes
    NAME                       STATUS    ROLES     AGE       VERSION
    aks-nodepool1-13823488-0   Ready     agent     1m        v1.7.7
    aks-nodepool1-13823488-1   Ready     agent     1m        v1.7.7
    aks-nodepool1-13823488-2   Ready     agent     1m        v1.7.7

A year ago, Glenn Condron and I made a silly web app while recording a Microsoft Virtual Academy. We use it for demos and to show how even old (now over a year) containers can still be easily and reliably deployed. It's up at https://hub.docker.com/r/glennc/fancypants/.

I'll deploy it to my new Kubernetes Cluster up in Azure by making this yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fancypants
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fancypants
    spec:
      containers:
      - name: fancypants
        image: glennc/fancypants:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: fancypants
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: fancypants
I saved it as fancypants.yml, then ran kubectl create -f fancypants.yml.
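
A few follow-up kubectl commands are handy at this point. These aren't part of the walkthrough, just illustrative:

kubectl get pods                                   # watch the fancypants pod spin up
kubectl scale deployment fancypants --replicas=3   # bump the replica count from the yaml's 1
kubectl get service fancypants --watch             # wait for the LoadBalancer to get an EXTERNAL-IP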

I can run kubectl proxy and then hit http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/overview?namespace=default to look at the Kubernetes Dashboard, proxied locally, but all running in Azure.

image

When fancypants is created and deployed, then I can find out its external IP with:

C:\Users\scott\Sources>kubectl get service

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
fancypants   LoadBalancer   10.0.116.145   52.165.232.77   80:31040/TCP   7m
kubernetes   ClusterIP      10.0.0.1       <none>          443/TCP        18m

There's my IP, I hit it and boom, I've got fancypants in the managed cloud. I only have to pay for the VMs I'm using, and not for the VM that manages Kubernetes. That means the "kube-system" namespace is free; I pay for other namespaces like my "default" one.

image

Best part? When I'm done, I can just delete the resource group and take it all away. Per minute billing.

C:\Users\scott\Sources>az group delete -n aks-hanselman --yes

Super fun and just took about 30 min to install, read about, try it out, write this blog post, then delete. Try it yourself!


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.
     

Azure Cloud Shell – your own bash shell and container – right inside Visual Studio Code

Visual Studio Code has a HUGE extension library. There’s also almost two dozen very nice Azure specific extensions as well as extensions for Docker, etc. If you write an Azure extension yourself, you can depend on the Azure Account Extension to handle …

Visual Studio Code has a HUGE extension library. There's also almost two dozen very nice Azure specific extensions, as well as extensions for Docker, etc. If you write an Azure extension yourself, you can depend on the Azure Account Extension to handle the administrivia of the user logging into Azure and selecting their subscription. And of course, the Azure Account Extension is open source.

Here's the cool part - I think, since I just learned it. If you have the Azure Account Extension installed (again, you can install it directly or you can get it as a dependency), you also get the ability to open an Azure Cloud Shell directly inside VS Code. That means a little container spins up in the Cloud and you can get a real bash shell or a real PowerShell shell quickly. AND the Azure Cloud Shell is automatically logged in as you and already has a ton of tools pre-installed.

Here's how you do it.

VS Code Command Palette

It will pop up a message with a "copy & open" button. It'll launch a browser, then you enter a special code after logging into Azure to OAuth VS Code into your Azure account.

image

At this point, open a Cloud Shell with Shift-Ctrl-P and type "Bash" or "PowerShell"...it'll autocomplete so you can type a lot less, or set up a hotkey.

Your Cloud Shell will appear alongside your local terminals!

Note that there's a "clouddrive" folder mapped to your Azure Storage so you can keep stuff in there. Even though the Shell goes away after about 20 min of non-use, your stuff (scripts, whatever) is persisted.

image

There's a bunch of tools preinstalled you can use as well!

[email protected]:~$ node --version

v6.9.4
[email protected]:~$ dotnet --version
2.0.0
[email protected]:~$ git --version
git version 2.7.4
[email protected]:~$ python --version
Python 3.5.2
[email protected]:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

And finally, when you type "azure" or "az" for the various Azure CLI (Command Line Interface) tools, you'll find you're already authenticated/logged into Azure, so you can create VMs, list websites, manage Kubernetes clusters, all from within VS Code. I'm still exploring, but I'm enjoying what I'm seeing.
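
For example, right in that VS Code terminal you can do things like:

az account show --output table   # confirm which subscription the Cloud Shell logged you into
az webapp list --output table    # list your web apps
az aks list --output table       # list your Kubernetes (AKS) clusters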


Sponsor: Scale your Python for big data & big science with Intel® Distribution for Python. Near-native code speed. Use with NumPy, SciPy & scikit-learn. Get it Today



© 2017 Scott Hanselman. All rights reserved.
     

Exploring the preconfigured browser-based Linux Cloud Shell built into the Azure Portal

At BUILD a few weeks ago I did a demo of the Azure Cloud Shell, now in preview. It’s pretty fab and it’s built into the Azure Portal and lives in your browser. You don’t have to do anything, it’s just there whenever you need it. I’m trying to convince them to enable “Quake Mode” so it would pop-up when you click ~ but they never listen to me. 😉

Animated Gif of the Azure Cloud Shell

Click the >_ shell icon in the top toolbar at http://portal.azure.com. The very first time you launch the Azure Cloud Shell it will ask you where you want your $home directory files to be persisted. They will live in your own Storage Account. Don’t worry about cost, remember that Azure Storage is like pennies a gig, so assuming you’re storing script files, figure it’s thousandths of pennies – a non-issue.

Where do you want your account files persisted to?

It’s pretty genius how it works, actually. Since you can set up an Azure Storage Account as a regular File Share (sharing to Mac, Linux, or Windows) it will just make a file share and mount it. The data you save in the ~/clouddrive is persistent between sessions; the sessions themselves disappear if you don’t use them.

Now my Azure Cloud Shell Files are available anywhere

Today it’s got bash inside a real container. Here’s what lsb_release -a says:

[email protected]:~/clouddrive$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:        16.04
Codename:       xenial

Looks like Ubuntu xenial inside a container, all managed by an orchestrator within Azure Container Services. The shell is using xterm.js to make it all possible inside the browser. That means you can run vim, top, whatever makes you happy. Cloud shells include vim, emacs, npm, make, maven, pip, as well as docker, kubectl, sqlcmd, postgres, mysql, iPython, and even .NET Core’s command line SDK.

NOTE: Ctrl-v and Ctrl-c do not function as copy/paste on Windows machines [in the Portal using xterm.js], please use Ctrl-insert and Shift-insert to copy/paste. Right-click copy/paste options are also available, however this is subject to browser-specific clipboard access.

When you’re in there, of course the best part is that you can ssh into your Linux VMs. They say PowerShell is coming soon to the Cloud Shell so you’ll be able to remote Powershell in to Windows boxes, I assume.

The Cloud Shell has the Azure CLI (command line interface) built in and pre-configured and logged in. So I can hit the shell then (for example) get a list of my web apps, and restart one. Here I’m getting the names of my sites and their resource groups, then restarting my son’s hamster blog.

[email protected]:~/clouddrive$ az webapp list -o table
ResourceGroup               Location          State    DefaultHostName                             AppServicePlan     Name
--------------------------  ----------------  -------  ------------------------------------------  -----------------  ------------------------
Default-Web-WestUS          West US           Running  thisdeveloperslife.azurewebsites.net        DefaultServerFarm  thisdeveloperslife
Default-Web-WestUS          West US           Running  hanselmanlyncrelay.azurewebsites.net        DefaultServerFarm  hanselmanlyncrelay
Default-Web-WestUS          West US           Running  myhamsterblog.azurewebsites.net             DefaultServerFarm  myhamsterblog

[email protected]:~/clouddrive$ az webapp restart -n myhamsterblog -g "Default-Web-WestUS"

Pretty cool. I’m going to keep exploring, but I like the way the Azure Portal is going from a GUI and DevOps dashboard perspective, but it’s also nice to have a CLI preconfigured whenever I need it.


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now!


© 2017 Scott Hanselman. All rights reserved.
     


Penny Pinching in the Cloud: Lift and Shift vs App Services – When a VM in the Cloud isn’t what you want

I got an interesting question today. This is actually an extremely common one so I thought I’d take a bit to explore it. It’s worth noting that I don’t know the result of this blog post. That is, I don’t know if I’ll be right or not, and I’m not going to edit it. Let’s see how this goes!

The individual emailed and said they were new to Azure and said:

Question for you.  (and we may have made a mistake – some opinions and help needed)
A month or so ago, we setup a full up Win2016 server on Azure, with the idea that it would host a SQL server as well two IIS web sites

Long story short, they were mired in the setup of IIS on Win2k16, messing with ports, yada yada yada.

All they wanted was:

  • The ability to right-click publish from Visual Studio for two sites.
  • Management of a SQL Database from SQL Management Studio.

This is a classic “lift and shift” story. Someone has a VM locally or under their desk or in hosting, so they figure they’ll move it to the cloud. They LIFT the site as a Virtual Machine and SHIFT it to the cloud.

For many, this is a totally reasonable and logical thing to do. If you did this and things work for you, fab, and congrats. However, if, at this point, you’re finding the whole “Cloud” thing to be underwhelming, it’s likely because you’re not really using the cloud, you’ve just moved a VM into a giant host. You still have to feed and water the VM and deal with its incessant needs. This is likely NOT what you wanted to do. You just want your app running.

Making a VM to do Everything

If I go into Azure and make a new Virtual Machine (Linux or Windows) it’s important to remember that I’m now responsible for giving that VM a loving home and a place to poop. Just making sure you’re still reading.

NOTE: If you’re making a Windows VM and you already have a Windows license you can save like 40%, so be aware of that, but I’ll assume they didn’t have a license.

You can check out the Pricing Calculator if you like, but I’ll just go and actually setup the VM and see what the Azure Portal says. Note that it’s going to need to be beefy enough for two websites AND a SQL Server, per the requirements from before.

Pricing for VMs in Azure

For a SQL Server and two sites I might want the second or third choice here, which isn’t too bad given they have SSDs and lots of RAM. But again, you’re responsible for them. Not to mention you have ONE VM so your web server and SQL Server Database are living on that one machine. Anything fails and it’s over. You’re also possibly giving up perf as you’re sharing resources.

App Service Plans with Web Sites/Apps and SQL Azure Server

An “App Service Plan” on Azure is a fancy word for “A VM you don’t need to worry about.” You can host as many Web Apps, Mobile Apps/Backends, Logic Apps and stuff in one as you like, barring perf or memory issues. I have between 19 and 20 small websites in one Small App Service Plan. So, to be clear, you put n number of App Services as you’d like into one App Service Plan.

When you check out the pricing tier for an App Service Plan, be sure to View All and really explore and think about your options. Some include support for custom domains and SSL, others have 50 backups a day, or support BizTalk Services, etc. They start at Free, go to Shared, and then Basic, Standard, etc. Best part is that you can scale these up and down. If I go from a Small to a Medium App Service Plan, every App on the Plan gets better.

However, we don’t need a SQL Server, remember? This is going to be a plan that we’ll use to host those two websites. AND we can use the same App Service Plan for staging slots (dev/test/staging/production) if we like. So just get the plan that works for your sites, today. Unlike a VM, you can change it whenever.

App Service Plan pricing

SQL Server on Azure is similar. You make a SQL Server Database that is hosted on a SQL Server that supports the number of Database Throughput Units that I need. Again, because it’s the capital-C Cloud, I can change the size anytime. I can even script it and turn it up and down on the weekends. Whatever saves me money!

SQL Azure Pricing

I can scale the SQL Server from $5 a month to bajillions and everything in between.
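
Here’s a sketch of what that scripting might look like with the Azure CLI - the server and database names are made up, and the service objectives are just examples of a smaller and a bigger tier:

# Scale the database up for the workweek...
az sql db update --resource-group myResourceGroup --server myserver --name mydb --service-objective S1
# ...and back down to Basic for the weekend.
az sql db update --resource-group myResourceGroup --server myserver --name mydb --service-objective Basic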

What’s the difference here?

First, we started here:

  • VM in the Cloud: At the start we had “A VM in the Cloud.” I have total control over the Virtual Machine, which is good, but I have total control over the Virtual Machine, which is bad. I can scale up or out, but just as one Unit, unless I split things up into three VMs.

Now we’ve got:

  • IIS/Web Server in the Cloud: I don’t have to think about the underlying OS or keeping it patched. I can use Linux or Windows if I like, and I can run PHP, Ruby, Java, .NET, and on and on in an Azure App Service. I can put lots of sites in one Plan but the IIS publishing endpoint for Visual Studio is automatically configured. I can also use Git for deployment.
  • SQL Server in the Cloud: The SQL Server is managed, backed up, and independently scalable.

This is a slightly more “cloudy” way of doing things. It’s not microservices and independently scalable containers, but it does give you:

  • Independently scalable tiers (pricing and CPU and Memory and disk)
  • Lots of automatic benefits – backups, custom domains, ssl certs, git deploy, App Service Extensions, and on and on. Dozens of features.
  • Control over pricing that is scriptable. You could write scripts to really pinch pennies by scaling your units up and down based on time of day or month.

What are your thoughts on Lift and Shift to IaaS (Infrastructure as a Service) vs using PaaS (Platform as a Service)? What did I forget? (I’m sure lots!)


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     
