Porting a 15 year old .NET 1.1 Virtual CPU Tiny Operating System school project to .NET Core 2.0

The 2002 TinyOS in C# is now on .NET Core in 2017, running on Ubuntu

I’ve had a number of great guests on the podcast lately. One topic that has come up a number of times is the “toy project.” I’ve usually kept mine private – never putting them on GitHub – somewhat concerned that people would judge me and my code. However, hypocrite that I am (aren’t we all?), I have advocated that others put their “Garage Sale Code” online. So here’s some crappy code. 😉

The Preamble

While I’ve been working as an engineer for 25 years this year, I didn’t graduate from school with a 4-year degree until 2003 – I just needed to get it done, for myself. I was poking around recently and found my project from OIT’s CST352 “Operating Systems” class. One of the projects was to create a “Virtual CPU and OS.” This is kind of a thought exercise. It’s not really a parser/lexer – although there’s a bit of both – and it’s not a real OS. But it needs to be able to take in a made-up quasi-assembly-language instruction set and execute it on a virtual CPU while managing virtual memory of arbitrary size. Again, a thought exercise made real to confirm that the student understands the responsibilities of a CPU.

Here’s an example “application.” Confused yet? Here’s the original spec I was given in 2002, which includes the 36 instructions the “CPU” should understand. It has 10 general-purpose 32-bit registers, addressed as 1 through 10. Register 10 is the stack pointer. There are two single-bit flag registers – a sign flag and a zero flag.

Instructions are “opcode arg1 arg2” with constants prefixed with “$.”

11 r8     ;Print r8
6 r1 $10  ;Move 10 into r1
6 r2 $6   ;Move 6 into r2
6 r3 $25  ;Move 25 into r3
23 r1     ;Acquire lock in r1 (currently 10)
11 r3     ;Print r3 (currently 25)
24 r1     ;Release lock in r1 (currently 10)
25 r3     ;Sleep r3 (currently 25)
11 r3     ;Print r3 (currently 25)
27        ;Exit
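
To make that concrete, here’s a minimal sketch of the fetch/decode/execute idea such a virtual CPU needs – the names and structure here are illustrative, not the project’s actual code:

using System;

public class TinyCpuSketch
{
    // r1..r10 live in slots 1..10; register 10 doubles as the stack pointer.
    private readonly uint[] registers = new uint[11];

    public bool Running { get; private set; } = true;

    public void Execute(int opcode, int arg1, uint arg2)
    {
        switch (opcode)
        {
            case 6:  registers[arg1] = arg2; break;             // Move a constant into a register
            case 11: Console.WriteLine(registers[arg1]); break; // Print a register
            case 27: Running = false; break;                    // Exit the process
            default: throw new InvalidOperationException("Unknown opcode " + opcode);
        }
    }
}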

I wrote my homework assignment in 2002 in the idiomatic C# of the time on .NET 1.1. That means no Generics&lt;T&gt; – I had to make my own strongly typed collections. Since then, C# has gained dozens (if not a hundred) of language and syntax improvements. I didn’t use a unit testing framework, as TDD was just starting around 1999 during the XP (eXtreme Programming) days and NUnit was just getting started. It also uses “unsafe” to pin down memory in a few places. I’m sure there are WAY WAY WAY better and more sophisticated ways to do this today in idiomatic C# of 2017. Those are excuses; the real reasons are my own ignorance and ability, combined with some night-school laziness.

One of the more fun parts of this exercise was moving from physical memory (a byte array as I recall) to a full-on Memory Manager where each Process thought it could address a whole bunch of Virtual Memory while actual Physical Memory was arbitrarily sized. Then – as a joke – I would swap out memory pages as XML! 😉 Yes, to be clear, it was a joke and I still love it.
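
The core of that memory manager is page-table translation. Here’s a minimal sketch of the idea, assuming a single shared byte array as physical memory – again, illustrative rather than the original code:

public class MemoryManager
{
    private readonly byte[] physicalMemory;
    private readonly int pageSize;

    public MemoryManager(int physicalBytes, int pageSize)
    {
        physicalMemory = new byte[physicalBytes];
        this.pageSize = pageSize;
    }

    // pageTable[i] holds the physical frame backing virtual page i, so two
    // processes can each "see" address 0 while living in different frames.
    public byte Read(int[] pageTable, int virtualAddress)
    {
        int page = virtualAddress / pageSize;
        int offset = virtualAddress % pageSize;
        int frame = pageTable[page]; // a page fault (or swap-to-XML!) would happen here
        return physicalMemory[frame * pageSize + offset];
    }
}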

You can run an “app” by passing in the total physical memory along with the text file containing the program, but you can also run an arbitrary number of programs by passing in an arbitrary number of text files! The “TinyOS” will let each process think it has its own memory and will time-slice between the processes.

If you are more of a visual learner, perhaps you’d prefer this 20-slide PowerPoint on this Tiny CPU that I presented in Malaysia later that year. You dig those early 2000-era slides? I KNOW YOU DO.

Tiny OS Memory Slides

Updating a .NET 1.1 app to cross-platform .NET Core 2.0

Step 1 was to download the original code from my own blog. 😉 This is also Reason #4134 why you should have a blog.

I decided to use Visual Studio 2017 to upgrade it, and even worse I decided to use .NET Core 2.0 which is currently in Preview. I wanted to use .NET Core 2.0 not just because it’s cross-platform but also because it promises to have a pretty large API surface area and I want this to “just work.” The part about getting my old application running on Linux is going to be awesome, though.

Visual Studio then pops a scary dialog about upgrading files. NOTE that another totally valid way to do this (that I will end up doing later in this blog post) is to just make a new project and move the source files into it. Natch.

Visual Studio says it’s targeting .NET 2.0 Full Framework, but I ratchet it up to 4.6 to see what happens. It builds, but with a bunch of warnings about obsolete methods, the most interesting one being this:

Warning CS0618    
'ConfigurationSettings.AppSettings' is obsolete:
'This method is obsolete, it has been replaced by
System.Configuration!System.Configuration.ConfigurationManager.AppSettings'
C:\Users\scott\Downloads\TinyOSOLDOLD\OS Project\CPU.cs 72

That’s telling me that my .NET 1/2 API will work but has been replaced in .NET 4.x, but I’m more interested in .NET Core 2.0. I could make my EXE a LIB and target .NET Standard 2.0 or I could make a .NET Core 2.0 app and perhaps get a few more APIs. I didn’t do a formal analysis with the .NET Portability Analyzer but I will add that to the list of Things To Do. I may be able to make a library that works on an iPhone – a product that didn’t exist when I started this assignment. That would be Just Cool(tm).

I decided to just make a new empty .NET Core 2.0 app and copy the source .cs files into it. A few interesting things came up:

  • My app also used “unsafe” code (it pins memory down and accesses it directly – see the sketch after this list).
  • It has extensive inline documentation comments that I used to run through NDoc to make a CHM Help file. I’d like that doc to turn into HTML at some point.
  • It also has an appsettings.json file that needs to get copied to the output folder when it compiles.
  • While I could publish it to a self-contained .NET Core exe, for now I’m running it like this in my test batch files – example:
    • dotnet netcoreapp2.0/TinyOSCore.dll 512 scott13.txt
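
On that first bullet, here’s roughly what that kind of pinning looks like – a minimal sketch, not the project’s actual code. It’s also why the csproj below needs AllowUnsafeBlocks:

public static class PinningSketch
{
    public static unsafe void Poke()
    {
        byte[] physicalMemory = new byte[128];
        fixed (byte* p = physicalMemory) // pin the array so the GC can't move it
        {
            p[0] = 0xFF; // direct, unchecked access through the raw pointer
        }
    }
}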

Here’s the resulting csproj file.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

  <PropertyGroup>
    <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
  </PropertyGroup>

  <ItemGroup>
    <None Remove="appsettings.json" />
  </ItemGroup>

  <ItemGroup>
    <Content Include="appsettings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.0.0-preview2-final" />
    <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="2.0.0-preview2-final" />
  </ItemGroup>

</Project>

Other than the obsolete configuration warning and a few malformed XML comments, the app compiled and ran! You can actually “watch” the nightmare process here https://github.com/shanselman/TinyOS/commits/Core2Port in the form of GitHub commits. I also moved the docs from a 2002 Word Doc to Markdown so be sure to explore the fairly extensive spec https://github.com/shanselman/TinyOS.

The only significant change was loading the config. Configuration is very different in .NET Core 2.0 than in the Full Framework. It’s FAR more, ahem, configurable. I could have used “Options,” or I could have written my own config provider if it was important to keep the file format.
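
For the curious, here’s roughly what the “Options” route might have looked like – a sketch, where TinyOsOptions is a hypothetical settings class bound through the Options.ConfigurationExtensions package already referenced in the csproj above:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

// Hypothetical strongly typed settings class; property names match the JSON keys.
public class TinyOsOptions
{
    public uint PhysicalMemory { get; set; }
    public uint ProcessMemory { get; set; }
    public bool DumpRegisters { get; set; }
}

public class OptionsSketch
{
    public static void Main()
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json")
            .Build();

        var services = new ServiceCollection();
        services.Configure<TinyOsOptions>(config); // binds the flat keys to properties by name

        var opts = services.BuildServiceProvider()
                           .GetRequiredService<IOptions<TinyOsOptions>>().Value;
        System.Console.WriteLine(opts.PhysicalMemory);
    }
}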

This little TinyOS has a bunch of config options that come in from a .exe.config file in XML like this (truncated):

<configuration>
  <appSettings>
    <!--
    Must be a multiple of 4.
    This is the total Physical Memory in bytes that the CPU can address.
    This should not be confused with the amount of total or addressable memory
    that is passed in on the command line.
    -->
    <add key="PhysicalMemory" value="128" />
    <!--
    Must be a multiple of 4.
    This is the amount of memory in bytes each process is allocated.
    Therefore, if this is 256 and you want to load 4 processes into the OS,
    you'll need to pass a number > 1024 as the total amount of addressable memory
    on the command line.
    -->
    <add key="ProcessMemory" value="384" />
    <add key="DumpPhysicalMemory" value="true" />
    <add key="DumpInstruction" value="true" />
    <add key="DumpRegisters" value="true" />
    <add key="DumpProgram" value="true" />
    <add key="DumpContextSwitch" value="true" />
    <add key="PauseOnExit" value="false" />

I have a few choices. I could make a Configuration Provider and teach .NET Core to read this format (there’s an XML adapter, in fact) or make the code porting easier by moving these “name/value” pairs to a JSON file like this:

{
  "PhysicalMemory": "128",
  "ProcessMemory": "384",
  "DumpPhysicalMemory": "true",
  "DumpInstruction": "true",
  "DumpRegisters": "true",
  "DumpProgram": "true",
  "DumpContextSwitch": "true",
  "PauseOnExit": "false",
  "SharedMemoryRegionSize": "16",
  "NumOfSharedMemoryRegions": "4",
  "MemoryPageSize": "16",
  "StackSize": "16",
  "DataSize": "16"
}

This was just a few minutes of search and replace to change the XML to JSON. I could have also written a little app or shell script. By changing the config (rather than writing an adapter) I could then keep the code 99% the same.
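
If you did want the little app, a one-off converter is only a few lines. Here’s a sketch – “TinyOSCore.exe.config” is a hypothetical path, so point it at the real file:

using System.IO;
using System.Linq;
using System.Xml.Linq;

class ConfigConverter
{
    static void Main()
    {
        var doc = XDocument.Load("TinyOSCore.exe.config"); // hypothetical path
        var pairs = doc.Descendants("appSettings")
                       .Elements("add")
                       .Select(e => $"  \"{e.Attribute("key")?.Value}\": \"{e.Attribute("value")?.Value}\"");
        File.WriteAllText("appsettings.json", "{\n" + string.Join(",\n", pairs) + "\n}");
    }
}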

My code was doing things like this (all over…there was no DI container yet):

bytesOfPhysicalMemory = uint.Parse(ConfigurationSettings.AppSettings["PhysicalMemory"]);

And I’d like to avoid major refactoring – yet. I added this bit of .NET Core configuration at the top of the EntryPoint and saved away the resulting IConfigurationRoot:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json");
Configuration = builder.Build();

The resulting IConfiguration instance, stored as “Configuration,” acts like a dictionary of name/value pairs. So now I just do this in a dozen places and the app compiles again:

bytesOfPhysicalMemory = uint.Parse(Configuration["PhysicalMemory"]);

This brings up that feeling we all have when we look at old code – especially our own old code. I should have abstracted that away! Why didn’t I use an interface? Why so many statics? What was I thinking?

We can beat ourselves up, or we can feel good about ourselves and remember this: the app worked. It still works. There is value in it. I learned a lot. I’m a better programmer now. I don’t know how far I’ll take this old code, but I had a lovely afternoon porting it to .NET Core 2.0 and I may refactor the heck out of it or I may not.

TinyOS on Ubuntu

For now I did update the smoke tests to run on both Windows and Linux and I’m happy with the experiment.

Have YOU done a project like this, either in school or on your own?


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     


Xamarin .NET Workbooks – Interactive Computing is a stellar learning tool

I’ve been thinking a lot about how best to teach .NET and C#/F# to folks who are new to the space. We’ve added an in-browser, no-install C# tutorial at http://dot.net. You can run through a few days’ worth of C# lessons without installing anything. Heck, it’s useful even if you just want to brush up on your skills.

When I spoke with Safia Abdalla a few months ago she re-introduced me to the ideas behind Interactive Computing and the whole ecosystem around Jupyter Notebooks, and the Nteract project Safia works on. It’s pretty amazing.

Pythonistas are familiar with Jupyter and the idea of a notebook that cleanly mixes prose and code. This ecosystem is very friendly to data scientists that are (perhaps) more scientist and less developer. People for whom an IDE is not as interesting as “electric paper.”

In fact, many people don’t realize that the Microsoft Azure Cloud supports hosting of Jupyter Notebooks using Python, R, and F#.

Azure Notebooks

Notebooks are a great learning resource that goes beyond a REPL (a simple interactive console) in that they are effectively textbooks with islands of interactive code. It’s even more powerful when you consider graphics, charts, and other interactive models.

Xamarin has a thing called Xamarin Workbooks (I’m calling them .NET Workbooks in my head) that you should download and check out RIGHT NOW. Go get Xamarin Workbooks & Inspector for Windows (or download for Mac). Start playing around with workbooks or try out the samples.

I’m going to try teaching my C# and .NET courses for at least the first day or two using Xamarin .NET Workbooks. I think they have huge potential and I’m thrilled that Miguel and friends are investing so much in them. The potential for these as a learning tool that sits between a REPL and an IDE is huge.

The page at https://developer.xamarin.com/workbooks/ is FILLED with amazing example workbooks and lessons, and it’s growing. It has sections not only on C# but also on Android, games, graphics as a concept, iOS, WPF, and so much more.

I run it and start here:

Xamarin Workbooks

Then I start typing…prose first! Just real sentences. Then I add some code. Notice that I’m not doing Console.WriteLine, I’m just assigning a variable. Xamarin Workbooks makes a nice visualization of my variable.

var scott = "Hanselman"

The prose is ignored (by the compiler) but the code cells build upon each other, and when you execute one you’re executing everything up to that point. Great for building up concepts.
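
Here’s roughly what that looks like as cells – a hypothetical two-cell workbook where each cell sees the state earlier cells left behind:

var scott = "Hanselman"          // cell 1: declare a variable; Workbooks renders its value

var greeting = $"Hello, {scott}" // cell 2: builds on the state cell 1 left behind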

You can pull in other libraries and build upon them, like in this chart example using the Urho library.

Charts in Xamarin Workbooks

Not to put too fine a point on it, but you can write really full-featured examples or games in Xamarin Workbooks. Here’s a fully realized 3D planet Earth WITH SATELLITES. Again, with not just sample code but explanatory prose. It’s a textbook come to life.

THIS is how I wish I’d learned programming 25 years ago. I’d have loved to turn in (or demo) a .workbook file. I’m thrilled to see C# folks be able to do the simple things that Jupyter users have enjoyed for so long.

3D Earth in Xamarin Workbooks

What do you think? Would this be a good way to deliver a course on learning .NET and C#?


Sponsor: Big thanks to Progress! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is still a viable language. Check it out!


© 2016 Scott Hanselman. All rights reserved.
     


Microsoft Virtual Academy – Introduction to ASP.NET 5

In 2013, Jon Galloway, Damian Edwards, and I went up to film a LIVE 8-hour-long Microsoft Virtual Academy training session called “Building Web Apps with ASP.NET Jump Start.” We returned in 2014 with “Building Modern Web Apps with ASP.NET.” Both of these are free, and each is effectively a full day of content.

Just a few weeks back, we recorded “Introduction to ASP.NET 5.” This is 100-to-200-level beginner content that starts at the beginning. If you’re just getting started with ASP.NET 5 (currently in Beta), or perhaps you’ve been meaning to dig into the new stuff in ASP.NET 5 but haven’t gotten around to it, this is a good place to start.

Microsoft Virtual Academy - Introduction to ASP.NET 5

Introduction to ASP.NET 5

We cover

  • Introduction to ASP.NET 5
  • Introduction to Visual Studio
  • Introducing Model View Controller (MVC6)
  • Getting Started with Models, Views, and Controllers
  • Debugging Web Applications
  • Configuration Data
  • Publishing Your Application
  • Using Data with Entity Framework

In the final three segments, we work with Damian Edwards to dissect every line of code in the real-world cloud-deployed application that runs our weekly standup at http://live.asp.net.

  • Exploring live.asp.net
  • Managing Data on live.asp.net
  • Advanced Features in live.asp.net

It’s a full day of detailed video training with assessments after each video. You can seek around, of course, or download the videos for offline viewing. We are pretty happy with how it turned out.

We’ll be returning to Microsoft Virtual Academy over the next several months to record 300-400 level advanced content as well as a Cross-Platform specific show for Mac and Linux users who want to develop and deploy ASP.NET applications. I hope you enjoyed it, we all worked very hard on it.


Sponsor: Big thanks to my friends at Infragistics for sponsoring the feed this week! Responsive web design on any browser, any platform and any device with Infragistics jQuery/HTML5 Controls.  Get super-charged performance with the world’s fastest HTML5 Grid – Download for free now!


© 2015 Scott Hanselman. All rights reserved.
     


Proper benchmarking to diagnose and solve a .NET serialization bottleneck

From http://adrianotto.com/2010/08/dev-null-unlimited-scale/

Here are a few comments and disclaimers to start with. First, benchmarks are challenging. They are challenging to measure, but the real issue is that often we forget WHY we are benchmarking something. We’ll take a complex multi-machine financial system and suddenly we’re hyper-focused on a bunch of serialization code that we’re convinced is THE problem. “If I can fix this serialization by writing a 10,000-iteration for loop and getting it down to x milliseconds, it’ll be SMOOOOOOTH sailing.”

Second, this isn’t a benchmarking blog post. Don’t point at this blog post and say “see! Library X is better than library Y! And .NET is better than Java!” Instead, consider this a cautionary tale and a series of general guidelines. I’m just using this anecdote to illustrate these points.

  • Are you 100% sure what you’re measuring?
  • Have you run a profiler like the Visual Studio profiler or DotTrace?
  • Are you considering warm-up time? Throwing out outliers? Are your results statistically significant?
  • Are the libraries you’re using optimized for your use case? Are you sure what your use case is?

A bad benchmark

A reader sent me an email recently with concerns about serialization performance in .NET. They had read some very old blog posts from 2009 about perf that included charts and graphs, and did some tests of their own. They were seeing serialization times (of tens of thousands of items) over 700ms and sizes nearly 2 megs. The tests included serialization of their typical data structures in both C# and Java across a number of different serialization libraries and techniques. Techniques included their company’s custom serialization, .NET binary DataContract serialization, as well as JSON.NET. One serialization format was small (1.8M for a large structure) and one was fast (94ms), but there was no clear winner. This reader was at their wit’s end and had decided, more or less, that .NET must not be up for the task.

To me, this benchmark didn’t smell right. It wasn’t clear what was being measured. It wasn’t clear if it was being accurately measured, but more specifically, the overarching conclusion of “.NET is slow” wasn’t reasonable given the data.

Hm. So .NET can’t serialize a few tens of thousands of data items quickly? I know it can.

Related Links: Create benchmarks and results that have value and Responsible benchmarking by @Kellabyte

I am no expert, but I poked around at this code.

First: Are we measuring correctly?

The tests were using DateTime.UtcNow which isn’t advisable.

startTime = DateTime.UtcNow;
resultData = TestSerialization(foo);
endTime = DateTime.UtcNow;

Do not use DateTime.Now or DateTime.UtcNow for measuring anything where precision matters. DateTime doesn’t have enough precision and is said to be accurate only to 30ms.

DateTime represents a date and a time. It’s not a high-precision timer or Stopwatch.
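
A quick way to see the difference on your own machine: Stopwatch will tell you whether it’s backed by a high-resolution counter, and what that counter’s resolution is.

using System;
using System.Diagnostics;

class TimerResolution
{
    static void Main()
    {
        // Both members are part of the Stopwatch class itself.
        Console.WriteLine("High resolution: " + Stopwatch.IsHighResolution);
        Console.WriteLine("Ticks per second: " + Stopwatch.Frequency);
    }
}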

As Eric Lippert says:

In short, “what time is it?” and “how long did that take?” are completely different questions; don’t use a tool designed to answer one question to answer the other.

And as Raymond Chen says:

“Precision is not the same as accuracy. Accuracy is how close you are to the correct answer; precision is how much resolution you have for that answer.”

So: use a Stopwatch when you need a stopwatch. In fact, before I switched the sample to Stopwatch I was getting numbers in milliseconds like 90, 106, 103, 165, 94; after Stopwatch the results were 99, 94, 95, 95, 94. There’s much less jitter.

using System;
using System.Diagnostics;

Stopwatch sw = new Stopwatch();
sw.Start();

// ... the code being measured goes here ...

sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds); // elapsed time from the high-resolution counter

You might also want to pin your process to a single CPU core if you’re trying to get an accurate throughput measurement. While it shouldn’t matter and Stopwatch is using the Win32 QueryPerformanceCounter internally (the source for the .NET Stopwatch Class is here) there were some issues on old systems when you’d start on one proc and stop on another.

// One Core
var p = Process.GetCurrentProcess();
p.ProcessorAffinity = (IntPtr)1;

If you don’t use Stopwatch, look for a simple and well-regarded benchmarking library.

Second: Doing the math

In the code sample I was given, about 10 lines of code were the thing being measured, and 735 lines were the “harness” to collect and display the data from the benchmark. Perhaps you’ve seen things like this before? It’s fair to say that the benchmark can get lost in the harness.

Have a listen to my recent podcast with Matt Warren on “Performance as a Feature” and consider Matt’s performance blog and his recent book, “Writing High Performance .NET Code”. Matt is currently exploring creating a mini-benchmark harness on GitHub. Matt’s system is rather promising and would have a [Benchmark] attribute within a test.
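
For reference, that attribute-driven style is exactly how BenchmarkDotNet works today. Here’s a sketch of the shape, with a hypothetical Foo data structure standing in for whatever you’re serializing:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Newtonsoft.Json;

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SerializerBenchmarks
{
    private readonly Foo foo = new Foo { Id = 1, Name = "test" };

    [Benchmark]
    public string JsonNet()
    {
        return JsonConvert.SerializeObject(foo);
    }
}

public class Program
{
    // The runner handles warm-up, outliers, and the statistics for you.
    public static void Main()
    {
        BenchmarkRunner.Run<SerializerBenchmarks>();
    }
}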

Consider using an existing harness for small benchmarks. One is SimpleSpeedTester from Yan Cui. It makes nice tables and does a lot of the tedious work for you. Here’s a screenshot I stole borrowed from Yan’s blog.

Something a bit more advanced to explore is HdrHistogram, a library “designed for recording histograms of value measurements in latency and performance sensitive applications.” It’s also on GitHub and includes Java, C, and C# implementations.

PercentileHistogramExample

And seriously. Use a profiler.

Third: Have you run a profiler?

Use the Visual Studio Profiler, or get a trial of the Redgate ANTS Performance Profiler or the JetBrains dotTrace profiler.

Where is our application spending its time? Surprise! I think we’ve all seen people write complex benchmarks and poke at a black box rather than simply running a profiler.

Visual Studio Profiler

Aside: Are there newer, better-understood ways to solve this?

This is my opinion, but I think it’s a decent one, and there are numbers to back it up. Some of the .NET serialization code is pretty old, written in 2003 or 2005, and may not be taking advantage of new techniques or knowledge. Plus, it’s rather flexible “make it work for everyone” code, as opposed to very narrowly purposed code.

People have different serialization needs. You can’t serialize something as XML and expect it to be small and tight. You likely can’t serialize a structure as JSON and expect it to be as fast as a packed binary serializer.

Measure your code, consider your requirements, and step back and consider all options.

Fourth: Newer .NET Serializers to Consider

Now that I had a sense of what was happening and how to measure the timing, it was clear these serializers didn’t meet this reader’s goals. Some of them are old, as I mentioned, so what other newer, more sophisticated options exist?

There are two really nice specialized serializers to watch: Jil from Kevin Montrose, and protobuf-net from Marc Gravell. Both are extraordinary libraries, and protobuf-net’s breadth of target-framework scope and build system are a joy to behold. There are also other impressive serializers, including ServiceStack.NET’s support for not only JSON but also JSV and CSV.

Protobuf-net – protocol buffers for .NET

Protocol buffers are a data-structure format from Google, and protobuf-net is a high-performance .NET implementation of protocol buffers. Think of it like XML, but smaller and faster. It can also serialize across languages. From their site:

Protocol buffers have many advantages over XML for serializing structured data. Protocol buffers:

  • are simpler
  • are 3 to 10 times smaller
  • are 20 to 100 times faster
  • are less ambiguous
  • generate data access classes that are easier to use programmatically

It was easy to add. There’s lots of options and ways to decorate your data structures but in essence:

var r = ProtoBuf.Serializer.Deserialize<List<DataItem>>(memInStream);
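
For a fuller picture, here’s a minimal sketch of that decoration plus a round trip – DataItem is a stand-in for the reader’s real structure, not code from their test:

using System.Collections.Generic;
using System.IO;
using ProtoBuf;

[ProtoContract]
public class DataItem
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

public static class ProtoRoundTrip
{
    public static List<DataItem> Clone(List<DataItem> items)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, items); // pack into a tight binary format
            ms.Position = 0;
            return Serializer.Deserialize<List<DataItem>>(ms);
        }
    }
}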

The numbers I got with protobuf-net were exceptional and in this case packed the data tightly and quickly, taking just 49ms.

JIL – Json Serializer for .NET using Sigil

Jil is a JSON serializer that is less flexible than Json.NET and makes those small sacrifices in the name of raw speed. From their site:

Flexibility and “nice to have” features are explicitly discounted in the pursuit of speed.

It’s also worth pointing out that some serializers work over the whole string in memory, while others like Json.NET and DataContractSerializer work over a stream. That means you’ll want to consider the size of what you’re serializing when choosing a library.
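
To illustrate, here’s a sketch of stream-based serialization with Json.NET – the point being that a large object graph never has to exist as one giant string in memory:

using System.IO;
using Newtonsoft.Json;

public static class StreamingJsonSketch
{
    public static void SerializeToFile(object data, string path)
    {
        using (var file = File.CreateText(path))
        using (var writer = new JsonTextWriter(file))
        {
            new JsonSerializer().Serialize(writer, data); // writes as it goes
        }
    }
}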

Jil is impressive in a number of ways, but particularly in that it dynamically emits a custom serializer (much like the XmlSerializers of old).

Jil is trivial to use. It just worked. I plugged it in to this sample and it took my basic serialization times to 84ms.

result = Jil.JSON.Deserialize<Foo>(jsonData);

Conclusion: Here’s the thing about benchmarks. It depends.

What are you measuring? Why are you measuring it? Does the technique you’re using handle your use case? Are you serializing one large object or thousands of small ones?

James Newton-King made this excellent point to me:

“[There’s a] meta-problem around benchmarking. Micro-optimization and caring about performance when it doesn’t matter is something devs are guilty of. Documentation, developer productivity, and flexibility are more important than a 100th of a millisecond.”

In fact, James pointed out this old (but recently fixed) ASP.NET bug on Twitter. It’s a performance bug that is significant, but was totally overshadowed by the time spent on the network.

This bug backs up the idea that many developers care about performance where it doesn’t matter https://t.co/LH4WR1nit9

— James Newton-King (@JamesNK) February 13, 2015

Thanks to Marc Gravell and James Newton-King for their time helping with this post.

What are your benchmarking tips and tricks? Sound off in the comments!


© 2015 Scott Hanselman. All rights reserved.
     


The .NET CoreCLR is now open source, so I ran the GitHub repo through Azure Power BI

The hits keep on coming, Dear Reader. Just as we announced a few months back, .NET Core is open source. We said it would run on Windows, Mac, and Linux, but then the work of doing it has to actually happen. 😉

Go check out the .NET Framework Blog. Today the .NET team put the Core CLR up on GitHub. It’s open source and it’s under the MIT License. This includes the Core CLR source, the new RyuJIT, the .NET GC, native interop and everything you need to fork, clone, and build your own personal copy of the .NET Core CLR. What a cool day, and what an immense amount of work (both technical and legal) to make it happen. Years in the making, but still lots of work to do.

The GitHub repo has 2.6-ish MILLION lines of code. They say when it’s all said and done, .NET Core will be about 5 MILLION lines of open source code.

The .NET Blog did a nice pie chart, but honestly, I found it to be not enough. It basically was a big grey circle that said “other 2.2M.” 😉

I’d like a little more insight, but I don’t know if I have the compute power, or the patience, frankly, to analyze this code repository. Or do I?

I decided to import the repository into Microsoft Azure’s Power BI preview. Power BI (BI means “Business Intelligence”) is an amazing service that you can use (usually for FREE, depending on your data source) to pull in huge amounts of data and ask questions of that data. Watch for a great video on this at http://friday.azure.com this week or next.

I logged into http://powerbi.com (It’s US only for the preview, sorry) and clicked Get Data. I then selected GitHub as the source of my data and authorized Power BI to talk to GitHub on my behalf. Crazy, AMIRITE?

After a few minutes of data chewing, I’m officially adding “BI and Big Data Analyst” to my resume and you can’t stop me. 😉

What does Power BI tell me about the .NET Team’s “CoreCLR” GitHub repository?

Here’s what Power BI told me.

Let’s dig in. Looks like Stephen Toub has worked on a LOT of this code. He’s super brilliant and very nice, BTW.

Editing the query and looking at Dates and Times, it seems the .NET Team commits code at ALL hours. They are really feeling “committable” around 3 to 4 pm, but they’ll even put code in at 4 in the morning!

Here’s a more intense way to look at it.

One of the insanely cool things about Power BI is the ability to ask your data questions in plain English. Given that my SQL abilities have atrophied to “Select * from LittleBobbyTables” this is particularly useful to me.

I asked it “issues that are open sorted by date” and you’ll notice that not only did it work, but it showed me what I meant underneath my query.

What about issues closed by a certain person?

I’m running around in this tool just building charts and asking questions of the repo. It’s all in HTML5 but it’s just like Excel. It’s amazing.

Open issues from last year?

Average time to close an issue in hours?

It’s amazing to be running queries like this on something as significant as the now open-sourced .NET Core CLR. I didn’t need to be an employee to do it. I didn’t need special access; I just did it. I’m enjoying this new Microsoft, and very much digging Power BI. Next I’m going to put my Blood Sugar and Diabetes Data in Power BI and encourage others to do the same.

P.S. Check out the code for the Core CLR Hello World app. When was the last time you saw an ASCII Art Linux Penguin in Microsoft Source code? 😉



Getting ready for the future with the Microsoft .NET Portability Analyzer

.NET has been getting more and more portable. Not only is .NET Open Source going forward (read Announcing .NET 2015 – .NET as Open Source, .NET on Mac and Linux, and Visual Studio Community) but you of course know about the Xamarin tools, as well as, I hope, the .NET Micro Framework, and much more.

You can run your .NET code all over, and there’s a tool to make this even easier. While you’ll rarely get 100% portable code with any platform, you can get into the magic 90-95% with smart refactoring, then keep the platform-specific shims pluggable.
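
That “pluggable shims” idea is just an interface at the platform seam. Here’s a minimal sketch; the names are mine, not something the tooling prescribes.

// The portable 90-95% codes against an abstraction...
public interface IFileStorage
{
    string ReadAllText(string path);
}

// ...and each platform supplies its own few percent.
public class DesktopFileStorage : IFileStorage
{
    public string ReadAllText(string path) => System.IO.File.ReadAllText(path);
}

public class SettingsLoader
{
    private readonly IFileStorage _storage;
    public SettingsLoader(IFileStorage storage) { _storage = storage; }

    // Portable code path: no platform APIs in sight.
    public string Load(string path) => _storage.ReadAllText(path);
}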

The .NET Portability Analyzer is a free Visual Studio Add-in (or console app) that will give you a detailed report on how portable your code is. Then you can get a real sense of how far you can take your code, as well as how prepared you’ll be for the Core CLR and alternate platforms.

[Screenshot: .NET Portability Analyzer summary]

Take a look at this report on AutoFac, for example. You can see that the main assembly is in fantastic shape across most platforms. Understandably, the more platform-specific Configuration assembly fares worse, but even so there’s a complete list of which methods are available on which platforms, and a clear way forward.

[Screenshot: .NET Portability Report for AutoFac]

When you bump up against a missing or not-recommended API, you’ll get suggestions that point you in the right direction.

[Screenshot: suggested alternatives for a non-portable API]
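
The suggestions are typically of the “prefer the newer, portable API” variety. A classic example (mine, not lifted from an actual report): WebClient isn’t in the portable surface area, but HttpClient is.

using System.Net.Http;
using System.Threading.Tasks;

static class Downloader
{
    // Before: new System.Net.WebClient().DownloadString(url), desktop-only.
    // After: HttpClient, which is available across the portable targets.
    public static async Task<string> FetchAsync(string url)
    {
        using var http = new HttpClient();
        return await http.GetStringAsync(url);
    }
}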

You can analyze specific assemblies, or an entire project. Once installed, you’ll find the commands under the Analyze menu, and you can change options in the .NET Portability Analyzer options in the Tools | Options menu.

Even better, you can use this with the FREE Visual Studio Community that you can download at http://www.visualstudio.com/free.



