Using ASP.NET Core 2.1’s HttpClientFactory with Refit’s REST library


When I moved my podcast site over to ASP.NET Core 2.1 I also started using HttpClientFactory and wrote up my experience. It's a nice clean way to centralize both settings and policy for your HttpClients, especially if you're using a lot of them to talk to a lot of small services.

Last year I explored Refit, an automatic type-safe REST library for .NET Standard. It makes it super easy to just declare the shape of a client and its associated REST API with a C# interface:

public interface IGitHubApi
{
    [Get("/users/{user}")]
    Task<User> GetUser(string user);
}

and then ask for an HttpClient that speaks that API's shape, then call it. Fabulous.

var gitHubApi = RestService.For<IGitHubApi>("https://api.github.com");

var octocat = await gitHubApi.GetUser("octocat");

But! What does Refit look like and how does it work in an HttpClientFactory-enabled world? Refit has recently been updated with first class support for ASP.NET Core 2.1's HttpClientFactory with the Refit.HttpClientFactory package.
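If you want to play along at home, pulling in that package is a one-liner from the .NET Core CLI:

dotnet add package Refit.HttpClientFactory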

Since you'll want to centralize all your HttpClient configuration in your ConfigureServices method in Startup, Refit adds a nice extension method hanging off of Services.

You add a RefitClient of a type, then add whatever other IHttpClientBuilder methods you want afterwards:

services.AddRefitClient<IWebApi>()
    .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://api.example.com"));
    // Add additional IHttpClientBuilder chained methods as required here:
    // .AddHttpMessageHandler<MyHandler>()
    // .SetHandlerLifetime(TimeSpan.FromMinutes(2));

Of course, then you can just have your HttpClient automatically created and passed into the constructor. You'll see in this sample from their GitHub that you get an IWebApi (that is, whatever type you want, like my IGitHubApi) and can just go to town with a strongly typed interface over an HttpClient, autocomplete and all.

public class HomeController : Controller
{
    public HomeController(IWebApi webApi)
    {
        _webApi = webApi;
    }

    private readonly IWebApi _webApi;

    public async Task<IActionResult> Index(CancellationToken cancellationToken)
    {
        var thing = await _webApi.GetSomethingWeNeed(cancellationToken);

        return View(thing);
    }
}

Refit is easy to use, and even better with ASP.NET Core 2.1. Go get Refit and try it today!

* Strong image by Lucyb_22 used under Creative Commons from Flickr


Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.



© 2018 Scott Hanselman. All rights reserved.
     

EarTrumpet 2.0 makes Windows 10’s audio subsystem even better…and it’s free!


Last week I blogged about some new audio features in Windows 10 that make switching your inputs and outputs easier, but even better, allow you to set up specific devices for specific programs. That means I can have one mic and headphones for Audition, while another for browsing, and yet another set for Skype.

However, while doing my research and talking about this on Twitter, lots of people started recommending I check out "EarTrumpet" - it's an applet that lets you control the volume of classic and modern Windows Apps in one nice UI! Switching, volume, and more. Consider EarTrumpet a prosumer replacement for the little Volume icon down by the clock in Windows 10. You'll hide the default one and drag EarTrumpet over in its place and use it instead!

EarTrumpet

EarTrumpet is available for free in the Windows Store and works on all versions of Windows 10, even S! I have no affiliation with the team that built it and it's a free app, so you have literally nothing to lose by trying it out!

EarTrumpet is also open source and on GitHub. The team that built it is:

  • Rafael Rivera - a software forward/reverse engineer
  • David Golden - lead engineer on MetroTwit, the greatest WPF Twitter Client the world has never known.
  • Dave Amenta - ex-Microsoft, worked on shell and Start menu for Windows 8 and 10

It was originally built as a replacement for the Volume Control in Windows back in 2015, but EarTrumpet 2.0's recent release makes it easy to use the new audio capabilities in Windows 10's April 2018 Update.

Looks Good

It's easy to make a crappy Windows App. Heck, it's easy to make a crappy app. But EarTrumpet is NOT just an "applet" or an app. It's a perfect example of how a Windows 10 app - not made by Microsoft - can work and look seamless with the operating system. You'll think it's native - and it adds functionality that probably should be built into Windows!

It's got light/dark theme support (no one bothers to even test this, but EarTrumpet does) and a nice acrylic blur. It looks like it's built-in/in-box. There's a sample app up on Rafael's GitHub so you can make your apps look this sharp, and here's the actual BlurWindowExtensions that EarTrumpet uses.

Works Good

EarTrumpet 1.x works on Windows "RS3 and below," so that's 10.0.16299 and down. But 2.0 works on the latest Windows and is also written entirely in C#. Any remaining C++ code has been removed with no missing functionality.

EarTrumpet may SEEM like a simple app but there's a lot going on to be this polished AND work with any combination of audio hardware. As a podcaster and remote worker I have a LOT of audio devices, but I now have one-click control over it all.

Given how fast Windows 10 has been moving with Insiders Builds and all, it seems like there are a bunch of APIs with new functionality that lack docs. The EarTrumpet team has reverse engineered the parts they needed:

  • Modern Resource Technology (MRT) Resource Manager
  • Internal Audio Interface: IAudioPolicyConfigFactory
      • Gets them access to new APIs (GetPersistedDefaultAudioEndpoint / SetPersistedDefaultAudioEndpoint) in RS4 that let them 'redirect' apps to different playback devices. Same API used in modern sound settings.
      • Code here with no public API yet?
  • Internal Audio Interface: IPolicyConfig
      • Gets them access to the SetDefaultEndpoint API; lets them change the default playback device
      • Code here and no public API yet?
  • Acrylic Blur (win32)

From a development/devops perspective, I'm told EarTrumpet's team is able to push a beta flight through the Windows 10 Store in just over 30 minutes - no waiting for days to get beta test data. They use Bugsnag (which offers them a generous OSS license) to catch crashes and telemetry. So far they're getting over 3,000 new users a month as the word gets out, with nearly 100k users so far! Hopefully +1 as you give EarTrumpet a try yourself!


    Sponsor: Check out dotMemory Unit, a free unit testing framework for fighting all kinds of memory issues in your code. Extend your unit testing with the functionality of a memory profiler.



    © 2018 Scott Hanselman. All rights reserved.
         

    Real Browser Integration Testing with Selenium Standalone, Chrome, and ASP.NET Core 2.1


Buckle up kids, this is nuts and I'm probably doing it wrong. ;) And it's 2am and I wrote this fast. I'll come back tomorrow and fix the spelling.

    I want to have lots of tests to make sure my new podcast site is working well. As mentioned before, I've been updating the site to ASP.NET Core 2.1.

    Here's some posts if you want to catch up:

    I've been doing my testing with XUnit and I want to test in layers.

    Basic Unit Testing

    Simply create a Razor Page's Model in memory and call OnGet or WhateverMethod. At this point you are NOT calling Http, there is no WebServer.

public IndexModel pageModel;

public IndexPageTests()
{
    var testShowDb = new TestShowDatabase();
    pageModel = new IndexModel(testShowDb);
}

[Fact]
public async Task MainPageTest()
{
    // FAKE HTTP GET "/"
    IActionResult result = await pageModel.OnGetAsync(null, null);

    Assert.NotNull(result);
    Assert.True(pageModel.OnHomePage); //we are on the home page, because "/"
    Assert.Equal(16, pageModel.Shows.Count()); //home page has 16 shows showing
    Assert.Equal(620, pageModel.LastShow.ShowNumber); //last test show is #620
}

    Moving out a layer...

    In-Memory Testing with both Client and Server using WebApplicationFactory

    Here we are starting up the app and calling it with a client, but the "HTTP" of it all is happening in memory/in process. There are no open ports, there's no localhost:5000. We can still test HTTP semantics though.

public class TestingFunctionalTests : IClassFixture<WebApplicationFactory<Startup>>
{
    public HttpClient Client { get; }
    public WebApplicationFactory<Startup> Server { get; }

    public TestingFunctionalTests(WebApplicationFactory<Startup> server)
    {
        Client = server.CreateClient();
        Server = server;
    }

    [Fact]
    public async Task GetHomePage()
    {
        // Arrange & Act
        var response = await Client.GetAsync("/");

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
    ...
}

    Testing with a real Browser and real HTTP using Selenium Standalone and Chrome

    THIS is where it gets interesting with ASP.NET Core 2.1 as we are going to fire up both the complete web app, talking to the real back end (although it could talk to a local test DB if you want) as well as a real headless version of Chrome being managed by Selenium Standalone and talked to with the WebDriver. It sounds complex, but it's actually awesome and super useful.

    First I add references to Selenium.Support and Selenium.WebDriver to my Test project:

dotnet add package Selenium.Support
dotnet add package Selenium.WebDriver

    Make sure you have node and npm then you can get Selenium Standalone like this:

npm install -g selenium-standalone@latest
selenium-standalone install

Selenium, to be clear, puts your browser on a puppet's strings. Even Chrome knows it's being controlled! It's using the (soon to be standard, but clearly de facto standard) WebDriver protocol. Imagine if your browser had a localhost REST protocol where you could interrogate it and click stuff! I've been using Selenium for over 11 years. You can even test actual Windows apps (not in the browser) with WinAppDriver/Appium but that's for another post.
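To make that "localhost REST protocol" point concrete, here's a minimal standalone sketch - my own illustration, not code from the test project - of driving a browser through Selenium Standalone at its default endpoint, http://localhost:4444/wd/hub:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

class WebDriverSketch
{
    static void Main()
    {
        var opts = new ChromeOptions();
        opts.AddArgument("--headless");

        // RemoteWebDriver is really just an HTTP client speaking the
        // WebDriver protocol to Selenium Standalone's local endpoint.
        var hub = new Uri("http://localhost:4444/wd/hub");
        using (IWebDriver browser = new RemoteWebDriver(hub, opts.ToCapabilities()))
        {
            browser.Navigate().GoToUrl("https://localhost:5001");
            Console.WriteLine(browser.Title);
        }
    }
}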

Now for this part, bear with me, because the ServerFactory class I'm about to make is doing two things. It's setting up my ASP.NET Core 2.1 app and actually running it, so it's listening on https://localhost:5001. It's assuming a few things that I'll point out. It also (perhaps questionably) launches Selenium Standalone from within its constructor. Questionable, to be clear, and there are other ways to do this, but this is VERY simple.

If that offends you, remember that you do need to start Selenium Standalone with "selenium-standalone start," so you could do it OUTSIDE your test in a script.

Perhaps do the startup/teardown work in a PowerShell or shell script. Start it up, save the process id, then stop it when you're done. Note I'm also checking code coverage here with Coverlet, but that's not related to Selenium - I could just "dotnet test."

#!/usr/local/bin/powershell
$SeleniumProcess = Start-Process "selenium-standalone" -ArgumentList "start" -PassThru
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov .\hanselminutes.core.tests
Stop-Process -Id $SeleniumProcess.Id

    Here my SeleniumServerFactory is getting my Browser and Server ready.

SIDEBAR NOTE: I want to point out that this is NOT perfect and it's literally the simplest thing possible to get things working. It's my belief, though, that there are some problems here and that I shouldn't have to fake out the "new TestServer" in CreateServer there. While the new WebApplicationFactory is great for in-memory unit testing, it should be just as easy to fire up your app and use a real port for things like Selenium testing. Here I'm building and starting the IWebHostBuilder myself (!) and then making a fake TestServer only to satisfy the CreateServer method, which I think should not have a concrete class return type. For testing, ideally I could easily get either an "InMemoryWebApplicationFactory" or a "PortUsingWebApplicationFactory" (naming is hard). Hopefully this is somewhat clear and something that can be easily adjusted for ASP.NET Core 2.1.x.

My app is configured to listen on both http://localhost:5000 and https://localhost:5001, so you'll note where I'm getting that last value (in an attempt to avoid hard-coding it). We also make sure to stop both Server and Browser in Dispose() at the bottom.

public class SeleniumServerFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class
{
    public string RootUri { get; set; } //Saved for use by tests

    Process _process;
    IWebHost _host;

    public SeleniumServerFactory()
    {
        ClientOptions.BaseAddress = new Uri("https://localhost"); //will follow redirects by default

        _process = new Process() {
            StartInfo = new ProcessStartInfo {
                FileName = "selenium-standalone",
                Arguments = "start",
                UseShellExecute = true
            }
        };
        _process.Start();
    }

    protected override TestServer CreateServer(IWebHostBuilder builder)
    {
        //Real TCP port
        _host = builder.Build();
        _host.Start();
        RootUri = _host.ServerFeatures.Get<IServerAddressesFeature>().Addresses.LastOrDefault(); //Last is https://localhost:5001!

        //Fake Server we won't use...this is lame. Should be cleaner, or a utility class
        return new TestServer(new WebHostBuilder().UseStartup<TStartup>());
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        if (disposing) {
            _host.Dispose();
            _process.CloseMainWindow(); //Be sure to stop Selenium Standalone
        }
    }
}

But what does a complete series of tests look like? I have a Server, a Browser, and a (theoretically optional) HttpClient. Focus on the Browser and Server.

At the point when a single test starts, my site is up (the Server) and an invisible headless Chrome (the Browser) is actually being puppeted with local calls via WebDriver. All this is hidden from you - if you want. You can certainly see Chrome (or other browsers) get automated, but what's nice about Selenium Standalone with hidden/headless Browser testing is that my unit tests now also include these complete Integration Tests and can run as part of my Continuous Integration Build.

Again, layers. I test classes, then move out and test Http Request/Response interactions, and finally the site is up and I'm making sure I can navigate and that data is loading. I'm automating the "smoke tests" that I used to do myself! And I can make as many of these as I'd like now that the scaffolding work is done.

public class SeleniumTests : IClassFixture<SeleniumServerFactory<Startup>>, IDisposable
{
    public SeleniumServerFactory<Startup> Server { get; }
    public IWebDriver Browser { get; }
    public HttpClient Client { get; }
    public ILogs Logs { get; }

    public SeleniumTests(SeleniumServerFactory<Startup> server)
    {
        Server = server;
        Client = server.CreateClient(); //weird side effecty thing here. This call shouldn't be required for setup, but it is.

        var opts = new ChromeOptions();
        opts.AddArgument("--headless"); //Optional, comment this out if you want to SEE the browser window
        opts.SetLoggingPreference(OpenQA.Selenium.LogType.Browser, LogLevel.All);

        var driver = new RemoteWebDriver(opts);
        Browser = driver;
        Logs = new RemoteLogs(driver); //TODO: Still not bringing the logs over yet
    }

    [Fact]
    public void LoadTheMainPageAndCheckTitle()
    {
        Browser.Navigate().GoToUrl(Server.RootUri);
        Assert.StartsWith("Hanselminutes Technology Podcast - Fresh Air and Fresh Perspectives for Developers", Browser.Title);
    }

    [Fact]
    public void ThereIsAnH1()
    {
        Browser.Navigate().GoToUrl(Server.RootUri);

        var headerSelector = By.TagName("h1");
        Assert.Equal("HANSELMINUTES PODCAST\r\nby Scott Hanselman", Browser.FindElement(headerSelector).Text);
    }

    [Fact]
    public void KevinScottTestThenGoHome()
    {
        Browser.Navigate().GoToUrl(Server.RootUri + "/631/how-do-you-become-a-cto-with-microsofts-cto-kevin-scott");

        var headerSelector = By.TagName("h1");
        var link = Browser.FindElement(headerSelector);
        link.Click();
        Assert.Equal(Browser.Url.TrimEnd('/'), Server.RootUri); //WTF
    }

    public void Dispose()
    {
        Browser.Dispose();
    }
}

    Here's a build, unit test/selenium test with code coverage actually running. I started running it from PowerShell. The black window in the back is Selenium Standalone doing its thing (again, could be hidden).

    Two consoles, one with PowerShell running XUnit and one running Selenium

    If I comment out the "--headless" line, I'll see this as Chrome is automated. Cool.

    Chrome is loading my site and being automated

    Of course, I can also run these in the .NET Core Test Explorer in either Visual Studio Code, or Visual Studio.

    image

    Great fun. What are your thoughts?


    Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



    © 2018 Scott Hanselman. All rights reserved.
         

    Installing PowerShell Core on a Raspberry Pi (powered by .NET Core)


Earlier this week I set up .NET Core and Docker on a Raspberry Pi and found that I could run my podcast website quite easily on a Pi. Check that post out as there's a lot going on. I can test within a Linux Container and output the test results to the host and then open them in VS. I also explored a reasonably complex Dockerfile that is both multiarch and multistage. I can reliably build and test my website either inside a container or on the bare metal of Windows or Linux. Very fun.

As primarily a Windows developer I have lots of batch/cmd files like "test.bat" or "dockerbuild.bat." They start as little throwaway bits of automation, but as the project grows they inevitably get more complex.

    I'm not interested in "selling" anyone PowerShell. If you like bash, use bash, it's lovely, as are shell scripts. PowerShell is object-oriented in its pipeline, moving lists of real objects as standard output. They are different and most importantly, they can live together. Just like you might call Python scripts from bash, you can call PowerShell scripts from bash, or vice versa. Another tool in our toolkits.

    PS /home/pi> Get-Process | Where-Object WorkingSet -gt 10MB
    

    NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
    ------ ----- ----- ------ -- -- -----------
    0 0.00 10.92 890.87 917 917 docker-containe
    0 0.00 35.64 1,140.29 449 449 dockerd
    0 0.00 10.36 0.88 1272 037 light-locker
    0 0.00 20.46 608.04 1245 037 lxpanel
    0 0.00 69.06 32.30 3777 749 pwsh
    0 0.00 31.60 107.74 647 647 Xorg
    0 0.00 10.60 0.77 1279 037 zenity
    0 0.00 10.52 0.77 1280 037 zenity

    Bash and shell scripts are SUPER powerful. It's a whole world. But it is text based (or json for some newer things) so you're often thinking about text more.

pi@raspberrypi:~ $ ps aux | sort -rn -k 5,6 | head -n6
    root 449 0.5 3.8 956240 36500 ? Ssl May17 19:00 /usr/bin/dockerd -H fd://
    root 917 0.4 1.1 910492 11180 ? Ssl May17 14:51 docker-containerd --config /var/run/docker/containerd/containerd.toml
    root 647 0.0 3.4 155608 32360 tty7 Ssl+ May17 1:47 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    pi 1245 0.2 2.2 153132 20952 ? Sl May17 10:08 lxpanel --profile LXDE-pi
    pi 1272 0.0 1.1 145928 10612 ? Sl May17 0:00 light-locker
    pi 1279 0.0 1.1 145020 10856 ? Sl May17 0:00 zenity --warning --no-wrap --text

    You can take it as far as you like. For some it's intuitive power, for others, it's baroque.

pi@raspberrypi:~ $ ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }'
    0.00 Mb COMMAND
    161.14 Mb /usr/bin/dockerd -H fd://
    124.20 Mb docker-containerd --config /var/run/docker/containerd/containerd.toml
    78.23 Mb lxpanel --profile LXDE-pi
    66.31 Mb /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    61.66 Mb light-locker

    Point is, there's choice. Here's a nice article about PowerShell from the perspective of a Linux user. Can I install PowerShell on my Raspberry Pi (or any Linux machine) and use the same scripts in both places? YES.

    For many years PowerShell was a Windows-only thing that was part of the closed Windows ecosystem. In fact, here's video of me nearly 12 years ago (I was working in banking) talking to Jeffrey Snover about PowerShell. Today, PowerShell is open source up at https://github.com/PowerShell with lots of docs and scripts, also open source. PowerShell is supported on Windows, Mac, and a half-dozen Linuxes. Sound familiar? That's because it's powered (ahem) by open source cross platform .NET Core. You can get PowerShell Core 6.0 here on any platform.

    Don't want to install it? Start it up in Docker in seconds with

    docker run -it microsoft/powershell

Sweet. How about Raspbian on my ARMv7 based Raspberry Pi? I was running Raspbian Jessie and PowerShell is supported on Raspbian Stretch (newer), so I upgraded from Jessie to Stretch (and tidied up and did the firmware while I was at it) with:

    $ sudo apt-get update
    
    $ sudo apt-get upgrade
    $ sudo apt-get dist-upgrade
    $ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
    $ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list
    $ sudo apt-get update && sudo apt-get upgrade -y
    $ sudo apt-get dist-upgrade -y
    $ sudo rpi-update

    Cool. Now I'm on Raspbian Stretch on my Raspberry Pi 3. Let's install PowerShell! These are just the most basic Getting Started instructions. Check out GitHub for advanced and detailed info if you have issues with prerequisites or paths.

    NOTE: Here I'm getting PowerShell Core 6.0.2. Be sure to check the releases page for newer releases if you're reading this in the future. I've also used 6.1.0 (in preview) with success. The next 6.1 preview will upgrade to .NET Core 2.1. If you're just evaluating, get the latest preview as it'll have the most recent bug fixes.

    $ sudo apt-get install libunwind8
    
    $ wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.2/powershell-6.0.2-linux-arm32.tar.gz
    $ mkdir ~/powershell
    $ tar -xvf ./powershell-6.0.2-linux-arm32.tar.gz -C ~/powershell
    $ sudo ln -s ~/powershell/pwsh /usr/bin/pwsh
    $ sudo ln -s ~/powershell/pwsh /usr/local/bin/powershell
    $ powershell

    Lovely.

GOTCHA: Because I upgraded from Jessie to Stretch, I ran into a bug where libssl1.0.0 is getting loaded over libssl1.0.2. This is a complex native issue with the interaction between PowerShell and .NET Core 2.0 that's being fixed. Only upgraded machines like mine will hit it, and it's easily fixed with sudo apt-get remove libssl1.0.0

    Now this means my PowerShell build scripts can work on both Windows and Linux. This is a deeply trivial example (just one line) but note the "shebang" at the top that lets Linux know what a *.ps1 file is for. That means I can keep using bash/zsh/fish on Raspbian, but still "build.ps1" or "test.ps1" on any platform.

    #!/usr/local/bin/powershell
    
    dotnet watch --project .\hanselminutes.core.tests test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov

    Here's a few totally random but lovely PowerShell examples:

    PS /home/pi> Get-Date | Select-Object -Property * | ConvertTo-Json
    
    {
    "DisplayHint": 2,
    "DateTime": "Sunday, May 20, 2018 5:55:35 AM",
    "Date": "2018-05-20T00:00:00+00:00",
    "Day": 20,
    "DayOfWeek": 0,
    "DayOfYear": 140,
    "Hour": 5,
    "Kind": 2,
    "Millisecond": 502,
    "Minute": 55,
    "Month": 5,
    "Second": 35,
    "Ticks": 636623925355021162,
    "TimeOfDay": {
    "Ticks": 213355021162,
    "Days": 0,
    "Hours": 5,
    "Milliseconds": 502,
    "Minutes": 55,
    "Seconds": 35,
    "TotalDays": 0.24693868190046295,
    "TotalHours": 5.9265283656111105,
    "TotalMilliseconds": 21335502.1162,
    "TotalMinutes": 355.59170193666665,
    "TotalSeconds": 21335.502116199998
    },
    "Year": 2018
    }

    You can take PowerShell objects to and from Objects, Hashtables, JSON, etc.

PS /home/pi> $hash = @{ Number = 1; Shape = "Square"; Color = "Blue"}
PS /home/pi> $hash

Name Value
---- -----
Shape Square
Color Blue
Number 1

PS /home/pi> $hash | ConvertTo-Json
{
"Shape": "Square",
"Color": "Blue",
"Number": 1
}

    Here's a nice one from MCPMag:

PS /home/pi> $URI = "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='{0}, {1}')&format=json&env=store://datatables.org/alltableswithkeys" -f 'Omaha','NE'
    PS /home/pi> $Data = Invoke-RestMethod -Uri $URI
    PS /home/pi> $Data.query.results.channel.item.forecast|Format-Table

    code date day high low text
    ---- ---- --- ---- --- ----
    39 20 May 2018 Sun 62 56 Scattered Showers
    30 21 May 2018 Mon 78 53 Partly Cloudy
    30 22 May 2018 Tue 88 61 Partly Cloudy
    4 23 May 2018 Wed 89 67 Thunderstorms
    4 24 May 2018 Thu 91 68 Thunderstorms
    4 25 May 2018 Fri 92 69 Thunderstorms
    34 26 May 2018 Sat 89 68 Mostly Sunny
    34 27 May 2018 Sun 85 65 Mostly Sunny
    30 28 May 2018 Mon 85 63 Partly Cloudy
    47 29 May 2018 Tue 82 63 Scattered Thunderstorms

    Or a one-liner if you want to be obnoxious.

PS /home/pi> (Invoke-RestMethod -Uri "https://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where woeid in (select woeid from geo.places(1) where text='Omaha, NE')&format=json&env=store://datatables.org/alltableswithkeys").query.results.channel.item.forecast|Format-Table

Example: This won't work on Linux as it's using Windows-specific APIs, but if you've got PowerShell on your Windows machine, try out this one-liner for a cool demo:

    iex (New-Object Net.WebClient).DownloadString("http://bit.ly/e0Mw9w")

    Thoughts?


    Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



    © 2018 Scott Hanselman. All rights reserved.
         

    Using LazyCache for clean and simple .NET Core in-memory caching


I'm continuing to use .NET Core 2.1 to power my Podcast Site, and I've done a series of posts on some of the experiments I've been doing. I also upgraded to .NET Core 2.1 RC that came out this week. Here's some posts if you want to catch up:

    Having a blast, if I may say so.

I've been trying a number of ways to cache locally. I have an expensive call to a backend (7-8 seconds or more, without deserialization) so I want to cache it locally for a few hours until it expires. I have a way that works very well using a SemaphoreSlim. There are some issues to be aware of, but it has been rock solid. However, in the comments of the last caching post a number of people suggested I use "LazyCache."

    Alastair from the LazyCache team said this in the comments:

    LazyCache wraps your "build stuff I want to cache" func in a Lazy<> or an AsyncLazy<> before passing it into MemoryCache to ensure the delegate only gets executed once as you retrieve it from the cache. It also allows you to swap between sync and async for the same cached thing. It is just a very thin wrapper around MemoryCache to save you the hassle of doing the locking yourself. A netstandard 2 version is in pre-release.
    Since you asked the implementation is in CachingService.cs#L119 and proof it works is in CachingServiceTests.cs#L343

    Nice! Sounds like it's worth trying out. Most importantly, it'll allow me to "refactor via subtraction."
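To make the Lazy<> trick concrete, here's a minimal sketch of the idea - my own illustration, not LazyCache's actual code (their real implementation is in the CachingService.cs linked above). The point is to cache the promise, not the value, so concurrent callers share a single execution of the expensive delegate:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public static class LazyCachingSketch
{
    // Cache a Lazy<Task<T>> so the factory runs at most once per key;
    // racing callers all get the same Task and await the same work.
    public static Task<T> GetOrAddAsync<T>(
        this IMemoryCache cache, string key,
        Func<Task<T>> factory, DateTimeOffset absoluteExpiration)
    {
        var lazy = cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpiration = absoluteExpiration;
            return new Lazy<Task<T>>(factory);
        });
        return lazy.Value;
    }
}

Note that a real implementation (like LazyCache's) also has to evict an entry whose factory threw, so a transient failure isn't cached for eight hours.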

    I want to have my "GetShows()" method go off and call the backend "database" which is a REST API over HTTP living at SimpleCast.com. That backend call is expensive and doesn't change often. I publish new shows every Thursday, so ideally SimpleCast would have a standard WebHook and I'd cache the result forever until they called me back. For now I will just cache it for 8 hours - a long but mostly arbitrary number. Really want that WebHook as that's the correct model, IMHO.

LazyCache is registered in ConfigureServices in Startup.cs:

    services.AddLazyCache();
    

    Kind of anticlimactic. ;)

    Then I just make a method that knows how to populate my cache. That's just a "Func" that returns a Task of List of Shows as you can see below. Then I call IAppCache's "GetOrAddAsync" from LazyCache that either GETS the List of Shows out of the Cache OR it calls my Func, does the actual work, then returns the results. The results are cached for 8 hours. Compare this to my previous code and it's a lot cleaner.

    public class ShowDatabase : IShowDatabase
    {
        private readonly IAppCache _cache;
        private readonly ILogger _logger;
        private SimpleCastClient _client;
        public ShowDatabase(IAppCache appCache,
                ILogger<ShowDatabase> logger,
                SimpleCastClient client)
        {
            _client = client;
            _logger = logger;
            _cache = appCache;
        }
        public async Task<List<Show>> GetShows()
        {    
            Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
            var retVal = await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
            return retVal;
        }
     
        private async Task<List<Show>> PopulateShowsCache()
        {
        List<Show> shows = await _client.GetShows();
            _logger.LogInformation($"Loaded {shows.Count} shows");
            return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
        }
    }

    It's always important to point out there's a dozen or more ways to do this. I'm not selling a prescription here or The One True Way, but rather exploring the options and edges and examining the trade-offs.

    • As mentioned before, me using "shows" as a magic string for the key here makes no guarantees that another co-worker isn't also using "shows" as the key.
      • Solution? Depends. I could have a function-specific unique key but that only ensures this function is fast twice. If someone else is calling the backend themselves I'm losing the benefits of a centralized (albeit process-local - not distributed like Redis) cache.
    • I'm also caching the full list and then doing a where/filter every time.
      • A little sloppiness on my part, but also because I'm still feeling this area out. Do I want to cache the whole thing and then let the callers filter? Or do I want to have GetShows() and GetActiveShows()? Dunno yet. But worth pointing out.
    • There's layers to caching. Do I cache the HttpResponse but not the deserialization? Here I'm caching the List<Shows>, complete. I like caching List<T> because a caller can query it, although I'm sending back just active shows (see above).
  • Another perspective is to use the <cache> TagHelper in Razor and cache Razor's resulting rendered HTML (see the sketch after this list). There is value in caching the object graph, but I need to think about perhaps caching both List<T> AND the rendered HTML.
      • I'll explore this next.
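On that last Razor idea, here's a hedged sketch of what the built-in <cache> TagHelper could look like in a view - the 8 hour duration and the "_ShowsList" partial name are illustrative, not from my actual site:

<cache expires-after="@TimeSpan.FromHours(8)">
    <!-- Razor renders this fragment once, then serves the cached HTML until it expires -->
    <partial name="_ShowsList" model="Model.Shows" />
</cache>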

    I'm enjoying myself though. ;)

    Go explore LazyCache! I'm using beta2 but there's a whole number of releases going back years and it's quite stable so far.

    Lazy cache is a simple in-memory caching service. It has a developer friendly generics based API, and provides a thread safe cache implementation that guarantees to only execute your cachable delegates once (it's lazy!). Under the hood it leverages ObjectCache and Lazy to provide performance and reliability in heavy load scenarios.

    For ASP.NET Core it's quick to experiment with LazyCache and get it set up. Give it a try, and share your favorite caching techniques in the comments.

    Tai Chi photo by Luisen Rodrigo used under Creative Commons Attribution 2.0 Generic (CC BY 2.0), thanks!


    Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



    © 2018 Scott Hanselman. All rights reserved.
         

    Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly


Last week while upgrading my podcast site to ASP.NET Core 2.1 and .NET Core 2.1 I moved my HttpClient instances over to be created by the new HttpClientFactory. Now I have a single central place where my HttpClient objects are created and managed, and I can set policies as I like on each named client.

    It really can't be overstated how useful a resilience framework for .NET Core like Polly is.

    Take some code like this that calls a backend REST API:

public class SimpleCastClient
{
    private HttpClient _client;
    private ILogger<SimpleCastClient> _logger;
    private readonly string _apiKey;

    public SimpleCastClient(HttpClient client, ILogger<SimpleCastClient> logger, IConfiguration config)
    {
        _client = client;
        _client.BaseAddress = new Uri($"https://api.simplecast.com");
        _logger = logger;
        _apiKey = config["SimpleCastAPIKey"];
    }

    public async Task<List<Show>> GetShows()
    {
        var episodesUrl = new Uri($"/v1/podcasts/shownum/episodes.json?api_key={_apiKey}", UriKind.Relative);
        var res = await _client.GetAsync(episodesUrl);
        return await res.Content.ReadAsAsync<List<Show>>();
    }
}

    Now consider what it takes to add things like

    • Retry n times - maybe it's a network blip
    • Circuit-breaker - Try a few times but stop so you don't overload the system.
    • Timeout - Try, but give up after n seconds/minutes
    • Cache - You asked before!
      • I'm going to do a separate blog post on this because I wrote a WHOLE caching system and I may be able to "refactor via subtraction."

    If I want features like Retry and Timeout, I could end up littering my code. OR, I could put it in a base class and build a series of HttpClient utilities. However, I don't think I should have to do those things because while they are behaviors, they are really cross-cutting policies. I'd like a central way to manage HttpClient policy!

    Enter Polly. Polly is an OSS library with a lovely Microsoft.Extensions.Http.Polly package that you can use to combine the goodness of Polly with ASP.NET Core 2.1.

    As Dylan from the Polly Project says:

    HttpClientFactory in ASPNET Core 2.1 provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call.

    I just went into my Startup.cs and changed this

    services.AddHttpClient<SimpleCastClient>();
    

    to this (after adding "using Polly;" as a namespace)

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.RetryAsync(2));

    and now I've got Retries. Change it to this:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    ));

    And now I've got CircuitBreaker where it backs off for a minute if it's broken (hit a handled fault) twice!

    I like AddTransientHttpErrorPolicy because it automatically handles Http5xx's and Http408s as well as the occasional System.Net.Http.HttpRequestException. I can have as many named or typed HttpClients as I like and they can have all kinds of specific policies with VERY sophisticated behaviors. If those behaviors aren't actual Business Logic (tm) then why not get them out of your code?
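As one sketch of that sophistication (the delays here are arbitrary, not from my site), the same AddTransientHttpErrorPolicy call can take a wait-and-retry policy that backs off between attempts:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.WaitAndRetryAsync(new[]
    {
        TimeSpan.FromSeconds(1),  // first retry after a second
        TimeSpan.FromSeconds(5),  // then back off further
        TimeSpan.FromSeconds(10)
    }));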

Go read up on Polly at https://github.com/App-vNext/Polly and check out the extensive samples at https://github.com/App-vNext/Polly-Samples/tree/master/PollyTestClient/Samples.

    Even though it works great with ASP.NET Core 2.1 (best, IMHO) you can use Polly with .NET 4, .NET 4.5, or anything that's compliant with .NET Standard 1.1.

    Gotchas

    A few things to remember. If you are POSTing to an endpoint and applying retries, you want that operation to be idempotent.

    "From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result."

But everyone's API is different. What would happen if you applied a Polly Retry Policy to an HttpClient and it POSTed twice? Is that backend behavior compatible with your policies? Know what behavior you expect and plan for it. You may want to have a GET policy and a POST one, and use different HttpClients. Just be conscious.

Next, think about Timeouts. HttpClient has a Timeout which is the "all tries overall timeout," while a TimeoutPolicy inside a Retry is a "timeout per try." Again, be aware.
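Here's a hedged sketch of expressing "timeout per try" with the same package (the numbers are arbitrary). AddPolicyHandler accepts any Polly policy, and handlers added earlier wrap handlers added later, so this retry wraps the per-try timeout:

services.AddHttpClient<SimpleCastClient>()
    .AddTransientHttpErrorPolicy(p =>
        p.Or<TimeoutRejectedException>() // also retry when a try times out; needs "using Polly.Timeout;"
         .RetryAsync(2))
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10)));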

    Thanks to Dylan Reisenberger for his help on this post, along with Joel Hulen! Also read more about HttpClientFactory on Steve Gordon's blog and learn more about HttpClientFactory and Polly on the Polly project site.


    Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



    © 2018 Scott Hanselman. All rights reserved.
         

    How to setup Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows


This week in obscure blog titles, I bring you the nightmare that is setting up Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows. This is one of those "it's good for you" things like diet and exercise and setting up 2 Factor Authentication. I just want to be able to sign my code commits to GitHub so I might avoid people impersonating my Git Commits (happens more than you'd think and has happened recently.) However, I was also hoping to make it more secure by using a YubiKey NEO security key. They're happy to tell you that it supports a BUNCH of stuff that you have never heard of like Yubico OTP, OATH-TOTP, OATH-HOTP, FIDO U2F, OpenPGP, and Challenge-Response. I am most concerned with it acting like a Smart Card that holds a PGP (Pretty Good Privacy) key, since the YubiKey can look like a "PIV (Personal Identity Verification) Smart Card."

NOTE: I am not a security expert. Let me know if something here is wrong (be nice) and I'll update it. Note also that there are a LOT of guides out there. Some are complete and encyclopedic, some include recommendations and details that are "too much," but this one was my experience. This isn't The Bible On The Topic but rather what happened with me and what I ran into and how I got past it. Until this is Super Easy (TM) on Windows, there's gonna be guides like this.

    As with all things security, there is a balance between Capital-S Secure with offline air-gapped what-nots, and Ease Of Use with tools like Keybase. It depends on your tolerance, patience, technical ability, and if you trust any online services. I like Keybase and trust them so I'm starting there with a Private Key. You can feel free to get/generate your key from wherever makes you happy and secure.

    Welcome to Keybase.io

    I use Windows and I like it, so if you want to use a Mac or Linux this blog post likely isn't for you. I love and support you and your choice though. ;)

    Make sure you have a private PGP key that has your Git Commit Email Address associated with it

I downloaded and installed (and optionally donated to) a copy of Gpg4Win here.

Take your private key - either the one you got from Keybase or one you generated locally - and make sure that a uid (the email address that you use on GitHub) is a part of it. Here you can see mine is not, yet. That could be the main email or might be an alias or uid that you'll add.

    Certs in Kleopatra

If not - as in my case, since I'm using a key from Keybase - you'll need to add a new uid to your private key. Right now this key only has the Keybase uid:

    > gpg --list-secret-keys --keyid-format LONG
    

    ------------------------------------------------
    sec# rsa4096/MAINKEY 2015-02-09 [SCEA]

    uid [ultimate] keybase.io/shanselman <[email protected]>

You can adduid in the gpg command line or you can add it in the Kleopatra GUI.

    image

You will know you got it right when you run the command again and see your email address inside it:

    > gpg --list-secret-keys --keyid-format LONG
    

    ------------------------------------------------
    sec# rsa4096/MAINKEY 2015-02-09 [SCEA]
    uid [ultimate] keybase.io/shanselman <[email protected]>
    uid [ unknown] Scott Hanselman <[email protected]>

    Then, when you make changes like this, you can export your public key and update it in Keybase.io (again, if you're using Keybase).

    image

Plug in your YubiKey

I installed the YubiKey Smart Card mini-driver from here. Some people have said this driver is optional but I needed it on my main machine. Can anyone confirm?

When you plug your YubiKey in (assuming it's newer than 2015) it should get auto-detected and show up like this: "Yubikey NEO OTP+U2F+CCID." You want it to show up as this kind of "combo" or composite device. If it's older or not in this combo mode, you may need to download the YubiKey NEO Manager and switch modes.

    Setting up a YubiKey on Windows

    Test that your YubiKey can be seen as a Smart Card

Go to the command line and run this to confirm that your YubiKey can be seen as a smart card by the GPG command line.

    > gpg --card-status
    
    Reader ...........: Yubico Yubikey NEO OTP U2F CCID 0
    Version ..........: 2.0
    ....

IMPORTANT: Sometimes Windows machines and corporate laptops have multiple smart card readers, especially if they have Windows Hello installed, like my SurfaceBook2! If you hit this, you'll want to create a text file at %APPDATA%\gnupg\scdaemon.conf and include a reader-port that points to your YubiKey. Mine is a NEO, yours might be a 4, etc, so be aware. You may need to reboot or at least restart/kill the GPG services/background apps for it to notice you made a change.
If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:
    If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:

    reader-port "Yubico Yubikey NEO OTP+U2F+CCID 0"


    Yubikey NEO can hold keys up to 2048 bits and the Yubikey 4 can hold up to 4096 bits - that's MOAR bits! However, you might find yourself with a 4096 bit key that is too big for the Yubikey NEO. Lots of folks believe this is a limitation of the NEO that sucks and is unacceptable. Since I'm using Keybase and starting with a 4096 bit key, one solution is to make separate 2048 bit subkeys for Authentication and Signing, etc.

    From the command line, edit your keys then "addkey"

    > gpg --edit-key <[email protected]>

    You'll make a 2048 bit Signing key and you'll want to decide if it ever expires. If it never does, also make a revocation certificate so you can revoke it at some future point.

    gpg> addkey
    
    Please select what kind of key you want:
    (3) DSA (sign only)
    (4) RSA (sign only)
    (5) Elgamal (encrypt only)
    (6) RSA (encrypt only)
    Your selection? 4
    RSA keys may be between 1024 and 4096 bits long.
    What keysize do you want? (2048)
    Requested keysize is 2048 bits
    Please specify how long the key should be valid.
    0 = key does not expire
    <n> = key expires in n days
    <n>w = key expires in n weeks
    <n>m = key expires in n months
    <n>y = key expires in n years
    Key is valid for? (0)
    Key does not expire at all

    Save your changes, and then export the keys. You can do that with Kleopatra or with the command line:

gpg --export-secret-keys --armor KEYID

Here's a GUI view. I have my main 4096 bit key and some 2048 bit subkeys for Signing or Encryption, etc. Make as many as you like.

    image

    LEVEL SET - It will be the public version of the 2048 bit Signing Key that we'll tell GitHub about and we'll put the private part on the YubiKey, acting as a Smart Card.

    Move the signing subkey over to the YubiKey

Now I'm going to take my keychain here, select the signing key (note the ASTERISK after I type "key 1"), then "keytocard" to move/store it in the YubiKey's SmartCard Signature slot. I'm using my email as a way to get to my key, but if your email is used in multiple keys you'll want to use the unique Key Id/Signature.

    > gpg --edit-key [email protected]
    

    gpg (GnuPG) 2.2.6; Copyright (C) 2018 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    sec rsa4096/MAINKEY
    created: 2015-02-09 expires: never usage: SCEA
    trust: ultimate validity: ultimate
    ssb rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
    created: 2015-02-09 expires: 2023-02-07 usage: S
    card-no: 0006
    ssb rsa2048/KEY2
    created: 2015-02-09 expires: 2023-02-07 usage: E
    [ultimate] (1). keybase.io/shanselman <[email protected]>
    [ultimate] (2) Scott Hanselman <[email protected]>
    gpg> toggle
    gpg> key 1

    sec rsa4096/MAINKEY
    created: 2015-02-09 expires: never usage: SCEA
    trust: ultimate validity: ultimate
    ssb* rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
    created: 2015-02-09 expires: 2023-02-07 usage: S
    card-no: 0006
    ssb rsa2048/KEY2
    created: 2015-02-09 expires: 2023-02-07 usage: E
    [ultimate] (1). keybase.io/shanselman <[email protected]>
    [ultimate] (2) Scott Hanselman <[email protected]>

    gpg> keytocard
    Please select where to store the key:
    (1) Signature key
    (3) Authentication key
    Your selection? 1

If you're storing things on your Smart Card, it should have a PIN to protect it. Also, make sure you have a backup of your primary key (if you like) because keytocard is a destructive action.

    Have you set up PIN numbers for your Smart Card?

    There's a PIN and an Admin PIN. The Admin PIN is the longer one. The default admin PIN is usually ‘12345678’ and the default PIN is usually ‘123456’. You'll want to set these up with either the Kleopatra GUI "Tools | Manage Smart Cards" or the gpg command line:

    >gpg --card-edit
    
    gpg/card> admin
    Admin commands are allowed
    gpg/card> passwd
    *FOLLOW THE PROMPTS TO SET PINS, BOTH ADMIN AND STANDARD*

    Tell Git about your Signing Key Globally

    Be sure to tell Git on your machine some important configuration info like your signing key, but also WHERE the gpg.exe is. This is important because git ships its own older local copy of gpg.exe and you installed a newer one!

    git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
    
    git config --global commit.gpgsign true
    git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

    If you don't want to set ALL commits to signed, you can skip the commit.gpgsign=true and just include -S as you commit your code:

git commit -S -m "your commit message"

    Test that you can sign things

If you are running Kleopatra (the noob Windows GUI), when you run gpg --card-status you'll notice the cert will turn boldface and get marked as certified.

    The goal here is for you to make sure GPG for Windows knows that there's a private key on the smart card, and associates a signing Key ID with that private key so when Git wants to sign a commit, you'll get a Smart Card PIN Prompt.

Advanced: Make SubKeys for individual things so that they might later be revoked without torching your main private key. Using the Kleopatra tool from GPG for Windows you can explore the keys and get their IDs. You'll use those Subkey IDs in your git config for your signingkey.

    At this point things should look kinda like this in the Kleopatra GUI:

    Multiple PGP Sub keys

    Make sure to prove you can sign something by making a text file and signing it. If you get a Smart Card prompt (assuming a YubiKey) and a larger .gpg file appears, you're cool.

    > gpg --sign .\quicktest.txt
    
    > dir quic*

    Mode LastWriteTime Length Name
    ---- ------------- ------ ----
    -a---- 4/18/2018 3:29 PM 9 quicktest.txt
    -a---- 4/18/2018 3:38 PM 360 quicktest.txt.gpg

    Now, go up into GitHub to https://github.com/settings/keys at the bottom. Remember that's GPG Keys, not SSH Keys. Make a new one and paste in your public signing key or subkey.

    Note the KeyID (or the SubKey ID) and remember that one of them (either the signing one or the primary one) should be the ID you used when you set up user.signingkey in git above.

    GPG Keys in GitHub

    The most important thing is that:

    • the email address associated with the GPG Key
    • is the same as the email address GitHub has verified for you
    • is the same as the email in the Git Commit

    If not, double check your email addresses and make sure they are the same everywhere.

    Try a signed commit

    If pressing enter pops a PIN Dialog then you're getting somewhere!

    Please unlock the card

    Commit and push and go over to GitHub and see if your commit is Verified or Unverified. Unverified means that the commit was signed but either had an email GitHub had never seen OR that you forgot to tell GitHub about your signing public key.

    Signed Verified Git Commits

    Yay!
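You can also sanity-check a signature locally before you push. This asks Git to show the GPG verification details for the most recent commit:

git log --show-signature -1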

Setting up a second (or third) machine

    Once you've told Git about your signing key and you've got your signing key stored in your YubiKey, you'll likely want to set up on another machine.

    • Install the Yubikey SmartCard Mini Driver (may be optional)
    • Install GPG for Windows
      • gpg --card-status
  • Import your public key. If I'm setting up signing on another machine, I can import my PUBLIC certificates like this or graphically in Kleopatra.
        >gpg --import "keybase public key.asc"
        
        gpg: key *KEYID*: "keybase.io/shanselman <[email protected]>" not changed
        gpg: Total number processed: 1
        gpg: unchanged: 1

        You may also want to run gpg --expert --edit-key *KEYID* and type "trust" to certify your key as someone (yourself) that you trust.

    • Install Git (I assume you did this) and configure GPG
      • git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
      • git config --global commit.gpgsign true
      • git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY
    • Sign something with "gpg --sign" to test
    • Do a test commit.

    Finally, feel superior for 8 minutes, then realize you're really just lucky because you just followed the blog post of someone who ALSO has no clue, then go help a co-worker because this is TOO HARD.


    Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



    © 2018 Scott Hanselman. All rights reserved.
         

    Automatic Unit Testing in .NET Core plus Code Coverage in Visual Studio Code


    I was talking to Toni Edward Solarin on Skype yesterday about his open source spike (early days) of Code Coverage for .NET Core called "coverlet." There's a few options out there for cobbling together .NET Core Code Coverage but I wanted to see if I could use the lightest tools I could find and make a "complete" solution for Visual Studio Code that would work for .NET Core cross platform. I put my own living spike of a project up on GitHub.

Now, keeping in mind that Toni's project is just getting started and (as of the time of this writing) currently supports line and method coverage with branch coverage in progress, this is still a VERY compelling developer experience.

    Using VS Code, Coverlet, xUnit, plus these Visual Studio Code extensions

    Here's what we came up with.

    Auto testing, code coverage, line coloring, test explorers, all in VS Code

    There's a lot going on here but take a moment and absorb the screenshot of VS Code above.

    • Our test project is using xunit and the xunit runner that integrates with .NET Core as expected.
      • That means we can just "dotnet test" and it'll build and run tests.
• Added coverlet, which integrates with MSBuild and automatically runs during "dotnet test" if you pass "/p:CollectCoverage=true"
  • (I think this command line switch should be more like "--coverage" but there may be an MSBuild limitation here.)

I'm interested in "The Developer's Inner Loop." That means I want to have my tests open, my code open, and as I'm typing I want the solution to build, run tests, and update code coverage automatically the way Visual Studio proper does auto-testing, but in a more Rube Goldbergian way. We're close with this setup, although it's a little slow.

Coverlet can produce opencover, lcov, or json files as its output. You can then generate detailed reports from this. There is a language agnostic VS Code Extension called Coverage Gutters that can read in lcov files and others and highlight line gutters with red, yellow, green to show test coverage. Those lcov files look like this, showing file names, line numbers, and hit counts.

    SF:C:\github\hanselminutes-core\hanselminutes.core\Constants.cs
    DA:3,0
    end_of_record
    SF:C:\github\hanselminutes-core\hanselminutes.core\MarkdownTagHelper.cs
    DA:21,5
    DA:23,5
    DA:49,5

    I should be able to pick the coverage file manually with the extension, but due to a small bug, it's easier to just tell Coverlet to generate a specific file name in a specific format.

    dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov.info .\my.tests

    The lcov.info file is then watched by the VS Code Coverage Gutters extension, and the gutters update as the file changes if you click "Watch" in the VS Code Status Bar.

    You can take it even further if you add "dotnet watch test" which will compile and re-run tests if code changes:

    dotnet watch --project .\my.tests test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov.info 

    I can run "WatchTests.cmd" in another terminal, or within the VS Code integrated terminal.

    tests automatically running as code changes

    NOTE: If you're doing code coverage you'll want to ensure your tests and tested assembly are NOT the same file. You might be able to get it to work but it's easier to keep things separate.
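
    Concretely, that means a layout along these lines (the names are placeholders), where the tests live in their own project that references the code under test:

    my.project/
        my.project.csproj      <- the code being covered
    my.tests/
        my.tests.csproj        <- references my.project; xunit and coverlet go here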

    Next, add in the totally underappreciated .NET Core Test Explorer extension (this should have hundreds of thousands of downloads - it's criminal) to get this nice Test Explorer pane:

    A Test Explorer tree view in VS Code for .NET Core projects

    Even better, .NET Test Explorer lights up some "code lens" style interfaces over each test as well as a green checkmark for passing tests. Having "debug test" available for .NET Core is an absolute joy.

    Check out "run test" and "debug test"

    Finally we make some specific improvements to the .vscode/tasks.json file that drives much of VS Code's experience with our app. The "build" label is standard, but note both the custom "test" and "test with coverage" labels, as well as the added group with kind "test":

    {
        "version": "2.0.0",
        "tasks": [
            {
                "label": "build",
                "command": "dotnet",
                "type": "process",
                "args": [
                    "build",
                    "${workspaceFolder}/hanselminutes.core.tests/hanselminutes.core.tests.csproj"
                ],
                "problemMatcher": "$msCompile",
                "group": {
                    "kind": "build",
                    "isDefault": true
                }
            },
            {
                "label": "test",
                "command": "dotnet",
                "type": "process",
                "args": [
                    "test",
                    "${workspaceFolder}/hanselminutes.core.tests/hanselminutes.core.tests.csproj"
                ],
                "problemMatcher": "$msCompile",
                "group": {
                    "kind": "test",
                    "isDefault": true
                }
            },
            {
                "label": "test with coverage",
                "command": "dotnet",
                "type": "process",
                "args": [
                    "test",
                    "/p:CollectCoverage=true",
                    "/p:CoverletOutputFormat=lcov",
                    "/p:CoverletOutput=./lcov.info",
                    "${workspaceFolder}/hanselminutes.core.tests/hanselminutes.core.tests.csproj"
                ],
                "problemMatcher": "$msCompile",
                "group": {
                    "kind": "test",
                    "isDefault": true
                }
            }
        ]
    }
    

    This lets VS Code know what's for building and what's for testing, so if I use the Command Palette to "Run Test" then I'll get this dropdown that lets me run tests and/or update coverage manually if I don't want the autowatch stuff going.

    Test or Test with Coverage
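
    As an aside, if the Command Palette feels like too many keystrokes, VS Code also lets you bind a task directly to a key in keybindings.json. This is just a sketch (the key chord is arbitrary and this isn't something in my repo), but the "args" value matches the task label from the tasks.json above:

    {
        "key": "ctrl+shift+r",
        "command": "workbench.action.tasks.runTask",
        "args": "test with coverage"
    }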

    Again, all this is just getting started but I've applied it to my Podcast Site that I'm currently rewriting and the experience is very smooth!

    Here's a call to action for you! Toni is just getting started on Coverlet and I'm sure he'd love some help. Head over to the Coverlet github and don't just file issues and complain! This is an opportunity for you to get to know the deep internals of .NET and create something cool for the larger community.

    What are your thoughts?


    Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



    © 2018 Scott Hanselman. All rights reserved.
         

    Turn your Raspberry Pi into a portable Touchscreen Tablet with SunFounder’s RasPad

    I was very fortunate to get a preview version of the “RasPad” from SunFounder. Check it out at https://raspad.sunfounder.com/ and at the time of this writing they have a Kickstarter I’m backing! I’ve written a lot about Raspberry Pis and the cool pro…

    RasPadI was very fortunate to get a preview version of the "RasPad" from SunFounder. Check it out at https://raspad.sunfounder.com/ and at the time of this writing they have a Kickstarter I'm backing!

    I've written a lot about Raspberry Pis and the cool projects you can do with them. My now-10 and 12 year olds love making stuff with Raspberry Pis and we have at least a dozen of them around the house. A few are portable arcades (some quite tiny PiArcades), one runs PiMusicBox and is a streaming radio, and I have a few myself in a Kubernetes Cluster.

    I've built Raspberry Pi Cars with SunFounder parts, so they sent me an early evaluation version of their "RasPad." I was familiar with the general idea as I'd tried (and failed) to make something like it with their 10" Touchscreen LCD for Raspberry Pi.

    At its heart, the RasPad is quite elegant and simple. It's a housing for your Raspberry Pi that includes a battery for portable use along with an integrated touchscreen LCD. However, it's the little details where it shines.

    RasPad - Raspberry Pi Touchscreen

    It's not meant to be an iPad. It's not trying. It's thick on one end, and beveled to an angle. You put your Raspberry Pi inside the back corner and it sits nicely on the plastic posts without screws. Power and HDMI are routed inside with cables, then it's one button to turn it on. There's an included power supply as well as batteries to run the Pi and screen for a few hours while portable.

    RasPad ports are extensive

    I've found with my 10 year old that this neat, organized little tablet mode makes the Pi more accessible and interesting to him - as opposed to the usual mess of wires and bare circuit boards we usually have on my workbench. I could see a fleet of RasPads in a classroom environment being far more engaging than just "raw" Pis on a table.

    The back of the RasPad has a slot where a GPIO ribbon cable can come out to a breakout board:

    GPIO slot is convenient

    At this point you can do all the same cool hardware projects you can do with a Raspberry Pi, with all the wires, power, touchscreen, ports, and everything nice and sanitary.

    The inside hatch is flexible enough for other boards as well:

    Raspberry Pi or TinkerBoard

    I asked my 10 year old what he wanted to make with the RasPad/Raspberry Pi and he said he wanted to make a "burglar alarm" for his bedroom. Pretty sure he just wants to keep the 12 year old out of his room.

    We started with a Logitech 930e USB webcam we had lying around. The Raspberry Pi can use lots of off-the-shelf, high-quality webcams without drivers, and the RasPad keeps all the USB ports exposed.

    Then we installed the "Motion" Project. It's on GitHub at https://github.com/Motion-Project/motion with:

    sudo apt-get install motion

    Then we edited /etc/motion/motion.conf with the nano editor (easier for kids than vim). You'll want to confirm the height and width. Smaller is easier on the Pi, but you can go big with 1280x720 if you like! We also set the target_dir to /tmp since motion's daemon doesn't have access to ~/.

    There are a number of events you can take action on, like "on_motion_detected." We just added a little Python script to let people know WE SEE YOU.

    It's also cool to set locate_motion_style to "redbox" so you can see WHERE motion was detected in a frame, and be sure to set stream_localhost to "off" so you can hit http://yourraspberrypiname:8081 to see the stream remotely!
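
    Pulling all those settings together, the relevant chunk of our /etc/motion/motion.conf looked something like this - the exact resolution and the script path here are placeholders, so adjust for your own camera and wherever your script lives:

    width 1280
    height 720
    target_dir /tmp
    locate_motion_style redbox
    stream_localhost off
    on_motion_detected python /home/pi/getout.py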

    When motion is detected, the 10 year old's little Python script launches:

    GET OUT OF MY ROOM

    And as a bonus, here is the 10 year old trying to sneak into the room. Can you spot him? (The camera did)

    IMG_3389

    What would you build with a RaspberryPi Tablet?

    BTW, there's a Community Build of the .NET Core SDK for Raspberry Pi!


    Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



    © 2018 Scott Hanselman. All rights reserved.
         

    Cross-platform GUIs with open source .NET using Eto.Forms

    This is one of those “Did you know you could do THAT?” Many folks have figured out that C#/F#/.NET is cross-platform and open-source and runs on basically any operating system. People are using it to create microservices, web sites, and web APIs all o…

    Amazing Cross Platform ANSI art editorThis is one of those "Did you know you could do THAT?" posts. Many folks have figured out that C#/F#/.NET is cross-platform and open-source and runs on basically any operating system. People are using it to create microservices, web sites, and web APIs all over. Not to mention iPhone/Android apps with Xamarin and video games with Unity and MonoGame.

    But what about cross platform UIs?

    While it's not officially supported by Microsoft, you can do some awesome stuff...which is how Open Source is supposed to work! Remember that there's a family of .NET Runtimes now: there's the .NET Framework on Windows, there's xplat .NET Core, and there's xplat Mono.

    Eto.Forms has been in development since 2012 and is a cross-platform framework for creating GUI (Graphical User Interface, natch) applications with .NET that run across multiple platforms using their native toolkit. Not like Java in the 90s with custom painted buttons on canvas.

    It's being used for real stuff! In fact, PabloDraw is an Ansi/Ascii text editor that you didn't know you needed in your life. But you do. It runs on Windows, Mac, and Linux and was written using Eto.Forms but has a native UI on each platform. Be sure to check out Curtis Wensley's Twitter account for some cool examples of what PabloDraw and Eto.Forms can do! Under the hood, Eto.Forms maps to each platform's native toolkit:

    • OS X: MonoMac or Xamarin.Mac (and also iOS via Xamarin)
    • Linux: GTK# 2 or 3
    • Windows: Windows Forms (using GDI or Direct2D) or WPF

    Here's an example Hello World. Note that it's not just Code First; you can also use Xaml, or even Json (.jeto), to lay out your forms!

    using Eto.Forms;
    using Eto.Drawing;

    public class MyForm : Form
    {
        public MyForm()
        {
            Title = "My Cross-Platform App";
            ClientSize = new Size(200, 200);
            Content = new Label { Text = "Hello World!" };
        }

        [STAThread]
        static void Main()
        {
            new Application().Run(new MyForm());
        }
    }

    Or I can just File | New Project with their Visual Studio Extension. You should definitely give it a try.

    image

    Even on the same platform (Windows in the example below), Eto.Forms can amazingly use whatever native controls you prefer. Here's a great example zip that has precompiled test apps.

    WinForms, WPF, and Direct2D apps

    Once you've installed a new version of Mono on Ubuntu, you can run the same sample as a Gtk3 app, as I'm doing here in a VM. AMAZING.

    image

    There are a number of other example applications out in the wild using Eto.Forms.

    There's so much cool stuff happening in open source .NET right now, and Eto.Forms is actively looking for help. Go check out their excellent Wiki, read the Tutorials, and maybe get involved!


    Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



    © 2017 Scott Hanselman. All rights reserved.