Remove Inbound NAT rules from an Azure Virtual machine scale set

One of our customers runs on Azure Service Fabric (SF), which is backed by a virtual machine scale set (VMSS). We had a connectivity problem recently and one of the developers enabled remote debugging on the SF cluster to see what went wrong. Little did he know that (among other things) this opens a large number of additional TCP ports on the cluster load balancers to allow debuggers to attach. In the portal, it looks like this:

This is an undesirable situation because:

  1. the attack surface of the SF cluster has increased due to all these open ports (security perspective) and
  2. the ARM template we use to deploy our SF cluster no longer works (maintenance perspective).

The reason behind the latter is that Azure does not allow the removal of Inbound NAT pools and NAT rules while they are in use by a VMSS. So if you deploy an ARM template that does not have all the Inbound NAT rules that you also see in the Azure portal, you get an error message:

Cannot remove inbound nat pool DebuggerListenerNatPool-8ypmdj7pp8 from load balancer since it is in use by virtual machine scale set /subscriptions/<subscription id>/resourceGroups/ClusterResourceGroupDEV/providers/Microsoft.Compute/virtualMachineScaleSets/Backend

If you try to remove an Inbound NAT rule via the portal you get an even nicer message:

Failed to delete inbound NAT rule 'DebuggerListenerNatPool-2zqjmhjv3q.0'. Error: Adding or updating NAT Rules when NAT pool is present on loadbalancer /subscriptions/<subscription id>/resourceGroups/ClusterResourceGroupDEV/providers/Microsoft.Network/loadBalancers/LB-sfdev-Backend is not supported. To modify the load balancer, pass in all NAT rules unchanged or remove the LoadBalancerInboundNatRules property from your PUT request

And the portal actually warns you that this is not yet supported, so we could have known beforehand:

So what if you actually do want to remove these Inbound NAT rules? Or you want to remove the default NAT rules that allow RDP access to your SF cluster VMs? Googling around, I couldn’t really find a solution, only people with the same question, so I thought: let’s find a way to do this.

The error messages provide a valuable clue: you cannot modify NAT rules because they are in use by the VMSS. So let’s check Azure Resource Explorer to see if we can find a link between the VMSS and these NAT pools. This link exists, and here they are:

I selected our Frontend VMSS and scrolled down to the network profile. There we have four Inbound NAT pools that you can simply remove using Azure Resource Explorer, so it should look like this:

So now the link between the VMSS and the NAT pools no longer exists. We can now navigate to the load balancer and remove the NAT pools there as well:

We should now be in a situation where no unwanted NAT pools remain. This means we can redeploy our SF cluster ARM template and everything is back to normal.

Note that a similar approach can be used for adding/updating/deleting NAT rules. The only thing you have to do is remove the link between the VMSS and the corresponding NAT pool, make your changes and reapply the link.

Centralized configuration for .NET Core with Spring Cloud Config Server and Steeltoe

Spring Cloud Config server is a Java-based application that provides support for externalized configuration in a distributed system. A number of external configuration sources are supported, such as the local file system, Git, or HashiCorp Vault. For this post, I use a GitHub repo as my configuration source. Centralizing configuration in a source control repo has several advantages, especially in a microservices architecture:

  • Config is externalized and separated from code. This ensures that the same code base (or build artifact) can be deployed to multiple different environments. Code should never change between deploys.
  • Configuration for multiple microservices is centralized in one location, which helps in managing configuration for the application as a whole.
  • Configuration itself is versioned and has a history.

Spring Cloud Config server offers a simple API for retrieving app configuration settings. Client libraries exist for numerous languages. For C#, Steeltoe offers integration with config server. It has a lot more to offer, but that’s a topic for future posts.

Source code for this post can be found here.

The application

I’m working on a small demo application that should, when finished, show most Steeltoe features. In this first post, I focus on configuration. The demo app itself is based on an API owned by the Dutch Ministry of Foreign Affairs that issues travel advisories. This API is divided into two parts:

  • a paged list of countries where each country has a reference to a detailed country document
  • per country, a document with a detailed travel advisory (our southern neighbor Belgium, for example)

The application fetches and caches this data to present it to clients in a more accessible format. Furthermore, it should detect travel advisory changes so that it can notify clients of the API that the advisory for a specific country has changed.

The application starts with a periodic fetch of the list of countries. Fetching this list is implemented by the ‘world fetcher’ microservice. This service needs two configuration settings: the base URL to fetch the data from and the polling interval. So let’s see how to configure Spring Cloud Config server to deliver these settings.

Spring Cloud Config server configuration

First of all, we’re going to run Spring Cloud Config server locally. This is quite easy because someone has already packaged everything inside a Docker image: hyness/spring-cloud-config-server. So we can do a docker pull hyness/spring-cloud-config-server and then run the following command:

docker run -it \
    --name=spring-cloud-config-server \
    -p 8888:8888 \
    -e SPRING_CLOUD_CONFIG_SERVER_GIT_URI=https://github.com/rwwilden/steeltoe-demo-config \
    hyness/spring-cloud-config-server

So we give the container a nice name (spring-cloud-config-server), map port 8888, and specify an environment variable named SPRING_CLOUD_CONFIG_SERVER_GIT_URI. This should point to a Git repo with configuration that can be read by Spring Cloud Config server. For the moment, the only configuration file there is worldfetch.yaml:

worldfetch.yaml:
baseDataUri: "https://opendata.nederlandwereldwijd.nl/v1/sources/nederlandwereldwijd/infotypes/traveladvice"
fetchInterval: "00:05:00"

If you have started the Docker container, you can run curl http://localhost:8888/worldfetch/development and you get back a nice JSON response with the two configured values.

Steeltoe

So we have a running config server that serves configuration from a GitHub repo. How do we get this configuration inside our ASP.NET Core microservice? The answer is Steeltoe: a collection of libraries that lets a .NET app interface with a number of Spring Cloud components (like config server).

The first step is to configure the location of the config server. Since we’re still running only locally, this is http://localhost:8888, which we specify in appsettings.Development.json:

appsettings.Development.json:
{
  "spring": {
    "cloud": {
      "config": {
        "uri": "http://localhost:8888"
      }
    }
  },

The next step is adding the Steeltoe configuration provider in Program.cs:

Program.cs:
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Steeltoe.Extensions.Configuration.ConfigServer;

namespace worldfetch
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((webHostBuilderContext, configurationBuilder) => {
                    
                    var hostingEnvironment = webHostBuilderContext.HostingEnvironment;
                    configurationBuilder.AddConfigServer(hostingEnvironment.EnvironmentName);
                })
                .UseStartup<Startup>()
                .Build();
    }
}

Note that I use the ConfigureAppConfiguration method, introduced in ASP.NET Core 2.0 and well documented here. The call to AddConfigServer adds the config server as a configuration provider.

Next we need another configuration setting. Remember what we named our config file on GitHub: worldfetch.yaml. The Steeltoe configuration provider must know the name of our application so that it can collect the right configuration settings. We define this one in appsettings.json:

appsettings.json:
{
  "spring": {
    "application": {
      "name": "worldfetch"
    }
  },

The final step is to implement an options class to represent our settings so that we can inject them into other classes. This class is quite simple in our case because we have just two settings in worldfetch.yaml:

Lib/FetcherOptions.cs:
using System;

public class FetcherOptions
{
    public string BaseDataUri { get; set; }

    public TimeSpan FetchInterval { get; set; }
}

Configuration injection

Now all that is left to do is inject an IOptions<FetcherOptions> wherever we want to access the configuration settings from worldfetch.yaml, and we’re done: configuration through Spring Cloud Config server from a GitHub repo.
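
As a minimal sketch of what that wiring could look like (the CountryListFetcher consumer is hypothetical and only for illustration, and I assume the config server values end up at the root of the configuration so they bind directly to FetcherOptions):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    // Only the options-related registration is shown here.
    public void ConfigureServices(IServiceCollection services)
    {
        // baseDataUri and fetchInterval from worldfetch.yaml bind to the matching properties.
        services.Configure<FetcherOptions>(Configuration);
    }
}

// A hypothetical consumer of the settings.
public class CountryListFetcher
{
    private readonly FetcherOptions _options;

    public CountryListFetcher(IOptions<FetcherOptions> options) => _options = options.Value;

    // _options.BaseDataUri and _options.FetchInterval are available here.
}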

What’s next?

We now have a small ASP.NET Core application that is configured via Spring Cloud Config server and fetches data from some URL. Next time we’re going to run all this in ‘the cloud’!

Running a Windows Server Container on Azure

I was looking at options to run some PowerShell scripts in Azure and my first idea was: why not start a Windows Server Container with the right PowerShell modules and run the scripts there? It turns out there are better options for running PowerShell scripts in Azure (Azure Automation Runbooks), so I did not continue down this path. But this is really cool technology and I learned a few things, so I thought: let’s write this down.

Azure Container Instances

First of all, you can run Docker containers on Azure using Azure Container Instances (ACI). ACI is not a container orchestrator like Kubernetes (AKS), but it is ideal for quickly getting single containers up and running. The fastest way to run a container is through the Azure CLI, which you can open directly from the portal:

Running a container

Once you have started the CLI, type or paste the following commands:

az configure --defaults location=westeurope
az group create --name DockerTest
az container create \
    --resource-group DockerTest \
    --name winservercore \
    --image "microsoft/iis:nanoserver" \
    --ip-address public \
    --ports 80 \
    --os-type windows

The first command sets the default resource location to westeurope so you do not have to specify this for each command. The second command creates a resource group named DockerTest and the third command starts a simple Windows Server container with the Nano Server OS, running IIS.

You need to specify a number of parameters when creating a container:

  • the name of the resource group for your container resources
  • the name of the container group
  • the name of the Docker Hub image: microsoft/iis:nanoserver
  • create a public IP address
  • specify that the container should expose port 80
  • specify the OS type (ACI does not detect this automatically)

Once you have run these commands, you can check progress via:

az container list --resource-group DockerTest -o table

And once the container has been provisioned, you should get something like this:

The container has received a public IP address, in my case 52.233.138.192, and when we browse to it, we see the default IIS welcome page.

Tadaaa, your first Windows Server Container running on Azure.

Debug logging to console with .NET Core

I was struggling for about an hour to get debug logging to the console working in ASP.NET Core, so I thought I should write it down. I got tricked by the default appsettings.json and appsettings.Development.json that get generated when you run dotnet new. First, appsettings.json:

appsettings.json:
{
  "Logging": {
    "IncludeScopes": false,
    "Debug": {
      "LogLevel": {
        "Default": "Warning"
      }
    },
    "Console": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}

Pretty straightforward: default log levels for debug and console are set to Warning. And now appsettings.Development.json:

appsettings.Development.json:
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

The way I interpreted this, which is apparently wrong, is as follows: in a development environment the default log level is Debug, so if I call LogDebug, the message will appear on stdout. Well, it does not… (otherwise I would not have written this post)
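
To make this concrete, here is the kind of call I expected to show up on the console when running in Development (a hypothetical controller, purely for illustration):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger) => _logger = logger;

    [HttpGet]
    public IActionResult Get()
    {
        // With the default generated settings, this does NOT appear on the console.
        _logger.LogDebug("Fetched values");
        return Ok(new[] { "value1", "value2" });
    }
}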

I think this is counter-intuitive, especially since this is the default that gets generated when you run dotnet new. Why have this default when it does not result in debug logging? And what does this default accomplish anyhow?

What you need to do in appsettings.Development.json is explicitly configure console logging and set the desired logging levels:

appsettings.Development.json:
{
  "Logging": {
    "IncludeScopes": true,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    },
    "Console": {
      "LogLevel": {
        "Default": "Debug",
        "System": "Information",
        "Microsoft": "Information"
      }
    }
  }
}

I still do not quite understand what the Default entry directly under LogLevel does. The Console key refers to the console logging provider. There are a number of other logging providers, but there is no such thing as a ‘default logging provider’. After some more careful reading of the documentation, it appears that the default filter rule applies to ‘all other providers’: the providers that you do not explicitly specify in your appsettings.json or appsettings.Development.json files.

Now it begins to make sense, I guess: the two configuration files are merged and the most specific rule is selected. In the case of the settings files that are generated by default, this means that the Console rule with log level Warning is selected. You can override this by specifying another Console rule in appsettings.Development.json.
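
For reference, the wiring of these providers looks roughly like the sketch below. WebHost.CreateDefaultBuilder already does the equivalent in ASP.NET Core 2.0, so this is only to show where the Console and Debug keys come from, not something you need to add yourself:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging((context, logging) =>
            {
                // The "Logging" section drives the filter rules; the "Console" and "Debug"
                // keys in that section apply to the providers added below.
                logging.AddConfiguration(context.Configuration.GetSection("Logging"));
                logging.AddConsole();
                logging.AddDebug();
            })
            .UseStartup<Startup>()
            .Build();
}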

Using ASP.NET Core SignalR with Elm

I’m developing a smoke tests app in Go that tests a number of services (Redis, RabbitMQ, Single Sign-On, etc.) offered in the marketplace of a Cloud Foundry installation at one of our customers. These tests produce simple JSON output that signals what went wrong. Now the customer has asked for a dashboard so the entire organization can check on the health of the platform.

I took some time to come up with a good enough design for this and decided on the following:

  • The smoke tests app (Golang) pushes its results to RabbitMQ
  • An ASP.NET Core app listens to smoke test results and keeps track of state (the results themselves and when they were received)
  • A single page written in Elm that receives status updates via SignalR (web sockets)

Since I have never written anything in Elm and my knowledge of SignalR is a little outdated, I decided to start very simple: a SignalR hub that increments an int every five seconds and sends it to all clients. The number that’s received by each client is used to update an Elm view model. In the real world, the int will become the JSON document describing the results of the smoke tests and we will build a nice view for it; you get the idea.

All source code for this post can be found here.

The server side of things

First of all, what do things look like on the server and how do we build the application? It will be an ASP.NET Core app so we start with:

dotnet new web
dotnet add package Microsoft.AspNetCore.SignalR -v 1.0.0-alpha2-final

We create an empty ASP.NET Core website and add the latest version of SignalR. Next we need to configure SignalR in our Startup class:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace SmokeTestsDashboardServer
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR();
            services.AddSingleton<IHostedService, CounterHostedService>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseDefaultFiles();
            app.UseStaticFiles();
            app.UseSignalR(routes =>
            {
                routes.MapHub<SmokeHub>("smoke");
            });
        }
    }
}

The code speaks for itself, I guess. We add the SignalR dependencies to the services collection and configure a hub called SmokeHub, which can be reached from the client via the route /smoke.

In ConfigureServices you can also see that I add an IHostedService implementation: CounterHostedService. A hosted service is an object with a start and a stop method that is managed by the host. This means that when ASP.NET Core starts, it calls the hosted service’s start method, and when ASP.NET Core (gracefully) shuts down, it calls the stop method. In our case, we use it to start a very simple scheduler that increments an integer every five seconds and sends it to all SignalR clients. Here are two posts on implementing your own IHostedService.
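
The actual implementation is in the linked repo; as a rough sketch (member names assumed here, not taken from the repo), a hosted service that pushes a counter to all SignalR clients every five seconds could look like this:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

namespace SmokeTestsDashboardServer
{
    public class CounterHostedService : IHostedService, IDisposable
    {
        private readonly IHubContext<SmokeHub> _hubContext;
        private Timer _timer;
        private int _counter;

        public CounterHostedService(IHubContext<SmokeHub> hubContext)
        {
            _hubContext = hubContext;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            // Tick immediately and then every five seconds.
            _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
            return Task.CompletedTask;
        }

        private void Tick(object state)
        {
            var count = Interlocked.Increment(ref _counter);
            // Raises the 'send' event that the JavaScript/Elm client listens to.
            _hubContext.Clients.All.InvokeAsync("Send", count);
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            _timer?.Change(Timeout.Infinite, 0);
            return Task.CompletedTask;
        }

        public void Dispose() => _timer?.Dispose();
    }
}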

The client side of things

First of all, we need the SignalR client library. You can get it via npm. I added it in the wwwroot/js/lib folder.

Now let’s take a look at the Elm code.

port module Main exposing (..)

import Html exposing (Html, div, button, text, program)


-- MODEL
type alias Model = Int

init : ( Model, Cmd Msg )
init = ( 1, Cmd.none )


-- MESSAGES
type Msg = Counter Int


-- VIEW
view : Model -> Html Msg
view model = div [] [ text (toString model) ]


-- UPDATE
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Counter count -> ( count, Cmd.none )


-- SUBSCRIPTIONS
port updates : (Int -> msg) -> Sub msg

subscriptions : Model -> Sub Msg
subscriptions model = updates Counter


-- MAIN
main : Program Never Model Msg
main =
    program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }

Let’s dissect the code:

  • MODEL: we have a model, which is an Int that we initialize to 1
  • MESSAGES: we have one message type, Counter, which carries an Int
  • VIEW: our view takes the model and returns some very simple HTML, showing the model value
  • UPDATE: when we receive a Counter message, we simply return the new count
  • SUBSCRIPTIONS: we subscribe to counter updates

The question is: where do we receive counter updates from? Elm is a pure functional language. This means that the output of every function in Elm depends only on its arguments, regardless of global and/or local state. Direct communication with JavaScript from Elm would break this, so it is not allowed. Instead, all interop with the outside world is done through ports.

If we check the Elm code again, you see that we declare our module with the keyword port. In the SUBSCRIPTIONS section we declare a port, updates, that listens to counter updates from JavaScript. So now we can plug it all together in our index.html file:

wwwroot/index.html:
<html>
<head>
    <script type="text/javascript" src="js/lib/signalr-client-1.0.0-alpha2-final.js"></script>
</head>
<body>
    <div id="main"></div>
    <script type="text/javascript" src="js/main.js"></script>
    <script>
        var node = document.getElementById('main');
        var app = Elm.Main.embed(node);

        const logger = new signalR.ConsoleLogger(signalR.LogLevel.Information);
        const smokeHub = new signalR.HttpConnection(`http://${document.location.host}/smoke`, { logger: logger });
        const smokeConn = new signalR.HubConnection(smokeHub, logger);

        smokeConn.onClosed = e => {
            console.log('Connection closed');
        };

        smokeConn.on('send', data => {
            console.log(data);
            app.ports.updates.send(data);
        });

        smokeConn.start().then(() => smokeConn.invoke('send', 42));

    </script>
</body>
</html>

Most of the code speaks for itself. In the handler for the 'send' event we invoke the updates port in our Elm app to pass the updated counter to Elm. The invoke call after smokeConn.start() is a simple test to ensure that we can also send messages from the client to the SignalR hub.

For completeness’ sake, here is the code for the SmokeHub:

lib/SmokeHub.cs:
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

namespace SmokeTestsDashboardServer
{
    public class SmokeHub : Hub
    {
        public Task Send(int counter)
        {
            return Clients.All.InvokeAsync("Send", counter);
        }
    }
}

Note that the Send method on the hub is what JavaScript clients invoke (the smokeConn.invoke('send', 42) call above). It is not the same as the 'send' handler on the client, which is called when the server notifies all clients of a counter update.