Prevent double login for Azure Resource Manager and Azure AD in PowerShell


UPDATE (2018-02-12): The method described below does not work, unfortunately. Connect-AzureAD runs without error but the AD context you get is not authorized to perform AD operations. I get errors that look like this:

Get-AzureADApplication : Error occurred while executing GetApplications 
Code: Authentication_MissingOrMalformed
Message: Access Token missing or malformed.
RequestId: 1f15adc8-1cf5-443b-b78d-88db66701506
DateTimeStamp: Mon, 12 Feb 2018 16:43:42 GMT
HttpStatusCode: Unauthorized
HttpStatusDescription: Unauthorized
HttpResponseStatus: Completed

The access token is missing or malformed. I’m trying to figure out what goes wrong since an access token is actually provided (so it cannot be missing). But ‘malformed’ is also strange because this is the token I get back from Add-AzureRmAccount.

Checking the actual access token in jwt.io proves that it isn’t malformed. However, the token audience is https://management.core.windows.net/. This is probably not the audience that is expected when authenticating against Azure AD (unfortunately we cannot inspect that token). So that is probably why the token is considered ‘malformed’.
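
If you want to check the audience claim yourself without pasting the token into jwt.io, you can decode the JWT payload locally. A minimal PowerShell sketch (assuming $accessToken holds the token from Add-AzureRmAccount’s token cache):

$payload = $accessToken.Split('.')[1]    # the second part of a JWT is the payload

# Base64url decode: swap the URL-safe characters and restore padding.
$payload = $payload.Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) {
    2 { $payload += '==' }
    3 { $payload += '=' }
}

$claims = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String($payload)) | ConvertFrom-Json
$claims.aud    # the token audience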

This means I’m stuck with a double login when using both Add-AzureRmAccount and Connect-AzureAD in one PowerShell script… If someone knows a solution, please leave a comment :)


UPDATE (2018-02-13): I also found out that it doesn’t really matter what you pass as the access token to Connect-AzureAD. The following runs without error: Connect-AzureAD -TenantId $tenantId -AadAccessToken "this is no token" -AccountId $accountId. Errors happen only later, when you try to run operations against Azure AD.


I had to write a PowerShell script that connected to Azure Resource Manager via Add-AzureRmAccount and to Azure AD via Connect-AzureAD. If you write your script like this:

Add-AzureRmAccount -SubscriptionId $subscriptionId
Connect-AzureAD -TenantId $tenantId

you are presented twice with this login dialog:

This is of course annoying for users of my script so I set out to improve this. The end result looks like this:

$rmAccount = Add-AzureRmAccount -SubscriptionId $subscriptionId
$tenantId = (Get-AzureRmSubscription -SubscriptionId $subscriptionId).TenantId
$tokenCache = $rmAccount.Context.TokenCache
$cachedTokens = $tokenCache.ReadItems() `
        | where { $_.TenantId -eq $tenantId } `
        | Sort-Object -Property ExpiresOn -Descending
Connect-AzureAD -TenantId $tenantId `
                -AadAccessToken $cachedTokens[0].AccessToken `
                -AccountId $rmAccount.Context.Account.Id

Let’s dissect this:

  • Log in to Azure Resource Manager and store the result.
  • Get the tenant id from the subscription.
  • Get the token cache property from the ARM account. This is an object of type Microsoft.IdentityModel.Clients.ActiveDirectory.TokenCache from the ADAL library.
  • We can retrieve the access token we want through the ReadItems() method, filtering on tenant id and getting the most recent one.
  • And finally we can use the access token to connect to Azure AD.

And that’s how we prevent a double login in PowerShell scripts that use both Add-AzureRmAccount and Connect-AzureAD.

Remove Inbound NAT rules from an Azure Virtual machine scale set

One of our customers runs on Azure Service Fabric (SF) which is backed by a Virtual machine scale set (VMSS). We had a connectivity problem recently and one of the developers enabled remote debugging on the SF cluster to see what went wrong. Little did he know that (among other things) a large number of additional TCP ports are opened on the cluster load balancers to allow debuggers to attach. In the portal, this looks like this:

This is an undesirable situation because:

  1. the attack surface of the SF cluster has increased due to all these open ports (security perspective) and
  2. the ARM template we use to deploy our SF cluster no longer works (maintenance perspective).

The reason behind the latter is that Azure does not allow the removal of Inbound NAT pools and NAT rules while they are in use by a VMSS. So if you deploy an ARM template that does not have all the Inbound NAT rules that you also see in the Azure portal, you get an error message:

Cannot remove inbound nat pool DebuggerListenerNatPool-8ypmdj7pp8 from load balancer since it is in use by virtual machine scale set /subscriptions/<subscription id>/resourceGroups/ClusterResourceGroupDEV/providers/Microsoft.Compute/virtualMachineScaleSets/Backend

If you try to remove an Inbound NAT rule via the portal you get an even nicer message:

Failed to delete inbound NAT rule 'DebuggerListenerNatPool-2zqjmhjv3q.0'. Error: Adding or updating NAT Rules when NAT pool is present on loadbalancer /subscriptions/<subscription id>/resourceGroups/ClusterResourceGroupDEV/providers/Microsoft.Network/loadBalancers/LB-sfdev-Backend is not supported. To modify the load balancer, pass in all NAT rules unchanged or remove the LoadBalancerInboundNatRules property from your PUT request

And the portal actually warns you that this is not yet supported so we could have known beforehand:

So what if you actually do want to remove these Inbound NAT rules? Or you want to remove the default NAT rules that allow RDP access to your SF cluster VMs? Googling around, I couldn’t really find a solution, only people with the same question, so I thought: let’s find a way to do this.

The error messages provide a valuable clue: you cannot modify NAT rules because they are in use by the VMSS. So let’s check Azure Resource Explorer to see if we can find a link between the VMSS and these NAT pools. This link exists and here they are:

I selected our Frontend VMSS and scrolled down to the network profile. There we have four Inbound NAT pools that you can simply remove using Azure Resource Explorer, so that it looks like this:

So now the link between the VMSS and the NAT pools no longer exists. We can now navigate to the load balancer and remove the NAT pools there as well:

We should now be in a situation where there are no longer any NAT pools we do not want. This means we can redeploy our SF cluster ARM template again and everything is back to normal.

Note that a similar approach can be used for adding/updating/deleting NAT rules. The only thing you have to do is remove the link between the VMSS and the corresponding NAT pool, make your changes and reapply the link.
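
If you prefer scripting over clicking through Azure Resource Explorer, something along these lines should work with the AzureRM PowerShell module (a sketch only; the resource and pool names come from the error messages above, and depending on your upgrade policy you may still need to upgrade existing instances afterwards):

# Load the VMSS model and find the references to the unwanted NAT pools.
$vmss = Get-AzureRmVmss -ResourceGroupName ClusterResourceGroupDEV -VMScaleSetName Backend
$ipConfig = $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0]

$poolsToRemove = $ipConfig.LoadBalancerInboundNatPools |
    Where-Object { $_.Id -like '*DebuggerListenerNatPool*' }
foreach ($pool in $poolsToRemove) {
    $ipConfig.LoadBalancerInboundNatPools.Remove($pool) | Out-Null
}

# Push the updated model back; the link between VMSS and NAT pool is now gone,
# after which the NAT pool itself can be removed from the load balancer.
Update-AzureRmVmss -ResourceGroupName ClusterResourceGroupDEV `
                   -VMScaleSetName Backend `
                   -VirtualMachineScaleSet $vmss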

Centralized configuration for .NET Core with Spring Cloud Config Server and Steeltoe

Spring Cloud Config server is a Java-based application that provides support for externalized configuration in a distributed system. A number of external configuration sources are supported, like the local file system, Git or HashiCorp Vault. For this post, I use a GitHub repo as my configuration source. Centralizing configuration in a source control repo has several advantages, especially in a microservices architecture:

  • Config is externalized and separated from code. This ensures that the same code base (or build artifact) can be deployed to multiple different environments. Code should never change between deploys.
  • Configuration for multiple microservices is centralized in one location, which helps in managing configuration for the application as a whole.
  • Configuration itself is versioned and has a history.

Spring Cloud Config server offers a simple API for retrieving app configuration settings. Client libraries exist for numerous languages. For C#, Steeltoe offers integration with config server. It has a lot more to offer but that’s a topic for future posts.

Source code for this post can be found here.

The application

I’m working on a small demo application that should, when finished, show most Steeltoe features. In this first post, I focus on configuration. The demo app itself is based on an API owned by the Dutch Ministry of Foreign Affairs that issues travel advisories. This API is divided into two parts:

  • a paged list of countries where each country has a reference to a detailed country document
  • per country a document with a detailed travel advisory (our southern neighbors Belgium, for example)

The application fetches and caches this data to present it to clients in a more accessible format. Furthermore, it should detect travel advisory changes to be able to notify clients of the API that a travel advisory for a specific country has changed.

The application starts with a periodic fetch of the list of countries. Fetching this list is implemented by the ‘world fetcher’ microservice. This service needs two configuration settings: the base URL to fetch the data from and the polling interval. So let’s see how to configure Spring Cloud Config server to deliver these settings.

Spring Cloud Config server configuration

First of all, we’re going to run Spring Cloud Config server locally. This is quite easy because someone has already packaged everything in a Docker image: hyness/spring-cloud-config-server. So we can do a docker pull hyness/spring-cloud-config-server and then run the following command:

docker run -it \
    --name=spring-cloud-config-server \
    -p 8888:8888 \
    -e SPRING_CLOUD_CONFIG_SERVER_GIT_URI=https://github.com/rwwilden/steeltoe-demo-config \
    hyness/spring-cloud-config-server

So we give the Docker container a nice name: spring-cloud-config-server, map port 8888 and specify an environment variable with the name SPRING_CLOUD_CONFIG_SERVER_GIT_URI. This should point to a Git repo with configuration that can be read by Spring Cloud Config server. For the moment, the only configuration file there is worldfetch.yaml:

worldfetch.yaml:
baseDataUri: "https://opendata.nederlandwereldwijd.nl/v1/sources/nederlandwereldwijd/infotypes/traveladvice"
fetchInterval: "00:05:00"

If you have started the Docker container, you can run curl http://localhost:8888/worldfetch/development and you get back a nice JSON response with the two configured values.
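
The response follows the standard config server format: the application name, the requested profiles and a list of property sources. It looks roughly like this (the version hash is elided; the exact property source name may differ):

{
  "name": "worldfetch",
  "profiles": ["development"],
  "label": null,
  "version": "…",
  "propertySources": [
    {
      "name": "https://github.com/rwwilden/steeltoe-demo-config/worldfetch.yaml",
      "source": {
        "baseDataUri": "https://opendata.nederlandwereldwijd.nl/v1/sources/nederlandwereldwijd/infotypes/traveladvice",
        "fetchInterval": "00:05:00"
      }
    }
  ]
}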

Steeltoe

So we have a running config server that serves configuration from a GitHub repo. How do we get this configuration inside our ASP.NET Core microservice? The answer is Steeltoe: a collection of libraries that lets a .NET app interface with a number of Spring Cloud components (like config server).

The first step is to configure the location of the config server. Since we’re still running only locally, this is http://localhost:8888, which we specify in appsettings.Development.json:

appsettings.Development.json:
{
  "spring": {
    "cloud": {
      "config": {
        "uri": "http://localhost:8888"
      }
    }
  },

The next step is adding the Steeltoe configuration provider in Program.cs:

Program.cs:
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Steeltoe.Extensions.Configuration.ConfigServer;

namespace worldfetch
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((webHostBuilderContext, configurationBuilder) => {
                    // Add Spring Cloud Config server as a configuration source.
                    var hostingEnvironment = webHostBuilderContext.HostingEnvironment;
                    configurationBuilder.AddConfigServer(hostingEnvironment.EnvironmentName);
                })
                .UseStartup<Startup>()
                .Build();
    }
}

Note that I use the ConfigureAppConfiguration method, introduced in ASP.NET Core 2.0 and well documented here. The call to AddConfigServer adds the config server as a configuration provider.

Next we need another configuration setting. Remember what we named our config file on GitHub: worldfetch.yaml. The Steeltoe configuration provider must know the name of our application so that it can collect the right configuration settings. This one we define in appsettings.json:

appsettings.json:
{
  "spring": {
    "application": {
      "name": "worldfetch"
    }
  },

The final step is to implement an options class to represent our settings so that we can inject them into other classes. This class is quite simple in our case because we have just two settings in worldfetch.yaml:

Lib/FetcherOptions.cs:
using System;

// Maps the two settings from worldfetch.yaml: baseDataUri and fetchInterval.
public class FetcherOptions
{
    public string BaseDataUri { get; set; }

    public TimeSpan FetchInterval { get; set; }
}

Configuration injection

Now all that is left to do is inject an IOptions<FetcherOptions> wherever we want to access the configuration settings from worldfetch.yaml and we’re done: configuration through Spring Cloud Config server from a GitHub repo.
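
For completeness, the wiring might look like this (a sketch: the Steeltoe provider loads the YAML keys into the configuration root, so we can bind FetcherOptions directly from Configuration; the CountryFetcher class is a hypothetical consumer):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Bind baseDataUri and fetchInterval (served by config server) to FetcherOptions.
        services.Configure<FetcherOptions>(Configuration);
        services.AddMvc();
    }
}

// Hypothetical consumer: the options are injected wherever we need them.
public class CountryFetcher
{
    private readonly FetcherOptions _options;

    public CountryFetcher(IOptions<FetcherOptions> options)
    {
        // _options.BaseDataUri and _options.FetchInterval now hold the
        // values from worldfetch.yaml on GitHub.
        _options = options.Value;
    }
}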

What’s next?

We now have a small ASP.NET Core application that is configured via Spring Cloud Config server and fetches data from some URL. Next time we’re going to run all this in ‘the cloud’!

Running a Windows Server Container on Azure

I was looking at options to run some PowerShell scripts in Azure and my first idea was: why not start a Windows Server Container with the right PowerShell modules and run the scripts there? It turns out there are better options for running PowerShell scripts in Azure (Azure Automation Runbooks), so I did not continue on this path. But this is really cool technology and I learned a few things, so I thought: let’s write this down.

Azure Container Instances

First of all, you can run Docker containers on Azure using Azure Container Instances. ACI is not a container orchestrator like Kubernetes (AKS) but it’s ideal for quickly getting single containers up and running. The fastest way to run a container is through the Azure CLI, which you can launch directly from the portal:

Running a container

Once you have started the CLI, type or paste the following commands:

az configure --defaults location=westeurope
az group create --name DockerTest
az container create \
    --resource-group DockerTest \
    --name winservercore \
    --image "microsoft/iis:nanoserver" \
    --ip-address public \
    --ports 80 \
    --os-type windows

The first command sets the default resource location to westeurope so you do not have to specify this for each command. The second command creates a resource group named DockerTest and the third command starts a simple Windows Server container with the Nano Server OS, running IIS.

You need to specify a number of parameters when creating a container:

  • the name of the resource group for your container resources
  • the name of the container group
  • the name of the Docker Hub image: microsoft/iis:nanoserver
  • a public IP address
  • the port the container exposes (80)
  • the OS type (ACI does not detect this automatically)

Once you have run these commands, you can check progress via:

az container list --resource-group DockerTest -o table

And once the container has been provisioned, you should get something like this:

The container has received a public IP address, in my case 52.233.138.192, and when we browse there, we see the default IIS welcome page.

Tadaaa, your first Windows Server Container running on Azure.

Debug logging to console with .NET Core

I was struggling for about an hour to get debug logging to the console working in ASP.NET Core, so I thought I should write it down. I got tricked by the default appsettings.json and appsettings.Development.json that get generated when you run dotnet new. First appsettings.json:

appsettings.json:
{
  "Logging": {
    "IncludeScopes": false,
    "Debug": {
      "LogLevel": {
        "Default": "Warning"
      }
    },
    "Console": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}

Pretty straightforward: the default log levels for the Debug and Console providers are set to Warning. And now appsettings.Development.json:

appsettings.Development.json:
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

The way I interpreted this, which is apparently wrong, is as follows: in a development environment the default log level is Debug, so if I call LogDebug, the message will appear on stdout. Well, it does not… (otherwise I would not have written this post)

I think this is counter-intuitive, especially since this is the default that gets generated when you run dotnet new. Why have this default when it does not result in debug logging? And what does this default accomplish anyhow?

What you need to do in appsettings.Development.json is explicitly configure console logging and set the desired logging levels:

appsettings.Development.json:
{
  "Logging": {
    "IncludeScopes": true,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    },
    "Console": {
      "LogLevel": {
        "Default": "Debug",
        "System": "Information",
        "Microsoft": "Information"
      }
    }
  }
}

I still do not quite understand what the default log level in the top-level LogLevel section does. The Console key refers to the console logging provider. There are a number of other logging providers, but there is no such thing as a ‘default logging provider’. After some more careful reading of the documentation, it appears that the default filter rule applies to ‘all other providers’: the providers that you do not explicitly specify in your appsettings.json or appsettings.Development.json files.

Now it begins to make sense, I guess: the two configuration files are merged and the most specific rule is selected. In the case of the settings files that are generated by default, this means that the Console rule with log level Warning is selected. You can override this by specifying another Console rule in appsettings.Development.json.
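
If you prefer not to depend on this merge behavior, the same rule can also be expressed in code. A minimal sketch against the ASP.NET Core 2.x logging APIs:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Console;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(logging =>
                // A provider-specific rule: it is more specific than the generic
                // "Default": "Warning" rule from appsettings.json, so it wins.
                logging.AddFilter<ConsoleLoggerProvider>(category: null, level: LogLevel.Debug))
            .UseStartup<Startup>()
            .Build()
            .Run();
}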