Centralized configuration for .NET Core with Spring Cloud Config Server and Steeltoe

Spring Cloud Config server is a Java-based application that provides support for externalized configuration in a distributed system. It supports a number of external configuration sources, such as the local file system, Git or HashiCorp Vault. For this post, I use a GitHub repo as my configuration source. Centralizing configuration in a source control repo has several advantages, especially in a microservices architecture:

  • Config is externalized and separated from code. This ensures that the same code base (or build artifact) can be deployed to multiple different environments. Code should never change between deploys.
  • Configuration for multiple microservices is centralized in one location, which helps in managing configuration for the application as a whole.
  • Configuration itself is versioned and has a history.

Spring Cloud Config server offers a simple API for retrieving app configuration settings. Client libraries exist for numerous languages; for C#, Steeltoe offers integration with config server. Steeltoe has a lot more to offer, but that’s a topic for future posts.
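The API itself is just HTTP GET: paths encode the application name, the profile and, optionally, a Git label. A quick sketch of the common patterns, using this post’s application and profile names (the label value here is an assumption, standing in for the repo’s default branch):

```shell
# Common Spring Cloud Config server endpoint patterns.
# APP and PROFILE are the demo values used later in this post;
# LABEL is assumed to be the backing repo's default branch.
APP=worldfetch
PROFILE=development
LABEL=master

echo "/$APP/$PROFILE"          # settings as JSON, resolved for the profile
echo "/$APP/$PROFILE/$LABEL"   # same, pinned to a Git branch, tag or commit
echo "/$APP-$PROFILE.yml"      # settings rendered as plain YAML
```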

Source code for this post can be found here.

The application

I’m working on a small demo application that should, when finished, show most Steeltoe features. In this first post, I focus on configuration. The demo app itself is based on an API owned by the Dutch Ministry of Foreign Affairs that issues travel advisories. This API is divided into two parts:

  • a paged list of countries where each country has a reference to a detailed country document
  • per country, a document with a detailed travel advisory (our southern neighbor Belgium, for example)

The application fetches and caches this data to present it to clients in a more accessible format. Furthermore, it should detect travel advisory changes so that it can notify clients of the API that the travel advisory for a specific country has changed.

The application starts with a periodic fetch of the list of countries. Fetching this list is implemented by the ‘world fetcher’ microservice. This service needs two configuration settings: the base URL to fetch the data from and the polling interval. So let’s see how to configure Spring Cloud Config server to deliver these settings.

Spring Cloud Config server configuration

First of all, we’re going to run Spring Cloud Config server locally. This is quite easy because someone has already packaged everything inside a Docker image: hyness/spring-cloud-config-server. So we can do a docker pull hyness/spring-cloud-config-server and then run the following command:

docker run -it \
    --name=spring-cloud-config-server \
    -p 8888:8888 \
    -e SPRING_CLOUD_CONFIG_SERVER_GIT_URI=https://github.com/rwwilden/steeltoe-demo-config \
    hyness/spring-cloud-config-server

So we give the Docker container a nice name (spring-cloud-config-server), map port 8888 and specify an environment variable named SPRING_CLOUD_CONFIG_SERVER_GIT_URI. This should point to a Git repo with configuration that can be read by Spring Cloud Config server. For the moment, the only configuration file there is worldfetch.yaml:

baseDataUri: "https://opendata.nederlandwereldwijd.nl/v1/sources/nederlandwereldwijd/infotypes/traveladvice"
fetchInterval: "00:05:00"

If you have started the Docker container, you can run curl http://localhost:8888/worldfetch/development and get back a nice JSON response containing the two configured values.
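For reference, that response follows the standard config server shape. Below is a hand-written approximation rather than captured output (the propertySources entry name in particular depends on the backing repo); it is written to a file so we can poke at it:

```shell
# Illustrative config server response for GET /worldfetch/development.
# The propertySources name is an assumption based on the repo URL; the
# source values are the two settings from worldfetch.yaml.
cat > /tmp/worldfetch-response.json <<'EOF'
{
  "name": "worldfetch",
  "profiles": ["development"],
  "propertySources": [
    {
      "name": "https://github.com/rwwilden/steeltoe-demo-config/worldfetch.yaml",
      "source": {
        "baseDataUri": "https://opendata.nederlandwereldwijd.nl/v1/sources/nederlandwereldwijd/infotypes/traveladvice",
        "fetchInterval": "00:05:00"
      }
    }
  ]
}
EOF

# Extract one of the configured values from the response:
grep -o '"fetchInterval": "[^"]*"' /tmp/worldfetch-response.json
```

Steeltoe flattens the `source` dictionary of each property source into the regular .NET configuration key space.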

Steeltoe

So we have a running config server that serves configuration from a GitHub repo. How do we get this configuration inside our ASP.NET Core microservice? The answer is Steeltoe: a collection of libraries that lets a .NET app interface with a number of Spring Cloud components (like config server).

The first step is to configure the location of the config server. Since we’re still running only locally, this is http://localhost:8888, which we specify in appsettings.Development.json:

{
  "spring": {
    "cloud": {
      "config": {
        "uri": "http://localhost:8888"
      }
    }
  },

The next step is adding the Steeltoe configuration provider in Program.cs:

using Steeltoe.Extensions.Configuration.ConfigServer;

namespace worldfetch
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((webHostBuilderContext, configurationBuilder) => {
                    
                    var hostingEnvironment = webHostBuilderContext.HostingEnvironment;
                    configurationBuilder.AddConfigServer(hostingEnvironment.EnvironmentName);
                })
                .UseStartup<Startup>()
                .Build();
    }
}

Note that I use the ConfigureAppConfiguration method, introduced in ASP.NET Core 2.0 and well documented here. On line 17 the config server is added as a configuration provider.

Next, we need one more configuration setting. Remember what we named our config file on GitHub: worldfetch.yaml. The Steeltoe configuration provider must know the name of our application so that it can request the matching configuration settings from config server. This one we define in appsettings.json:

{
  "spring": {
    "application": {
      "name": "worldfetch"
    }
  },

The final step is to implement an options class to represent our settings so that we can inject them into other classes. This class is quite simple in our case because we have just two settings in worldfetch.yaml:

Lib/FetcherOptions.cs:
public class FetcherOptions
{
    public string BaseDataUri { get; set; }

    public TimeSpan FetchInterval { get; set; }
}

Configuration injection

Now all that is left to do is register the options class in Startup.ConfigureServices (for example with services.Configure&lt;FetcherOptions&gt;(Configuration)) and inject an IOptions&lt;FetcherOptions&gt; wherever we want to access the configuration settings from worldfetch.yaml. And we’re done: configuration through Spring Cloud Config server from a GitHub repo.

What’s next?

We now have a small ASP.NET Core application that is configured via Spring Cloud Config server and fetches data from some URL. Next time we’re going to run all this in ‘the cloud’!

Running a Windows Server Container on Azure

I was looking at options to run some PowerShell scripts in Azure, and my first idea was: why not start a Windows Server Container with the right PowerShell modules and run the scripts there? It turns out there are better options for running PowerShell scripts in Azure (Azure Automation runbooks), so I did not continue down this path. But this is really cool technology and I learned a few things, so I thought: let’s write this down.

Azure Container Instances

First of all, you can run Docker containers on Azure using Azure Container Instances (ACI). ACI is not a container orchestrator like Kubernetes (AKS), but it’s ideal for quickly getting single containers up and running. The fastest way to run a container is through the Azure CLI, which you can launch directly from the portal:

Running a container

Once you have started the CLI, type or paste the following commands:

az configure --defaults location=westeurope
az group create --name DockerTest
az container create \
    --resource-group DockerTest \
    --name winservercore \
    --image "microsoft/iis:nanoserver" \
    --ip-address public \
    --ports 80 \
    --os-type windows

The first command sets the default resource location to westeurope so you do not have to specify this for each command. The second command creates a resource group named DockerTest and the third command starts a simple Windows Server container with the Nano Server OS, running IIS.

You need to specify a number of parameters when creating a container:

  • the resource group for your container resources
  • the name of the container group
  • the Docker Hub image: microsoft/iis:nanoserver
  • whether to create a public IP address
  • the port the container should expose (80)
  • the OS type (ACI does not detect this automatically)

Once you have run these commands, you can check progress via:

az container list --resource-group DockerTest -o table

And once the container has been provisioned, you should get something like this:

The container has received a public IP address, in my case 52.233.138.192, and when we browse to it, we see the default IIS welcome page.

Tadaaa, your first Windows Server Container running on Azure.

Debug logging to console with .NET Core

I was struggling for about an hour getting debug logging to console working in ASP.NET Core so I thought I should write it down. I got tricked by the default appsettings.json and appsettings.Development.json that get generated when you run dotnet new. First appsettings.json:

{
  "Logging": {
    "IncludeScopes": false,
    "Debug": {
      "LogLevel": {
        "Default": "Warning"
      }
    },
    "Console": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}

Pretty straightforward: the default log levels for the Debug and Console providers are set to Warning. And now appsettings.Development.json:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

The way I interpreted this (apparently wrongly) is as follows: in a development environment the default log level is Debug, so if I call LogDebug, the message will appear on stdout. Well, it does not… (otherwise I would not have written this post)

I think this is counter-intuitive, especially since this is the default that gets generated when you run dotnet new. Why have this default when it does not result in debug logging? And what does this default accomplish anyhow?

What you need to do in appsettings.Development.json is explicitly configure console logging and set the desired logging levels:

{
  "Logging": {
    "IncludeScopes": true,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    },
    "Console": {
      "LogLevel": {
        "Default": "Debug",
        "System": "Information",
        "Microsoft": "Information"
      }
    }
  }
}

I still do not quite understand what the default log level on line 5 does. The keyword Console on line 9 refers to the console logging provider. There are a number of other logging providers but there is no such thing as a ‘default logging provider’. After some more careful reading of the documentation, it appears that the default filter rule applies to ‘all other providers’. These are the providers that you do not explicitly specify in your appsettings.json or appsettings.Development.json files.

Now it begins to make sense, I guess: the two configuration files are merged and the most specific rule is selected. In the case of the settings files that are generated by default, this means the Console rule with log level Warning is selected. You can override this by specifying another Console rule in appsettings.Development.json.
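The rule selection can be mimicked with a tiny sketch (an illustration of the idea only, not the actual Microsoft.Extensions.Logging implementation): for a given provider, a provider-scoped rule beats the provider-agnostic Default rule, no matter which appsettings file contributed it.

```shell
# Pick the effective minimum level for a provider: a provider-scoped
# rule (e.g. Logging:Console:LogLevel:Default) wins over the
# provider-agnostic Logging:LogLevel:Default.
effective_level() {
  provider_rule=$1
  default_rule=$2
  if [ -n "$provider_rule" ]; then
    echo "$provider_rule"
  else
    echo "$default_rule"
  fi
}

# With the generated files: appsettings.json scopes Warning to Console,
# appsettings.Development.json only sets the unscoped Default to Debug.
effective_level "Warning" "Debug"   # prints Warning: debug output is filtered
```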

Using ASP.NET Core SignalR with Elm

I’m developing a smoke tests app in Go that tests a number of services (Redis, RabbitMQ, Single Sign-On, etc) that are offered in the marketplace of a CloudFoundry installation at one of our customers. These tests produce simple JSON output that signals what went wrong. Now the customer has asked for a dashboard so the entire organization can check on the health of the platform.

I took some time to come up with a good enough design for this and decided on the following:

  • The smoke tests app (Golang) pushes its results to RabbitMQ
  • An ASP.NET Core app listens to smoke test results and keeps track of state (the results themselves and when they were received)
  • A single page written in Elm that receives status updates via SignalR (WebSockets)

Since I have never written anything in Elm and my knowledge of SignalR is a little outdated, I decided to start very simple: a SignalR hub that increments an int every five seconds and sends it to all clients. The number that’s received by each client is used to update an Elm view model. In the real world, the int will become the JSON document describing the results of the smoke tests and we’ll build a nice view for it; you get the idea.

All source code for this post can be found here.

The server side of things

First of all, what do things look like on the server and how do we build the application? It will be an ASP.NET Core app so we start with:

dotnet new web
dotnet add package Microsoft.AspNetCore.SignalR -v 1.0.0-alpha2-final

We create an empty ASP.NET Core website and add the latest version of SignalR. Next, we need to configure SignalR in our Startup class:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace SmokeTestsDashboardServer
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR();
            services.AddSingleton<IHostedService, CounterHostedService>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseDefaultFiles();
            app.UseStaticFiles();
            app.UseSignalR(routes =>
            {
                routes.MapHub<SmokeHub>("smoke");
            });
        }
    }
}

The code speaks for itself, I guess. We add SignalR dependencies to the services collection and configure a hub called SmokeHub which can be reached from the client via the route /smoke.

On line 15 you can see I add an IHostedService implementation: CounterHostedService. A hosted service is an object with a start and a stop method that is managed by the host. This means that when ASP.NET Core starts, it calls the hosted service’s start method, and when ASP.NET Core (gracefully) shuts down, it calls the stop method. In our case, we use it to start a very simple scheduler that increments an integer every five seconds and sends it to all SignalR clients. Here are two posts on implementing your own IHostedService.
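As a loose analogy (this is shell, not the actual C# CounterHostedService), the contract amounts to a start function that kicks off background work and a stop function that tears it down again:

```shell
# Shell sketch of the IHostedService lifecycle: start() launches a
# background worker (here: a counter that ticks every five seconds,
# like the post's scheduler) and stop() terminates it.
PID_FILE=$(mktemp)

start() {
  ( count=0; while true; do count=$((count + 1)); sleep 5; done ) &
  echo $! > "$PID_FILE"
}

stop() {
  kill "$(cat "$PID_FILE")" 2>/dev/null
  rm -f "$PID_FILE"
}

start   # host boots: background work begins
stop    # host shuts down gracefully: background work ends
```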

The client side of things

First of all, we need the SignalR client library. You can get it via npm. I added it in the wwwroot/js/lib folder.

Now let’s take a look at the Elm code.

port module Main exposing (..)

import Html exposing (Html, div, button, text, program)


-- MODEL
type alias Model = Int

init : ( Model, Cmd Msg )
init = ( 1, Cmd.none )


-- MESSAGES
type Msg = Counter Int


-- VIEW
view : Model -> Html Msg
view model = div [] [ text (toString model) ]


-- UPDATE
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Counter count -> ( count, Cmd.none )


-- SUBSCRIPTIONS
port updates : (Int -> msg) -> Sub msg

subscriptions : Model -> Sub Msg
subscriptions model = updates Counter


-- MAIN
main : Program Never Model Msg
main =
    program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }

Let’s dissect the code:

  • Line 6: we have a model, which is an Int that we initialize to 1
  • Line 13: we have one message type, which is a counter of int
  • Line 17: our view takes our model and returns some very simple html, showing the model value
  • Line 22: when we receive an update, we simply return the count
  • Line 29: we subscribe to counter updates

The question is: where do we receive counter updates from? Elm is a pure functional language, which means the output of every function depends only on its arguments, regardless of global and/or local state. Direct communication with JavaScript from Elm would break this, so it is not allowed. Instead, all interop with the outside world is done through ports.

If we check the Elm code again, you see that at line 1 we declare our module with the keyword port. On line 30 we declare a port that listens for counter updates from JavaScript. So now we can plug it all together in our index.html file:

wwwroot/index.html:
<html>
<head>
    <script type="text/javascript" src="js/lib/signalr-client-1.0.0-alpha2-final.js"></script>
</head>
<body>
    <div id="main"></div>
    <script type="text/javascript" src="js/main.js"></script>
    <script>
        var node = document.getElementById('main');
        var app = Elm.Main.embed(node);

        const logger = new signalR.ConsoleLogger(signalR.LogLevel.Information);
        const smokeHub = new signalR.HttpConnection(`http://${document.location.host}/smoke`, { logger: logger });
        const smokeConn = new signalR.HubConnection(smokeHub, logger);

        smokeConn.onClosed = e => {
            console.log('Connection closed');
        };

        smokeConn.on('send', data => {
            console.log(data);
            app.ports.updates.send(data);
        });

        smokeConn.start().then(() => smokeConn.invoke('send', 42));

    </script>
</body>
</html>

Most of the code speaks for itself. On line 22 we invoke the port in our Elm app to pass the updated counter to Elm. Line 25 is a simple test to ensure that we can also send messages from the client to the SignalR hub.

For completeness’ sake, here is the code for the SmokeHub:

lib/SmokeHub.cs:
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

namespace SmokeTestsDashboardServer
{
    public class SmokeHub : Hub
    {
        public Task Send(int counter)
        {
            return Clients.All.InvokeAsync("Send", counter);
        }
    }
}

Note that the Send hub method is what the JavaScript clients invoke. It is not the same as the ‘Send’ that the server invokes on all clients when notifying them of a counter update.

Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 2)

This is part 2 of a series of (still) unknown length where I try to describe how to deploy the SDL Tridion Web 8.5 Discovery Service on CloudFoundry. All parts:

  1. Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 1)
  2. Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 2) (this post)

I finished the previous post thinking I was done, except for a few small changes. Unfortunately, that wasn’t true. Remember that we had to provide an explicit command line because of classpath requirements. That classpath wasn’t yet complete. Let’s analyze the start.sh file again:

#!/usr/bin/env bash

# Java options and system properties to pass to the JVM when starting the service. For example:
# JVM_OPTIONS="-Xrs -Xms128m -Xmx128m -Dmy.system.property=/var/share"
JVM_OPTIONS="-Xrs -Xms128m -Xmx128m"
SERVER_PORT=--server.port=8082

# set max size of request header to 64Kb
MAX_HTTP_HEADER_SIZE=--server.tomcat.max-http-header-size=65536

BASEDIR=$(dirname $0)
CLASS_PATH=.:config:bin:lib/*
CLASS_NAME="com.sdl.delivery.service.ServiceContainer"
PID_FILE="sdl-service-container.pid"

cd $BASEDIR/..
if [ -f $PID_FILE ]
  then
    if ps -p $(cat $PID_FILE) > /dev/null
        then
          echo "The service already started."
          echo "To start service again, run stop.sh first."
          exit 0
    fi
fi

ARGUMENTS=()
for ARG in $@
do
    if [[ $ARG == --server\.port=* ]]
    then
        SERVER_PORT=$ARG
    elif [[ $ARG =~ -D.+ ]]; then
    	JVM_OPTIONS=$JVM_OPTIONS" "$ARG
    else
        ARGUMENTS+=($ARG)
    fi
done
ARGUMENTS+=($SERVER_PORT)
ARGUMENTS+=($MAX_HTTP_HEADER_SIZE)

for SERVICE_DIR in `find services -type d`
do
    CLASS_PATH=$SERVICE_DIR:$SERVICE_DIR/*:$CLASS_PATH
done

echo "Starting service."

java -cp $CLASS_PATH $JVM_OPTIONS $CLASS_NAME ${ARGUMENTS[@]} & echo $! > $PID_FILE

At line 12 the classpath is set to .:config:bin:lib/*. We ended the previous post with a classpath of $PWD/*:.:$PWD/lib/*:$PWD/config/*, which is not quite the same. Furthermore, on lines 42..45, additional folders are added to the classpath. Taking all this into account, we get the following classpath: $PWD/*:.:$PWD/lib/*:$PWD/config:$PWD/services/discovery-service/*:$PWD/services/odata-v4-framework/* and the following manifest.yml:

---
applications:
- name: discovery_service
  path: ./
  buildpack: java_buildpack_offline
  command: $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/*:.:$PWD/lib/*:$PWD/config:$PWD/services/discovery-service/*:$PWD/services/odata-v4-framework/* com.sdl.delivery.service.ServiceContainer -Xrs -Xms128m -Xmx128m
  env:
    JBP_CONFIG_JAVA_MAIN: '{ java_main_class: "com.sdl.delivery.service.ServiceContainer", arguments: "-Xrs -Xms128m -Xmx128m" }'
    JBP_LOG_LEVEL: DEBUG
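To see what the loop on lines 42..45 contributes, you can replay it in isolation against a mock directory layout (the two service folders below mimic the demo’s):

```shell
# Replay the classpath-building loop from start.sh against a fake layout.
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/services/discovery-service" \
         "$WORKDIR/services/odata-v4-framework"
cd "$WORKDIR"

CLASS_PATH=.:config:bin:lib/*
for SERVICE_DIR in $(find services -type d); do
    CLASS_PATH=$SERVICE_DIR:$SERVICE_DIR/*:$CLASS_PATH
done

# Each service folder (and its jars, via the /* glob) ends up prepended:
echo "$CLASS_PATH"
```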

Now that we have fixed the classpath, let’s see if the discovery service still runs when we push it.

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 crashed
FAILED
Error restarting application: Start unsuccessful

TIP: use 'cf logs discovery_service --recent' for more information

OK, that is unfortunate: we broke it again. Let’s check the log files again:

[APP/PROC/WEB/0] OUT                                             '#b
[APP/PROC/WEB/0] OUT                                              @# ,###
[APP/PROC/WEB/0] OUT     ##########  @##########Mw     ####   ########^
[APP/PROC/WEB/0] OUT    #####%554WC  @#############p  j####       ##"@#m
[APP/PROC/WEB/0] OUT   j####,        @####     1####  j####      ##    "
[APP/PROC/WEB/0] OUT    %######M,    @####     j####  j####
[APP/PROC/WEB/0] OUT      "%######m  @####     j####  j####
[APP/PROC/WEB/0] OUT          "####  @####     {####  j####
[APP/PROC/WEB/0] OUT   ]##MmmM#####  @#############C  j###########
[APP/PROC/WEB/0] OUT   %#########"   @#########MM^     ###########
[APP/PROC/WEB/0] OUT :: Service Container :: Spring Boot  (v1.4.1.RELEASE) ::
[APP/PROC/WEB/0] OUT Exit status 0
[CELL/0] OUT Exit status 0
[CELL/0] OUT Stopping instance ef44cf20-b9da-48c6-5edc-a6d7
[CELL/0] OUT Destroying container
[API/0] OUT Process has crashed with type: "web"
[API/0] OUT App instance exited with guid e9a00d0c-86b4-4dad-ae5d-e4208f09590f payload: {"instance"=>"ef44cf20-b9da-48c6-5edc-a6d7", "index"=>0, "reason"=>"CRASHED", "exit_description"=>"Codependent step exited", "crash_count"=>4, "crash_timestamp"=>1513173007899100032, "version"=>"692f3c6a-acf3-4adc-b870-3827355948d6"}
[CELL/0] OUT Successfully destroyed container

Not very informative… This just tells us that something went wrong, but not what went wrong. It should be possible to get more logging than this. Lucky for us, it is.

In my config/logback.xml file, a number of RollingFileAppenders were configured (this may be different for your configuration). These were set up to log to a local folder. That isn’t going to fly on CloudFoundry, of course: we should log to stdout and let the platform manage the rest. So I modified logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true">
    <!-- Properties -->
    <property name="log.pattern" value="%date %-5level %logger{0} - %message%n"/>
    <property name="log.level" value="DEBUG"/>
    <property name="log.encoding" value="UTF-8"/>

    <!-- Appenders -->
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <charset>${log.encoding}</charset>
            <pattern>${log.pattern}</pattern>
        </encoder>
    </appender>

    <!-- Loggers -->
    <logger name="com" level="${log.level}">
        <appender-ref ref="stdout"/>
    </logger>

    <root level="ERROR">
        <appender-ref ref="stdout"/>
    </root>
</configuration>

This should take care of logging everything to stdout. If we push the app now, we get a lot of logging and in my case, the discovery service still crashes. But at least now I can see why:

[APP/PROC/WEB/0] OUT DEBUG SQLServerConnection - ConnectionID:1 Connecting with server: DBSERVER port: 1433 Timeout slice: 4800 Timeout Full: 15
[APP/PROC/WEB/0] OUT DEBUG SQLServerConnection - ConnectionID:1 This attempt No: 3
[APP/PROC/WEB/0] OUT DEBUG SQLServerException - *** SQLException:ConnectionID:1 com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host DBSERVER, port 1433 has failed. Error: "DBSERVER. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.". The TCP/IP connection to the host DBSERVER, port 1433 has failed. Error: "DBSERVER. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".

The service attempts to connect to a database host named DBSERVER. I have not yet configured the discovery database, so this makes sense.

All in all, we’re again one step further in deploying SDL Tridion Web 8.5 Discovery Service on CloudFoundry.