Debug logging to console with .NET Core

I was struggling for about an hour to get debug logging to the console working in ASP.NET Core, so I thought I should write it down. I got tricked by the default appsettings.json and appsettings.Development.json files that are generated when you run dotnet new. First, appsettings.json:

{
  "Logging": {
    "IncludeScopes": false,
    "Debug": {
      "LogLevel": {
        "Default": "Warning"
      }
    },
    "Console": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}

Pretty straightforward: the default log levels for the Debug and Console providers are set to Warning. And now appsettings.Development.json:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  }
}

The way I interpreted this (which turns out to be wrong) is as follows: in a development environment the default log level is Debug, so if I call LogDebug, the message will appear on stdout. Well, it does not… (otherwise I would not have written this post)

I think this is counter-intuitive, especially since this is the default that gets generated when you run dotnet new. Why have this default when it does not result in debug logging? And what does this default accomplish anyhow?

What you need to do in appsettings.Development.json is explicitly configure console logging and set the desired logging levels:

{
  "Logging": {
    "IncludeScopes": true,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    },
    "Console": {
      "LogLevel": {
        "Default": "Debug",
        "System": "Information",
        "Microsoft": "Information"
      }
    }
  }
}

At first I still did not quite understand what the top-level LogLevel section does. The Console key refers to the console logging provider. There are a number of other logging providers, but there is no such thing as a ‘default logging provider’. After some more careful reading of the documentation, it appears that the default filter rule applies to ‘all other providers’: the providers that you do not explicitly configure in your appsettings.json or appsettings.Development.json files.

Now it begins to make sense, I guess: the two configuration files are merged and the most specific rule is selected. With the settings files that are generated by default, this means the Console rule with log level Warning is selected. You can override this by specifying another Console rule in appsettings.Development.json.
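
Putting the two generated files together makes this concrete: after merging, the effective Development configuration looks roughly like this (my annotated reconstruction of the merge result, not a file that exists on disk; the ASP.NET Core JSON configuration provider tolerates the comments):

```json
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      // from appsettings.Development.json: the 'all other providers' rule
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    },
    // from appsettings.json: provider-specific rules, which are more specific
    "Debug":   { "LogLevel": { "Default": "Warning" } },
    "Console": { "LogLevel": { "Default": "Warning" } }
  }
}
```

The top-level LogLevel section only applies to providers without a section of their own, so for both the Debug and the Console provider the more specific Warning rule wins, and nothing below Warning reaches stdout.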

Using ASP.NET Core SignalR with Elm

I’m developing a smoke tests app in Go that tests a number of services (Redis, RabbitMQ, Single Sign-On, etc.) offered in the marketplace of a CloudFoundry installation at one of our customers. These tests produce simple JSON output that signals what, if anything, went wrong. Now the customer has asked for a dashboard so the entire organization can check on the health of the platform.

I took some time to come up with a good enough design for this and decided on the following:

  • The smoke tests app (Golang) pushes its results to RabbitMQ
  • An ASP.NET Core app listens to smoke test results and keeps track of state (the results themselves and when they were received)
  • A single page written in Elm that receives status updates via SignalR (web sockets)

Since I have never written anything in Elm and my knowledge of SignalR is a little outdated, I decided to start very simple: a SignalR hub that increments an int every five seconds and sends it to all clients. The number received by each client is used to update an Elm view model. In the real world, the int will become the JSON document describing the results of the smoke tests and we will build a nice view for it; you get the idea.

All source code for this post can be found here.

The server side of things

First of all, what do things look like on the server and how do we build the application? It will be an ASP.NET Core app so we start with:

dotnet new web
dotnet add package Microsoft.AspNetCore.SignalR -v 1.0.0-alpha2-final

We create an empty ASP.NET Core website and add the latest version of SignalR. Next we need to configure SignalR in our Startup class:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace SmokeTestsDashboardServer
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR();
            services.AddSingleton<IHostedService, CounterHostedService>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseDefaultFiles();
            app.UseStaticFiles();
            app.UseSignalR(routes =>
            {
                routes.MapHub<SmokeHub>("smoke");
            });
        }
    }
}

The code speaks for itself, I guess. We add SignalR dependencies to the services collection and configure a hub called SmokeHub which can be reached from the client via the route /smoke.

In ConfigureServices you can also see that I add an IHostedService implementation: CounterHostedService. A hosted service is an object with a start and a stop method that is managed by the host: when ASP.NET Core starts, it calls the hosted service's start method, and when ASP.NET Core (gracefully) shuts down, it calls its stop method. In our case, we use it to run a very simple scheduler that increments an integer every five seconds and sends it to all SignalR clients. Here are two posts on implementing your own IHostedService.
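
For completeness, here is a minimal sketch of what such a CounterHostedService could look like. This is my own illustration, not the code from the repository; it assumes the alpha2-era SignalR API (Clients.All.InvokeAsync, matching the SmokeHub shown below) and the IHostedService interface from Microsoft.Extensions.Hosting:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

namespace SmokeTestsDashboardServer
{
    // Illustrative sketch: increments a counter every five seconds
    // and pushes it to all SignalR clients of the SmokeHub.
    public class CounterHostedService : IHostedService, IDisposable
    {
        private readonly IHubContext<SmokeHub> _hubContext;
        private Timer _timer;
        private int _counter;

        public CounterHostedService(IHubContext<SmokeHub> hubContext)
        {
            _hubContext = hubContext;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            // Fire immediately, then every five seconds.
            _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
            return Task.CompletedTask;
        }

        private void Tick(object state)
        {
            var count = Interlocked.Increment(ref _counter);
            // 'Send' is the client-side handler name registered in index.html.
            _hubContext.Clients.All.InvokeAsync("Send", count);
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            _timer?.Change(Timeout.Infinite, 0);
            return Task.CompletedTask;
        }

        public void Dispose() => _timer?.Dispose();
    }
}
```

Because the class is registered in the service collection, the DI container injects the IHubContext&lt;SmokeHub&gt; for us.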

The client side of things

First of all, we need the SignalR client library. You can get it via npm. I added it to the wwwroot/js/lib folder.

Now let’s take a look at the Elm code.

port module Main exposing (..)

import Html exposing (Html, div, button, text, program)


-- MODEL
type alias Model = Int

init : ( Model, Cmd Msg )
init = ( 1, Cmd.none )


-- MESSAGES
type Msg = Counter Int


-- VIEW
view : Model -> Html Msg
view model = div [] [ text (toString model) ]


-- UPDATE
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Counter count -> ( count, Cmd.none )


-- SUBSCRIPTIONS
port updates : (Int -> msg) -> Sub msg

subscriptions : Model -> Sub Msg
subscriptions model = updates Counter


-- MAIN
main : Program Never Model Msg
main =
    program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }

Let’s dissect the code:

  • The model is an Int that we initialize to 1
  • There is a single message type: Counter, carrying an Int
  • The view takes our model and returns some very simple HTML showing the model value
  • When we receive a Counter update, we simply return the new count
  • We subscribe to counter updates

The question is, where do we receive counter updates from? Elm is a pure functional language: the output of every function depends only on its arguments, regardless of any global or local state. Direct communication with JavaScript from Elm would break this, so it is not allowed. All interop with the outside world is done through ports.

If we check the Elm code again, you see that the module is declared with the port keyword, and that the updates port listens to counter updates from JavaScript. So now we can plug it all together in our index.html file:

wwwroot/index.html:
<html>
<head>
    <script type="text/javascript" src="js/lib/signalr-client-1.0.0-alpha2-final.js"></script>
</head>
<body>
    <div id="main"></div>
    <script type="text/javascript" src="js/main.js"></script>
    <script>
        var node = document.getElementById('main');
        var app = Elm.Main.embed(node);

        const logger = new signalR.ConsoleLogger(signalR.LogLevel.Information);
        const smokeHub = new signalR.HttpConnection(`http://${document.location.host}/smoke`, { logger: logger });
        const smokeConn = new signalR.HubConnection(smokeHub, logger);

        smokeConn.onClosed = e => {
            console.log('Connection closed');
        };

        smokeConn.on('send', data => {
            console.log(data);
            app.ports.updates.send(data);
        });

        smokeConn.start().then(() => smokeConn.invoke('send', 42));

    </script>
</body>
</html>

Most of the code speaks for itself. The call app.ports.updates.send(data) invokes the port in our Elm app to pass the updated counter to Elm. The smokeConn.invoke('send', 42) after the connection starts is a simple test to assure that we can also send messages from the client to the SignalR hub.

For completeness’ sake, here is the code for the SmokeHub:

lib/SmokeHub.cs:
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

namespace SmokeTestsDashboardServer
{
    public class SmokeHub : Hub
    {
        public Task Send(int counter)
        {
            return Clients.All.InvokeAsync("Send", counter);
        }
    }
}

Note that the hub's Send method is what JavaScript clients call on the server. It is not the same ‘Send’ as the one in Clients.All.InvokeAsync("Send", counter), which is the client-side handler that is invoked when all clients are notified of a counter update.

Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 2)

This is part 2 of a series of (still) unknown length where I try to describe how to deploy the SDL Tridion Web 8.5 Discovery Service on CloudFoundry. All parts:

  1. Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 1)
  2. Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 2) (this post)

I finished the previous post thinking I was done, except for a few small changes. Unfortunately, that wasn’t true. Remember we had to provide an explicit command line because of classpath requirements. This classpath wasn’t yet complete. Let’s analyze the start.sh file again:

#!/usr/bin/env bash

# Java options and system properties to pass to the JVM when starting the service. For example:
# JVM_OPTIONS="-Xrs -Xms128m -Xmx128m -Dmy.system.property=/var/share"
JVM_OPTIONS="-Xrs -Xms128m -Xmx128m"
SERVER_PORT=--server.port=8082

# set max size of request header to 64Kb
MAX_HTTP_HEADER_SIZE=--server.tomcat.max-http-header-size=65536

BASEDIR=$(dirname $0)
CLASS_PATH=.:config:bin:lib/*
CLASS_NAME="com.sdl.delivery.service.ServiceContainer"
PID_FILE="sdl-service-container.pid"

cd $BASEDIR/..
if [ -f $PID_FILE ]
  then
    if ps -p $(cat $PID_FILE) > /dev/null
        then
          echo "The service already started."
          echo "To start service again, run stop.sh first."
          exit 0
    fi
fi

ARGUMENTS=()
for ARG in $@
do
    if [[ $ARG == --server\.port=* ]]
    then
        SERVER_PORT=$ARG
    elif [[ $ARG =~ -D.+ ]]; then
    	JVM_OPTIONS=$JVM_OPTIONS" "$ARG
    else
        ARGUMENTS+=($ARG)
    fi
done
ARGUMENTS+=($SERVER_PORT)
ARGUMENTS+=($MAX_HTTP_HEADER_SIZE)

for SERVICE_DIR in `find services -type d`
do
    CLASS_PATH=$SERVICE_DIR:$SERVICE_DIR/*:$CLASS_PATH
done

echo "Starting service."

java -cp $CLASS_PATH $JVM_OPTIONS $CLASS_NAME ${ARGUMENTS[@]} & echo $! > $PID_FILE

In the script, the classpath starts out as .:config:bin:lib/*. We ended the previous post with a classpath of $PWD/*:.:$PWD/lib/*:$PWD/config/*, which is not quite the same. Furthermore, in the loop over the services directory, additional folders are added to the classpath. Taking all this into account, we get the following classpath: $PWD/*:.:$PWD/lib/*:$PWD/config:$PWD/services/discovery-service/*:$PWD/services/odata-v4-framework/* and the following manifest.yml:

---
applications:
- name: discovery_service
  path: ./
  buildpack: java_buildpack_offline
  command: $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/*:.:$PWD/lib/*:$PWD/config:$PWD/services/discovery-service/*:$PWD/services/odata-v4-framework/* com.sdl.delivery.service.ServiceContainer -Xrs -Xms128m -Xmx128m
  env:
    JBP_CONFIG_JAVA_MAIN: '{ java_main_class: "com.sdl.delivery.service.ServiceContainer", arguments: "-Xrs -Xms128m -Xmx128m" }'
    JBP_LOG_LEVEL: DEBUG
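
As a sanity check, the classpath construction in the start script's loop can be reproduced in isolation. The sketch below uses a made-up directory layout in a temp dir (only the two service folders from the distribution) and pipes find through sort so the order is deterministic, which the original script does not do:

```shell
# Recreate only the relevant part of the service's directory layout (illustrative).
DEMO=$(mktemp -d)
mkdir -p "$DEMO/services/discovery-service" "$DEMO/services/odata-v4-framework"
cd "$DEMO"

# Same logic as start.sh: start from the base classpath and prepend an entry
# (plus a /* glob for its jars) for every directory found under services/.
CLASS_PATH='.:config:bin:lib/*'
for SERVICE_DIR in $(find services -type d | sort)
do
    CLASS_PATH=$SERVICE_DIR:$SERVICE_DIR/*:$CLASS_PATH
done

echo "$CLASS_PATH"
```

Note that the services directory itself (and a services/* entry) also ends up on the loop-built classpath, which the manifest above leaves out.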

Now that we have fixed the classpath, let’s see if the discovery service still runs when we push it.

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 crashed
FAILED
Error restarting application: Start unsuccessful

TIP: use 'cf logs discovery_service --recent' for more information

Ok, that is unfortunate, we broke it again. Let’s check the log files again:

[APP/PROC/WEB/0] OUT                                             '#b
[APP/PROC/WEB/0] OUT                                              @# ,###
[APP/PROC/WEB/0] OUT     ##########  @##########Mw     ####   ########^
[APP/PROC/WEB/0] OUT    #####%554WC  @#############p  j####       ##"@#m
[APP/PROC/WEB/0] OUT   j####,        @####     1####  j####      ##    "
[APP/PROC/WEB/0] OUT    %######M,    @####     j####  j####
[APP/PROC/WEB/0] OUT      "%######m  @####     j####  j####
[APP/PROC/WEB/0] OUT          "####  @####     {####  j####
[APP/PROC/WEB/0] OUT   ]##MmmM#####  @#############C  j###########
[APP/PROC/WEB/0] OUT   %#########"   @#########MM^     ###########
[APP/PROC/WEB/0] OUT :: Service Container :: Spring Boot  (v1.4.1.RELEASE) ::
[APP/PROC/WEB/0] OUT Exit status 0
[CELL/0] OUT Exit status 0
[CELL/0] OUT Stopping instance ef44cf20-b9da-48c6-5edc-a6d7
[CELL/0] OUT Destroying container
[API/0] OUT Process has crashed with type: "web"
[API/0] OUT App instance exited with guid e9a00d0c-86b4-4dad-ae5d-e4208f09590f payload: {"instance"=>"ef44cf20-b9da-48c6-5edc-a6d7", "index"=>0, "reason"=>"CRASHED", "exit_description"=>"Codependent step exited", "crash_count"=>4, "crash_timestamp"=>1513173007899100032, "version"=>"692f3c6a-acf3-4adc-b870-3827355948d6"}
[CELL/0] OUT Successfully destroyed container

Not very informative… this only tells us that something went wrong, not what went wrong. It should be possible to get more logging than this. Luckily, it is.

In my config/logback.xml file, a number of RollingFileAppenders were configured (this may be different for your configuration). These were set up to log to a local folder. That isn’t going to fly on CloudFoundry, of course: we should log to stdout and let the platform manage the rest. So I modified logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true">
    <!-- Properties -->
    <property name="log.pattern" value="%date %-5level %logger{0} - %message%n"/>
    <property name="log.level" value="DEBUG"/>
    <property name="log.encoding" value="UTF-8"/>

    <!-- Appenders -->
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <charset>${log.encoding}</charset>
            <pattern>${log.pattern}</pattern>
        </encoder>
    </appender>

    <!-- Loggers -->
    <logger name="com" level="${log.level}">
        <appender-ref ref="stdout"/>
    </logger>

    <root level="ERROR">
        <appender-ref ref="stdout"/>
    </root>
</configuration>

This should take care of logging everything to stdout. If we push the app now, we get a lot of logging and in my case, the discovery service still crashes. But at least now I can see why:

[APP/PROC/WEB/0] OUT DEBUG SQLServerConnection - ConnectionID:1 Connecting with server: DBSERVER port: 1433 Timeout slice: 4800 Timeout Full: 15
[APP/PROC/WEB/0] OUT DEBUG SQLServerConnection - ConnectionID:1 This attempt No: 3
[APP/PROC/WEB/0] OUT DEBUG SQLServerException - *** SQLException:ConnectionID:1 com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host DBSERVER, port 1433 has failed. Error: "DBSERVER. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.". The TCP/IP connection to the host DBSERVER, port 1433 has failed. Error: "DBSERVER. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".

The service attempts to connect to a database host named DBSERVER. I have not yet configured the discovery database, so this makes sense.

All in all, we’re again one step further in deploying SDL Tridion Web 8.5 Discovery Service on CloudFoundry.

Deploy SDL Tridion Web 8.5 Discovery Service on Pivotal CloudFoundry (part 1)

A customer of ITQ is running SDL Tridion content management software and has asked ITQ to deliver a proof-of-concept of running a Tridion website and the Tridion 8.5 micro services on Pivotal CloudFoundry. This post is a journal of my attempts at deploying the SDL Web 8.5 Discovery Service on CloudFoundry.

This is just part 1 of a series of unknown length (at the moment of writing). Here are all parts:

  1. Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 1) (this post)
  2. Deploy Tridion SDL Web 8.5 Discovery Service on Pivotal CloudFoundry (part 2)

The discovery service is distributed as a binary Spring Boot application with the following directory structure:

│README.md
├bin
│    start.sh
│    stop.sh
├config
│    application.properties
│    cd_ambient_conf.xml
│    cd_ambient_conf.xml.org
│    cd_storage_conf.xml
│    logback.xml
│    serviceName.txt
├lib
│   ....
│   service-container-core-8.5.0-1014.jar
│   ....
└services
    ├discovery-service
    └odata-v4-framework

So there’s a bin folder with a start and stop script, some configuration and a lib folder that has a lot of jar files, including the one with our main class.

Binary buildpack

Since this is a binary distribution of a micro service, I first tried the CloudFoundry binary buildpack. A buildpack is a small piece of software that takes your source code, compiles it and runs it on CloudFoundry (a very simplistic explanation). Let’s see how far the binary buildpack gets us.

$ cf push discovery_service -b binary_buildpack -c './bin/start.sh' -i 1 -m 128m
Creating app discovery_service in org PCF / space Test as admin...
OK

Creating route discovery-service.cf-prod.intranet...
OK

Binding discovery-service.cf-prod.intranet to discovery_service...
OK

Uploading discovery_service...
Uploading app files from: /home/wildenbergr/microservices/discovery
Uploading 7.2M, 72 files
Done uploading
OK

Starting app discovery_service in org PCF / space Test as admin...
Downloading binary_buildpack...
Downloaded binary_buildpack
Creating container
Successfully created container
Downloading app package...
Downloaded app package (59.3M)
Staging...
-------> Buildpack version 1.0.13
Exit status 0
Staging complete
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (200B)
Uploaded droplet (59.3M)
Uploading complete
Destroying container
Successfully destroyed container

0 of 1 instances running, 1 crashed
FAILED
Error restarting application: Start unsuccessful

TIP: use 'cf logs discovery_service --recent' for more information
$ 

Obviously, the deploy did not go as planned so let’s check the logs:

$ cf logs discovery_service --recent
Retrieving logs for app discovery_service in org PCF / space Test as admin...

[API/0] OUT Created app with guid fd8dd243-bc3f-4a26-83f7-44b8a06d95dd
[API/1] OUT Updated app with guid fd8dd243-bc3f-4a26-83f7-44b8a06d95dd ({"route"=>"5c279e23-17a0-48d6-b6dd-0c7fe8cbf17b", :verb=>"add", :relation=>"routes", :related_guid=>"5c279e23-17a0-48d6-b6dd-0c7fe8cbf17b"})
[API/0] OUT Updated app with guid fd8dd243-bc3f-4a26-83f7-44b8a06d95dd ({"state"=>"STARTED"})
[STG/0] OUT Downloading binary_buildpack...
[STG/0] OUT Downloaded binary_buildpack
[STG/0] OUT Creating container
[STG/0] OUT Successfully created container
[STG/0] OUT Downloading app package...
[STG/0] OUT Downloaded app package (59.3M)
[STG/0] OUT Staging...
[STG/0] OUT -------> Buildpack version 1.0.13
[STG/0] OUT Exit status 0
[STG/0] OUT Staging complete
[STG/0] OUT Uploading droplet, build artifacts cache...
[STG/0] OUT Uploading build artifacts cache...
[STG/0] OUT Uploading droplet...
[STG/0] OUT Uploaded build artifacts cache (200B)
[STG/0] OUT Uploaded droplet (59.3M)
[STG/0] OUT Uploading complete
[STG/0] OUT Destroying container
[CELL/0] OUT Creating container
[CELL/0] OUT Successfully created container
[STG/0] OUT Successfully destroyed container
[CELL/0] OUT Starting health monitoring of container
[APP/PROC/WEB/0] OUT Starting service.
[APP/PROC/WEB/0] ERR ./bin/start.sh: line 49: java: command not found
[APP/PROC/WEB/0] OUT Exit status 0
[CELL/0] OUT Exit status 143
[CELL/0] OUT Destroying container
[API/2] OUT Process has crashed with type: "web"
[API/2] OUT App instance exited with guid fd8dd243-bc3f-4a26-83f7-44b8a06d95dd payload: {"instance"=>"", "index"=>0, "reason"=>"CRASHED", "exit_description"=>"2 error(s) occurred:\n\n* 2 error(s) occurred:\n\n* Codependent step exited\n* cancelled\n* cancelled", "crash_count"=>1, "crash_timestamp"=>1512986370928003691, "version"=>"26a55501-fbae-4e1e-87d0-4704f9ad0c78"}

And there we have it: java: command not found. Makes sense of course, because we used the binary buildpack, which doesn’t know anything about Java.

Java buildpack

Ok, so the binary buildpack is a no-go. This would suggest we go with the Java buildpack. On the other hand, this buildpack by default assumes you push source code that needs to be compiled. Let’s see what happens.

$ cf push discovery_service -b java_buildpack_offline -c './bin/start.sh' -i 1 -m 128m
Updating app discovery_service in org PCF / space Test as admin...
OK

Uploading discovery_service...
Uploading app files from: /home/wildenbergr/microservices/discovery
Uploading 7.2M, 72 files
Done uploading
OK

Stopping app discovery_service in org PCF / space Test as admin...
OK

Starting app discovery_service in org PCF / space Test as admin...
Downloading java_buildpack_offline...
Downloaded java_buildpack_offline
Creating container
Successfully created container
Downloading app package...
Downloaded app package (59.3M)
Downloading build artifacts cache...
Downloaded build artifacts cache (200B)
Staging...
-----> Java Buildpack Version: v3.17 (offline) | https://github.com/cloudfoundry/java-buildpack.git#87fb619
[Buildpack]                      ERROR Compile failed with exception #<RuntimeError: No container can run this application. Please ensure that you've pushed a valid JVM artifact or artifacts using the -p command line argument or path manifest entry. Information about valid JVM artifacts can be found at https://github.com/cloudfoundry/java-buildpack#additional-documentation. >
No container can run this application. Please ensure that you've pushed a valid JVM artifact or artifacts using the -p command line argument or path manifest entry. Information about valid JVM artifacts can be found at https://github.com/cloudfoundry/java-buildpack#additional-documentation.
Failed to compile droplet
Exit status 223
Staging failed: Exited with status 223
Destroying container
Successfully destroyed container

FAILED
Error restarting application: BuildpackCompileFailed

TIP: use 'cf logs discovery_service --recent' for more information

And this fails as well. The Java buildpack doesn’t understand what we are pushing. So with the binary buildpack we can run a shell script but we do not have java. With the Java buildpack we have java but it doesn’t understand the artifact we’re pushing. What to do?

Java buildpack with main() method

Digging around in the Java buildpack documentation, it looks like there is an option to run a self-executable jar file. The jar file we’d like to execute is lib/service-container-core-8.5.0-1014.jar. Let’s take a look at the start.sh script that is normally used to run the discovery micro service:

#!/usr/bin/env bash

# Java options and system properties to pass to the JVM when starting the service. For example:
# JVM_OPTIONS="-Xrs -Xms128m -Xmx128m -Dmy.system.property=/var/share"
JVM_OPTIONS="-Xrs -Xms128m -Xmx128m"
SERVER_PORT=--server.port=8082

# set max size of request header to 64Kb
MAX_HTTP_HEADER_SIZE=--server.tomcat.max-http-header-size=65536

BASEDIR=$(dirname $0)
CLASS_PATH=.:config:bin:lib/*
CLASS_NAME="com.sdl.delivery.service.ServiceContainer"

cd $BASEDIR/..
ARGUMENTS=()
for ARG in $@
do
    if [[ $ARG == --server\.port=* ]]
    then
        SERVER_PORT=$ARG
    elif [[ $ARG =~ -D.+ ]]; then
    	JVM_OPTIONS=$JVM_OPTIONS" "$ARG
    else
        ARGUMENTS+=($ARG)
    fi
done
ARGUMENTS+=($SERVER_PORT)
ARGUMENTS+=($MAX_HTTP_HEADER_SIZE)

for SERVICE_DIR in `find services -type d`
do
    CLASS_PATH=$SERVICE_DIR:$SERVICE_DIR/*:$CLASS_PATH
done

echo "Starting service."

java -cp $CLASS_PATH $JVM_OPTIONS $CLASS_NAME ${ARGUMENTS[@]}

A lot is going on in here but in the end the script runs the java command with a classpath, a main class and some options. Maybe we can accomplish the same with the Java buildpack. So, first let’s create a manifest.yml file in the root of the micro service folder structure:

---
applications:
- name: discovery_service
  path: lib/service-container-core-8.5.0-1014.jar
  buildpack: java_buildpack_offline

The path points to the jar file that has the class com.sdl.delivery.service.ServiceContainer with a main() method. However, if we deploy with this manifest, we get the same error: No container can run this application. So what is going on?

When running a Java application directly from a jar file, java has to know which class has the main() method. You can specify this on the command line, or via a Main-Class entry in the jar’s manifest file (META-INF/MANIFEST.MF). The service-container-core-8.5.0-1014.jar manifest does not have a Main-Class entry, so we have to specify the class on the command line. How to do that?
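
For reference, a jar that can be started with plain java -jar carries a Main-Class entry in its META-INF/MANIFEST.MF, along these lines (illustrative; this is exactly the entry the SDL jar is missing):

```
Main-Class: com.sdl.delivery.service.ServiceContainer
```

Without that entry, the class has to be passed explicitly, as in java -cp <classpath> com.sdl.delivery.service.ServiceContainer, which is what start.sh does.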

Digging some more through the Java buildpack documentation I found that you can override buildpack settings by setting application environment variables. In our case, we want to override settings from the config/java_main.yml file so we update our manifest.yml file again:

---
applications:
- name: discovery-service
  path: lib/service-container-core-8.5.0-1014.jar
  buildpack: java_buildpack_offline
  env:
    JBP_CONFIG_JAVA_MAIN: '{ java_main_class: "com.sdl.delivery.service.ServiceContainer", arguments: "-Xrs -Xms128m -Xmx128m" }'
    JBP_LOG_LEVEL: DEBUG

Let’s see what happens this time:

[CELL/0] OUT Creating container
[CELL/0] OUT Successfully created container
[STG/0] OUT Successfully destroyed container
[CELL/0] OUT Starting health monitoring of container
[APP/PROC/WEB/0] ERR Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
[APP/PROC/WEB/0] ERR     at com.sdl.delivery.service.ServiceContainer.<clinit>(ServiceContainer.java:57)
[APP/PROC/WEB/0] ERR Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
[APP/PROC/WEB/0] ERR     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
[APP/PROC/WEB/0] ERR     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[APP/PROC/WEB/0] ERR     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
[APP/PROC/WEB/0] ERR     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[APP/PROC/WEB/0] ERR     ... 1 more
[APP/PROC/WEB/0] OUT Exit status 1
[CELL/0] OUT Exit status 0
[CELL/0] OUT Destroying container
[API/0] OUT Process has crashed with type: "web"
[API/0] OUT App instance exited with guid da7e3f48-151b-4d9a-9df6-cc8479efa839 payload: {"instance"=>"", "index"=>0, "reason"=>"CRASHED", "exit_description"=>"2 error(s) occurred:\n\n* 2 error(s) occurred:\n\n* Exited with status 1\n* cancelled\n* cancelled", "crash_count"=>1, "crash_timestamp"=>1513012904337368910, "version"=>"f75e2238-95dd-45ed-9d7f-66c6c3ef4d7f"}

Now it seems we’re getting somewhere: a NoClassDefFoundError for org/slf4j/LoggerFactory. At least we managed to start a Java process, whoopdeedoo! So now we have to find the missing classes by adding them to the classpath somehow. This is where it all started to get complicated: I could find no way to add additional jar files to the classpath in the chosen setup. In fact, this setup has a more serious flaw. The documentation for cf push on ‘how it finds the application’ states: if the path is to a file, cf push pushes only that file. So this is never going to work, because we need a whole bunch of other files.

Java buildpack with main() method and explicit command

So, what’s next? Luckily, a colleague of mine who knows his way around CloudFoundry, found this blog post. The idea is to specify a number of settings to trick the buildpack into doing what we want (repeating some stuff from the aforementioned post in my own words):

  1. In the buildpack detect phase, we want to make sure the correct container is chosen: java-main. We force this by setting the JBP_CONFIG_JAVA_MAIN environment variable as before.
  2. For the buildpack compile phase, we need all the artifacts from the Tridion Discovery Microservice folder. So we specify a path of ./. Since we use the java-main container we do not really have a compile phase but we still need all microservice files.
  3. In the buildpack release phase we want to run our own Java command that has everything we want on the classpath. We can do this by explicitly specifying a command in our manifest.yml file.

Given these requirements, we come up with the following manifest file:

---
applications:
- name: discovery_service
  path: ./
  buildpack: java_buildpack_offline
  command: $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/*:.:$PWD/lib/*:$PWD/config/* com.sdl.delivery.service.ServiceContainer -Xrs -Xms128m -Xmx128m
  env:
    JBP_CONFIG_JAVA_MAIN: '{ java_main_class: "com.sdl.delivery.service.ServiceContainer", arguments: "-Xrs -Xms128m -Xmx128m" }'
    JBP_LOG_LEVEL: DEBUG

And if we push the app this time, it works!!

App discovery_service was started using this command `$PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/*:.:$PWD/lib/*:$PWD/config/* com.sdl.delivery.service.ServiceContainer -Xrs -Xms128m -Xmx128m`

Showing health and status for app discovery_service in org PCF / space Test as admin...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: discovery-service.test-cf-prod.intranet
last uploaded: Tue Dec 12 15:05:32 UTC 2017
stack: cflinuxfs2
buildpack: java_buildpack_offline

     state     since                    cpu    memory    disk      details
#0   running   2017-12-12 04:06:04 PM   0.0%   0 of 1G   0 of 1G

You see that our new command is used, making everything we want available on the classpath. You may wonder how we knew that the java executable lives at $PWD/.java-buildpack/open_jdk_jre/bin/java (apart from the blog post I referred to earlier). This is where the JBP_LOG_LEVEL environment variable comes in. It is a variable specific to the Java buildpack that tells it to generate debug output. Part of that output is the exact command the buildpack would execute (if you do not specify your own command).

Run local Pivotal UAA inside a debugger

I’ve been involved in a project that uses Pivotal CloudFoundry as the PaaS platform of choice. To provide some minimal background info: CloudFoundry is an open-source PaaS platform that can run on top of a number of cloud infrastructures: Azure, AWS, GCP, OpenStack, VMware vSphere and more. Pivotal is a company that offers a commercial CloudFoundry package that includes support, certification and additional services.

I was asked to develop a smoke test to ensure a certain level of confidence in the Single-Sign-On (SSO) capabilities of the platform. SSO in CloudFoundry is taken care of by CloudFoundry User Account and Authentication (UAA) Server, an open-source, multi-tenant, OAuth2 identity management service. Not knowing a lot about UAA and knowing that it is open-source, I decided that my first step should be to try and install UAA on my laptop and get it up-and-running, ideally inside a debugger so that I could step through authorization and token requests. This blog post explains how to do that, how to configure a local UAA database and how to interact with UAA once installed.

Some additional details before getting started:

  • I’m running a Windows 10 laptop…
  • …with the Windows Subsystem for Linux running Ubuntu
  • UAA will be installed on this Ubuntu distribution
  • Debugging via IntelliJ on Windows. JetBrains has a free community edition of IntelliJ that is ideal for this sort of work.

Installing and configuring UAA

Cloning the UAA repo and performing an initial run

Following the UAA documentation you can see that installing UAA locally is really easy. Just perform the following steps:

$ git clone https://github.com/cloudfoundry/uaa.git
$ cd uaa
$ ./gradlew run

However, that is not exactly what I did… I’d like to use IntelliJ to set breakpoints and step through code and IntelliJ is installed on my Windows box. So what I actually did was clone the UAA repo on my Windows box to the folder %HOMEPATH%\IdeaProjects\uaa (in my case: C:\Users\rwwil\IdeaProjects\uaa). You can now open the project inside IntelliJ and browse through all the code.

Next, inside Ubuntu, you need to locate the folder you cloned UAA into. In my case this is /mnt/c/Users/rwwil/IdeaProjects/uaa. From that folder you can execute ./gradlew run and all should be well: you should now have a local UAA running on the default Tomcat port 8080.
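As an aside, the Windows-to-WSL path mapping follows a simple pattern: the drive letter becomes a mount point under /mnt. A minimal sketch, assuming the C: drive and default WSL mount options:

```shell
# Sketch of how a Windows path maps to its WSL location under /mnt
# (only the C: drive is handled here, for illustration):
WINPATH='C:\Users\rwwil\IdeaProjects\uaa'
LINUXPATH=$(printf '%s' "$WINPATH" | sed -e 's|\\|/|g' -e 's|^C:|/mnt/c|')
printf '%s\n' "$LINUXPATH"   # /mnt/c/Users/rwwil/IdeaProjects/uaa
```

On a real WSL installation, the wslpath utility performs this conversion for you.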

Adding debugger support

Of course it’s very nice to have it all up-and-running, but in my opinion it helps tremendously to be able to step through the code to see what is going on. So we want to attach IntelliJ as a debugger to the running UAA instance. First, this requires some configuration inside IntelliJ: you need to create a remote debugging configuration. This option is available from the Run → Edit Configurations… menu. In my case it looks like this:

Note the command-line arguments that must be added to the remote JVM:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005

Unfortunately, we started UAA via Gradle and, to be honest, I have no idea how to pass additional command-line options to the Java process that Gradle starts. So what we need is the complete command line of the running Java process, which is easy to obtain on Linux:

$ ps -ef | less

We get all running processes (-e) with their full command line (-f). The output should look as follows:

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Nov20 ?        00:00:00 /init
rwwilden     2     1  0 Nov20 tty1     00:00:00 -bash
rwwilden    82     1  0 Nov20 tty2     00:00:00 -bash
rwwilden   179     1  0 Nov21 tty3     00:00:04 -bash
rwwilden   299     2  0 Nov24 tty1     00:06:19 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -javaagent:/tmp/cargo/jacocoagent.jar=output=file,dumponexit=true,append=false,destfile=/mnt/c/Users/rwwil/IdeaProjects/uaa/build/integrationTestCoverageReport.exec -DLOGIN_CONFIG_URL=file:///mnt/c/Users/rwwil/IdeaProjects/uaa/./uaa/src/main/resources/required_configuration.yml -Xms128m -Xmx512m -Dsmtp.host=localhost -Dsmtp.port=2525 -Dspring.profiles.active=default,sqlserver -Dcatalina.home=/mnt/c/Users/rwwil/IdeaProjects/uaa/build/extract/tomcat-8.5.16/apache-tomcat-8.5.16 -Dcatalina.base=/tmp/cargo/conf -Djava.io.tmpdir=/tmp/cargo/conf/temp -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.util.logging.config.file=/tmp/cargo/conf/conf/logging.properties -classpath /mnt/c/Users/rwwil/IdeaProjects/uaa/build/extract/tomcat-8.5.16/apache-tomcat-8.5.16/bin/tomcat-juli.jar:/mnt/c/Users/rwwil/IdeaProjects/uaa/build/extract/tomcat-8.5.16/apache-tomcat-8.5.16/bin/bootstrap.jar:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar org.apache.catalina.startup.Bootstrap start
rwwilden   772     1  0 Nov28 tty4     00:00:04 -bash

You get a very long Java command line that you can copy and modify as needed. In our case, we’d like to add debugging options (which I already added in the example output above).
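Splicing the JDWP agent flag into the copied command line can be sketched with sed; the command line below is a shortened, hypothetical stand-in for the real one:

```shell
# Insert the JDWP agent flag right after the java executable in the copied
# command line (shortened, hypothetical command line for illustration):
CMD='/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms128m -Xmx512m org.apache.catalina.startup.Bootstrap start'
JDWP='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005'
DEBUG_CMD=$(printf '%s' "$CMD" | sed "s|/bin/java |/bin/java $JDWP |")
printf '%s\n' "$DEBUG_CMD"
```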

Now kill the original Java process, paste the modified command line and run it: we have a Java process that IntelliJ can attach to.

Configuring for Microsoft SQL Server

By default, UAA runs with an in-memory database, losing all data between restarts. My laptop runs Microsoft SQL Server, which UAA actually supports, so let’s check out how to configure this.

The way UAA selects between data stores is via Spring Profiles. We can add a profile to the command-line we just copied. Just add sqlserver to the spring.profiles.active command-line parameter: -Dspring.profiles.active=default,sqlserver.

The next step is the JDBC connection string for SQL Server. This can be configured in uaa/server/src/main/resources/spring/env.xml. For my local setup I use the following:

<beans profile="sqlserver">
  <description>Profile for SQL Server scripts on an existing database</description>
  <util:properties id="platformProperties">
    <prop key="database.driverClassName">com.microsoft.sqlserver.jdbc.SQLServerDriver</prop>
    <prop key="database.url">jdbc:sqlserver://localhost:1433;database=uaa;</prop>
    <prop key="database.username">root</prop>
    <prop key="database.password">changemeCHANGEME1234!</prop>
  </util:properties>
  <bean id="platform" class="java.lang.String">
    <constructor-arg value="sqlserver" />
  </bean>
  <bean id="validationQuery" class="java.lang.String">
    <constructor-arg value="select 1" />
  </bean>
  <bean id="limitSqlAdapter" class="org.cloudfoundry.identity.uaa.resources.jdbc.SQLServerLimitSqlAdapter"/>
</beans>

So I have a local database named uaa and a user named root. Now we have a setup where we can actually see what UAA is writing to the database when certain actions are performed.

Interacting with UAA

Ok, final step: what can we do with UAA once we have it up-and-running? It is an OAuth2 server, so let’s see if we can get a token somehow. The easiest way to communicate with UAA is through the UAA CLI (UAAC). This is a Ruby application, so you need to install Ruby to get it working (there is some work underway on a Golang version of the CLI).

First we have to point UAAC to the correct UAA instance:

uaac target http://localhost:8080/uaa

Next, we’d like to perform some operations on UAA so for that we need an access token that allows these operations. UAA comes pre-installed with an admin client application that you can get a token for:

uaac token client get admin -s "adminsecret"

If we dissect this line:

  • uaac token: perform some token operation on UAA
  • client: use the OAuth2 client credentials grant
  • get: get a token
  • admin -s "adminsecret": get a token for the application with client_id=admin and client_secret=adminsecret

The output should be:

Successfully fetched token via client credentials grant.
Target: http://localhost:8080/uaa
Context: admin, from client admin

The obtained token is stored (in my case) in /home/rwwilden/.uaac.yml.
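For the curious: the client credentials grant that UAAC performs here authenticates with an HTTP Basic Authorization header derived from client_id:client_secret. A minimal sketch of how that header is built:

```shell
# The client credentials grant sends client_id:client_secret base64-encoded
# in an HTTP Basic Authorization header (admin:adminsecret in our case):
CREDS=$(printf '%s' 'admin:adminsecret' | base64)
printf 'Authorization: Basic %s\n' "$CREDS"   # Authorization: Basic YWRtaW46YWRtaW5zZWNyZXQ=
```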

Using this token we can now perform some administration tasks on our local UAA. Some examples:

  • Add a local user:

    uaac user add smokeuser --given_name smokeuser --family_name smokeuser --emails smokeuser2@mail.com --password smokepassword
    
  • Add a local group (or scope in OAuth2 terminology):

    uaac group add "smoke.extinguish"
    
  • Add user to scope:

    uaac member add smoke.extinguish smokeuser
    
  • Add a client application that requires the smoke.extinguish scope and allows logging in via the OAuth2 resource owner password credentials grant:

    uaac client add smoke --name smoke --scope "smoke.extinguish" --authorized_grant_types "password" -s "smokesecret"
    
  • Obtain a token for user smokeuser on client application smoke using the password credentials grant:

    uaac token owner get smoke smokeuser -s smokesecret -p smokepassword
    

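Since UAA issues JWTs, the payload of any obtained token can be inspected with standard command-line tools. A sketch using a short, made-up token (real UAA tokens carry many more claims, and base64url characters - and _ may first need translating to + and /):

```shell
# A JWT has three dot-separated base64url parts; the second is the claims
# payload. This token is made up for illustration (a single scope claim):
TOKEN='eyJhbGciOiJIUzI1NiJ9.eyJzY29wZSI6WyJzbW9rZS5leHRpbmd1aXNoIl19.c2ln'
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2)
# base64 -d expects input padded to a multiple of four characters:
while [ $((${#PAYLOAD} % 4)) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d   # {"scope":["smoke.extinguish"]}
```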
Of course, there is a lot more to know about CloudFoundry UAA. As I mentioned earlier, it is a full-fledged OAuth2 implementation that has proven itself in numerous (Pivotal) CloudFoundry production installations. Here are some additional references:

  • API overview: https://docs.cloudfoundry.org/api/uaa/version/4.7.1/index.html
  • SSO in Pivotal CloudFoundry: https://docs.pivotal.io/p-identity/1-2/index.html
  • Additional docs on Github: https://github.com/cloudfoundry/uaa/tree/master/docs