Custom domain name and certificate for your Azure Service Fabric cluster

This is a follow-up to my previous post about getting SSL working on a local Azure Service Fabric cluster. This time I’m aiming for the real goal: running a custom API endpoint (micro-service) on a custom domain name behind https on a cluster running on Azure.

First a short summary of the things we need to do:

  • Register a CNAME record with a DNS provider that maps your desired custom domain name to the default domain name of your Service Fabric cluster.
  • Obtain a certificate in PFX format from a certificate authority.
  • Upload the PFX file to Azure Key Vault.
  • Modify the Azure Virtual Machine Scale Set that sits behind your Service Fabric cluster so that the certificate gets installed on all VMs in the scale set.
  • Modify the Service Fabric configuration to make sure that our custom API uses the certificate.

Register a DNS CNAME record

This step actually has nothing to do with Service Fabric but is required if you want to run your API micro-service on SSL (or you could try getting a certificate for mysfcluster.westeurope.cloudapp.azure.com but I don’t think Microsoft will allow that ;)

So what you want is a CNAME record that maps your custom domain name (for this article I’ll use mysfcluster.nl) to the domain of your cluster, e.g. mysfcluster.westeurope.cloudapp.azure.com.
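
For illustration, here’s roughly what that record looks like in zone-file syntax (the exact syntax or UI depends on your DNS provider; note that some providers don’t allow a CNAME on the bare apex domain, in which case a subdomain such as api.mysfcluster.nl works the same way):

mysfcluster.nl.    3600    IN    CNAME    mysfcluster.westeurope.cloudapp.azure.com.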

Get a certificate

Again, this has nothing to do with Service Fabric. You need a server authentication certificate in PFX format that includes the private key and the entire certificate chain. And of course the password that protects the private key.
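
If your certificate authority hands you the certificate, the intermediate chain and your private key as separate PEM files, you can assemble the PFX yourself, for example with OpenSSL (file names below are placeholders; the export password OpenSSL asks for is the password you’ll need when uploading the PFX):

openssl pkcs12 -export -out mysfcluster_nl.pfx -inkey private-key.pem -in certificate.pem -certfile chain.pem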

Upload the certificate to Azure Key Vault

Azure Key Vault can be used to securely store a number of different things: passwords, PFX files, storage account keys, etc. Things you store there can be referenced from Azure Resource Manager templates to be used in web sites, VMs, etc. Uploading a PFX file to Azure Key Vault isn’t as easy as it should be, so luckily for us Chacko Daniel from Microsoft has written a nice PowerShell module that handles this for us.

So what I did was clone the GitHub repository and import the module (from a PowerShell prompt):

PS C:\projects> git clone https://github.com/ChackDan/Service-Fabric.git
PS C:\projects> Import-Module Service-Fabric\Scripts\ServiceFabricRPHelpers\ServiceFabricRPHelpers.psm1

We can now invoke the PowerShell command Invoke-AddCertToKeyVault, which you’ll find below, including the expected output.

PS C:\projects> Invoke-AddCertToKeyVault `
        -SubscriptionId "12345678-aabb-ccdd-eeff-987654321012" `
        -ResourceGroupName MySFResourceGroup `
        -Location westeurope `
        -VaultName "MyKeyVault" `
        -CertificateName "MyAPICert" `
        -Password "eivhqfBw=AGUsLuJ2Z<r" `
        -UseExistingCertificate `
        -ExistingPfxFilePath "C:\projects\mysfcluster_nl.pfx"

Switching context to SubscriptionId 12345678-aabb-ccdd-eeff-987654321012
Ensuring ResourceGroup MySFResourceGroup in westeurope
Using existing vault MyKeyVault in westeurope
Reading pfx file from C:\projects\mysfcluster_nl.pfx
Writing secret to MyAPICert in vault MyKeyVault

Name  : CertificateThumbprint
Value : C83D60162D7BDC62A41516CD5007E4FDDD196201

Name  : SourceVault
Value : /subscriptions/12345678-aabb-ccdd-eeff-987654321012/resourceGroups/MySFResourceGroup/providers/Microsoft.KeyVault/vaults/MyKeyVault

Name  : CertificateURL
Value : https://mykeyvault.vault.azure.net:443/secrets/MyAPICert/e72e1834a1ae4be19f249121cc8fc722

I’ll walk you through the parameters for Invoke-AddCertToKeyVault in order of appearance:

  • SubscriptionId: The id of the Azure subscription that contains your key vault. When the key vault does not yet exist, it will be created.
  • ResourceGroupName: Name of the resource group for your key vault.
  • Location: Key vault location. If you run the PowerShell cmdlet Get-AzureRmLocation you'll get a list of location system names.
  • VaultName: The name of your key vault. When the script cannot find this key vault, it will be created.
  • CertificateName: The name of the PFX resource that is created in the key vault.
  • Password: The password that you used to protect the private key in the PFX file.
  • UseExistingCertificate: Indicates that we are using an existing certificate. Invoke-AddCertToKeyVault can also be used to generate a self-signed certificate and upload that to the key vault.
  • ExistingPfxFilePath: The absolute path to your PFX file.

Install certificate on virtual machines

An Azure Service Fabric cluster is powered by one or more Virtual Machine Scale Sets. A VM Scale Set is a collection of identical VMs that (in the case of Service Fabric) run the micro-services in your Service Fabric applications.

There is very little support for VM Scale Sets in the portal so we use Azure Resource Explorer for this. Once you’ve opened Azure Resource Explorer, browse to the correct resource: the VM Scale Set that powers your SF cluster. In my case, it is called Backend.

Resource Explorer left pane

Once there, you can add a reference to the certificate in the key vault. The resource description contains a reference to the key vault itself and one or more certificate references.

If I follow my own example:

  • the reference to the key vault should be /subscriptions/12345678-aabb-ccdd-eeff-987654321012/resourceGroups/MySFResourceGroup/providers/Microsoft.KeyVault/vaults/MyKeyVault
  • the reference to the certificate should be https://mykeyvault.vault.azure.net:443/secrets/MyAPICert/e72e1834a1ae4be19f249121cc8fc722

You can copy these values from the output of the Invoke-AddCertToKeyVault command. If you save (PUT) the updated VM Scale Set resource description, the certificate will be installed to all VMs in the scale set.
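
For reference, the relevant fragment of the scale set resource roughly looks like this after the edit (property names follow the Microsoft.Compute/virtualMachineScaleSets schema; the exact shape may differ per API version):

"virtualMachineProfile": {
  "osProfile": {
    "secrets": [
      {
        "sourceVault": {
          "id": "/subscriptions/12345678-aabb-ccdd-eeff-987654321012/resourceGroups/MySFResourceGroup/providers/Microsoft.KeyVault/vaults/MyKeyVault"
        },
        "vaultCertificates": [
          {
            "certificateUrl": "https://mykeyvault.vault.azure.net:443/secrets/MyAPICert/e72e1834a1ae4be19f249121cc8fc722",
            "certificateStore": "My"
          }
        ]
      }
    ]
  }
}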

Configure Service Fabric to use the certificate

We actually already did this in the previous post so I’ll summarize here:

  1. Extend your ServiceManifest.xml with an additional named (https) endpoint.
  2. Extend the ApplicationManifest.xml in two places:
    • Add an EndpointBindingPolicy to the ServiceManifestImport. This links the https endpoint to a certificate.
    • Add an EndpointCertificate to the certificates collection. This is a named reference to the thumbprint of the certificate you uploaded to Azure Key Vault earlier.
  3. Modify OwinCommunicationListener. This class is hard-coded to support only http. You can change this to make it support https as well.
  4. Add a ServiceInstanceListener that references the https endpoint.

For more information about service manifests, application manifests and how they are related, check this post.

Conclusion

There are again quite a few steps involved, just as in the previous post on how to get this working locally. Each step by itself isn’t complicated but the entire process takes some time and effort.

I hope this helps in setting up a protected endpoint in Service Fabric :)

Running a local Azure Service Fabric cluster on SSL

Azure Service Fabric is Microsoft’s micro-services platform. Well, it’s actually more than that but that is all well-documented in other places on the interwebs.

It is relatively new and documentation is still a bit behind so I had some trouble in getting the following setup working:

  • I want to run my production cluster on a domain name that is not the default. So instead of mycluster.westeurope.cloudapp.azure.com I want my-api.my-services.nl.
  • The custom API endpoint that is exposed through my cluster should run on https and not the default http.

The documentation, as far as it’s available, is rather fragmented and I couldn’t find the complete story so I thought I’d write it down for future reference. In my humble opinion, step 1 should always be to get it working on a local dev box so that’s what I started out with. Reproducing and fixing an error on your dev box is a lot easier than fixing the same error in a remote cluster. Besides, it helps you better understand what is happening.

Here’s a short summary of what needs to be done, all the details follow below:

  • Generate a self-signed root certificate and install that in the trusted root certificates store.
  • Generate a server authentication certificate that is derived from the root certificate.
  • Modify your hosts file to match the server certificate common name.
  • Extend the service manifest with an additional named endpoint.
  • Extend the Service Fabric application manifest file with a service EndpointBindingPolicy and EndpointCertificate.
  • Modify the generated OwinCommunicationListener to take the https protocol into account.
  • Add an additional named ServiceInstanceListener (besides the one that is listening on http).

If this all sounds like abracadabra, continue reading :)

Self-signed root certificate

This step isn’t absolutely necessary but it makes the entire setup much nicer. We store this certificate with the other trusted root certificates so that certificates that are signed by it are automatically trusted. This prevents browser certificate warnings later. I used the same trick in an earlier post so I’m not going to repeat all the details.

The self-signed root certificate is generated using makecert:

makecert -r -pe -n "CN=SSLTestRoot"
         -b 06/07/2016 -e 06/07/2018
         -ss root -sr localmachine -len 4096

Server authentication certificate

Next step is to use our root certificate to sign our server authentication certificate. Again we use makecert. This time the certificate is placed in the My store.

makecert -pe -n "CN=sfendpoint.local" -b 06/07/2016 -e 06/07/2018
         -eku 1.3.6.1.5.5.7.3.1 -is root -ir localmachine -in SSLTestRoot
         -len 4096 -ss My -sr localmachine

Modify hosts file

The hosts file in C:\Windows\System32\drivers\etc is the first place Windows looks when it needs to resolve a host name like www.google.com. In our case, we want to add an entry that matches the common name in our server authentication certificate and sends the user to 127.0.0.1.

127.0.0.1  sfendpoint.local

Add service endpoint to service manifest

We finished all the necessary preparations on our development machine; the next step is Service Fabric configuration. In the service manifest file of the (API) service we wish to expose over https, we need to add an additional endpoint, besides the (http) endpoint that is already there.

<ServiceManifest Name="My.SF.ApiPkg" Version="1.0.0" ...>
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="ApiType" />
  </ServiceTypes>

  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ExeHost>
        <Program>My.SF.Api.exe</Program>
      </ExeHost>
    </EntryPoint>
  </CodePackage>

  <ConfigPackage Name="Config" Version="1.0.0" />

  <Resources>
    <Endpoints>
      <Endpoint Protocol="http" Name="WebEndpoint" Type="Input" Port="8676" />
      <Endpoint Protocol="https" Name="WebEndpointHttps" Type="Input" Port="8677" />
    </Endpoints>
  </Resources>
</ServiceManifest>

I use port numbers 8676 and 8677 for http and https respectively but that is up to you.

Important note: if you have multiple endpoints, make sure to give each one a unique name. Service Fabric won’t complain but your service will not start. This took me some time to figure out because the error messages do not really point you in the right direction.

Extend the Service Fabric application manifest

The next step is the application manifest file. We need two things here: a reference to the certificate and a link between our micro-service, the certificate and the endpoint. Note that we configure the certificate hash value (thumbprint) outside the application manifest file, in a separate environment configuration file.

<ApplicationManifest ApplicationTypeName="MySFType"
                     ApplicationTypeVersion="1.0.0" ...>
  <Parameters>
    <Parameter Name="Api_SslCertHash" DefaultValue="" />
  </Parameters>
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="My.SF.ApiPkg"
                        ServiceManifestVersion="1.0.0" />
    <ConfigOverrides />
    <Policies>
      <EndpointBindingPolicy EndpointRef="WebEndpointHttps"
                             CertificateRef="my_api_cert" />
    </Policies>
  </ServiceManifestImport>
  <DefaultServices>
    <Service Name="Api">
      <StatelessService ServiceTypeName="ApiType" InstanceCount="-1">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
  <Certificates>
    <EndpointCertificate X509FindValue="[Api_SslCertHash]" Name="my_api_cert" />
  </Certificates>
</ApplicationManifest>

We added an EndpointBindingPolicy that references the https endpoint and the certificate my_api_cert. This tells Service Fabric that for this specific service it should add a certificate to the specified endpoint.

The certificate itself has a name and a thumbprint value that is a reference to a value in an environment-specific configuration file.
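
As a sketch, such an environment-specific parameter file (for example ApplicationParameters\Local.xml in the Visual Studio project) could look like the following; the application name and thumbprint are placeholders, the thumbprint being that of the sfendpoint.local certificate generated earlier:

<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/My.SF">
  <Parameters>
    <!-- Placeholder: thumbprint of the sfendpoint.local certificate. -->
    <Parameter Name="Api_SslCertHash" Value="0123456789ABCDEF0123456789ABCDEF01234567" />
  </Parameters>
</Application>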

Modify OwinCommunicationListener

That was all the necessary Service Fabric configuration. What remains are some code changes. When you add a new stateless api service to your Service Fabric project in Visual Studio, an OwinCommunicationListener class is added. This class is responsible for booting a self-hosted Owin web server on the correct port number.

By default, this class assumes you never want to use https. So what you need to do is replace this line of code (which has a hard-coded http reference):

_listeningAddress = string.Format(
    CultureInfo.InvariantCulture,
    "http://+:{0}/{1}",
    port,
    string.IsNullOrWhiteSpace(_appRoot)
        ? string.Empty
        : _appRoot.TrimEnd('/') + '/');

with this line of code:

_listeningAddress = string.Format(
  CultureInfo.InvariantCulture,
  "{0}://+:{1}/{2}",
  serviceEndpoint.Protocol,
  port,
  string.IsNullOrWhiteSpace(_appRoot)
      ? string.Empty
      : _appRoot.TrimEnd('/') + '/');

in the OpenAsync method. The serviceEndpoint variable should already be declared somewhere in the first few lines of OpenAsync.

Add a ServiceInstanceListener

Last but not least we must tell our service that it should (also) listen on the https endpoint. This happens in the StatelessService.CreateServiceInstanceListeners method that you override in your service class, which in my case looks like this:

internal sealed class Api : StatelessService
{
  public Api(StatelessServiceContext context) : base(context) { }

  protected override IEnumerable<ServiceInstanceListener>
      CreateServiceInstanceListeners()
  {
    return new[]
    {
      new ServiceInstanceListener(
        serviceContext => new OwinCommunicationListener(
            Startup.ConfigureApp, serviceContext, ServiceEventSource.Current,
            "WebEndpoint"), "Http"),

      new ServiceInstanceListener(
        serviceContext => new OwinCommunicationListener(
            Startup.ConfigureApp, serviceContext, ServiceEventSource.Current,
            "WebEndpointHttps"), "Https")
    };
  }
}

Note that each listener references the name of the endpoint it should listen on.

Conclusion

That is ‘all’ there is to it. Well, it’s actually quite a lot but the individual steps aren’t too complicated. Using the instructions above it should now be rather easy to get this working on your machine too.

Next time: how to set this up for an actual Service Fabric cluster running in Azure.

AppVeyor badge for your ASP.NET Core (RC1) project on GitHub

On several GitHub projects nowadays you find these nice badges in the readme.md that tell you whether the current build passed. Until a few days ago I didn’t know how these were implemented but since I have my own small open-source GitHub project now, I wanted a badge. Sounds a bit like gamification if I say it like this but that’s an entirely different topic :)

The badge I’m aiming for is the build badge from AppVeyor, a continuous delivery service for Windows. Out-of-the-box it supports msbuild. Since my project is ASP.NET Core (RC1) with an xUnit.net test suite, some configuration must be added to the project.

Add project to AppVeyor

Adding your GitHub project itself to AppVeyor is really easy: just log in to https://ci.appveyor.com/login with your GitHub account credentials and the rest pretty much speaks for itself.

AppVeyor configuration

The next step is to add an appveyor.yml to the root folder of your project. You can check out my most recent version here; I’ll list it below to be able to explain the parts.

version: 1.0.{build}
# For now just the develop branch.
branches:
  only:
    - develop

# Defines the machine that is used to run build/test/deploy/...
os: Visual Studio 2015

# Called after cloning the repository.
install:
  # Add the v3 NuGet feed and myget.org (for moq.netcore package).
  - nuget sources add -Name api.nuget.org -Source https://api.nuget.org/v3/index.json
  - nuget sources add -Name myget.org -Source https://www.myget.org/F/aspnet-contrib/api/v3/index.json
  # Install dnvm.
  - ps: "&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}"
  # Install coreclr (no need for startup improvement here)
  - dnvm upgrade -r coreclr -NoNative

# Called before building.
before_build:
  - dnu restore
  - cd %APPVEYOR_BUILD_FOLDER%\src\Localization.JsonLocalizer

# Replace default build.
build: off
build_script:
  - dnu build

# Called before running tests.
before_test:
  - cd %APPVEYOR_BUILD_FOLDER%\test\Localization.JsonLocalizer.Tests

# Replace default test.
test: off
test_script:
  - dnx test

My configuration has three stages:

  • Install ASP.NET Core RC1.
  • Prepare and run dnu build.
  • Prepare and run dnx test.

Some details worth noting:

  • By default the Visual Studio 2015 build machine is configured with the v2 NuGet feed. I add the v3 feed and the myget.org feed because that’s where the moq.netcore package lives that I use in my xUnit tests.
  • I use the -NoNative flag for dnvm upgrade. This skips native image generation to improve startup time. The only thing that needs starting up are unit tests and these run just once. Native image compilation costs way more time than I can win back in unit test startup time improvements.
  • AppVeyor defines a number of environment variables, one of which is APPVEYOR_BUILD_FOLDER that points to the folder that the project was cloned into.

Results

On the AppVeyor overview page you can see the result of a successful build, which includes both the build and the tests. You already saw the badge that I included in my readme.md file. The badge itself is an SVG image that is generated to describe your latest build result.
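
For completeness: embedding the badge in your readme.md is a single line of markdown. The exact image URL for your project is shown in the AppVeyor project settings; a hypothetical example (account, project and badge token are placeholders):

[![Build status](https://ci.appveyor.com/api/projects/status/abc123def456?svg=true)](https://ci.appveyor.com/project/myaccount/localization-jsonlocalizer)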

So that’s it, pretty easy once you understand how it works. I didn’t invent all of this myself of course; there’s a nice post by Nimesh Manmohanlal that covers the installation steps. I added the necessary build and test steps.

Let's Encrypt certificates for ASP.NET Core on Azure

Let’s Encrypt is a new certificate authority that provides free certificates for web servers. It issues domain-validated (DV) certificates, meaning that the certificate authority has proven that the requesting party has control over some DNS domain (more on that later). And the best thing: it’s fully automated through an API and a command-line client.

Free DV certificates seem to be the new trend nowadays with Symantec being the next player in the market announcing they’re giving them away for free. Let’s Encrypt issued their first certificate on September 14, 2015 and announced on March 8, 2016 that they were at one million after just three months in public beta.

ASP.NET Core

I happen to be developing an ASP.NET Core website for a customer of ours that required a certificate, so Let’s Encrypt seemed to make sense [1]. The easiest way to automatically connect a Let’s Encrypt certificate to an Azure web site is via the excellent Let’s Encrypt site extension by Simon J.K. Pedersen. Please note that there’s both an x86 and an x64 version.

There’s some excellent guidance on installing and configuring the extension elsewhere on the web so I won’t go into details on that. What I’d like to discuss is how to configure your ASP.NET Core web application in such a way that Let’s Encrypt actually returns a certificate when asked to do so.

ACME

You may wonder: how does Let’s Encrypt actually validate a certificate request? It issues domain-validated certificates, so how does it verify that you are the owner of the domain? Enter ACME: the Automatic Certificate Management Environment. ACME is an IETF internet draft (and still a work-in-progress; for the latest version, check out their GitHub repo).

The entire purpose of the ACME specification is to provide a contract between a certificate authority and an entity requesting a certificate (the applicant) so that the certificate request process can be entirely automated.

If an applicant requests a certificate, he has to provide the URL to which the certificate should be applied, e.g. example.com. Let’s Encrypt now expects a number of files to be present at the following (browsable) URL: http://example.com/.well-known/acme-challenge/. In my website, the contents of this folder are the following:

ACME challenge

So that’s how Let’s Encrypt checks that you own the domain: by checking the presence of a specific set of files in a specific location on the domain you claim to be the owner of. In official terms this is called challenge-response authentication. Please read the ACME specification if you want to know what these files actually mean.
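
To give a rough idea of the mechanism: each challenge file is named after a token handed out by Let’s Encrypt, and its content is that token followed by a fingerprint of your ACME account key, schematically:

/.well-known/acme-challenge/<token>        (file name)
<token>.<account-key-thumbprint>           (file content)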

The Let’s Encrypt site extension makes sure there is a .well-known/acme-challenge folder in the wwwroot folder of your site and that it has the correct contents [2]. The same folder can also be seen from the KUDU console.

Back to ASP.NET Core

So all is well and we call upon our site extension to request and install the certificate. And, well, nothing happens: no errors but also no certificate. The output from the Azure WebJob functions that execute the request provides no details at all (not even in the logging).

What actually goes wrong is two things:

  • ASP.NET Core disables directory browsing by default, and the .well-known/acme-challenge folder must be browsable.
  • ASP.NET Core (and IIS, for that matter) does not serve extensionless files by default.

So how do we fix this? As with most ASP.NET Core configuration: through middleware. First some code, then the explanation:

public class Startup
{
  ...
  public void Configure(IApplicationBuilder app, IHostingEnvironment env)
  {
    var rootPath = Path.GetFullPath(".");
    var acmeChallengePath =
        Path.Combine(rootPath, @".well-known\acme-challenge");

    app.UseDirectoryBrowser(new DirectoryBrowserOptions
    {
      FileProvider = new PhysicalFileProvider(acmeChallengePath),
      RequestPath = new PathString("/.well-known/acme-challenge"),
    });

    app.UseStaticFiles(new StaticFileOptions
    {
      ServeUnknownFileTypes = true
    });
  }
}

Most of the code should be clear but some points of interest:

  • The directory browser middleware must be configured with an absolute path so I take the current path (D:\home\site\wwwroot) and append .well-known\acme-challenge to it.
  • This configuration ensures that only the acme-challenge folder is browsable, not every folder in your website.
  • The static files middleware is configured to serve unknown file types to clients. The files in the acme-challenge folder are all extensionless so without a known file type.
  • It’s impossible to limit serving of unknown file types to a specific folder.

So, with this middleware configuration in place we can again request a certificate and this time it will work. At least it did for me ;-)

I hope this post has given you some background information on cool new technology like Let’s Encrypt and ACME and will help you in setting up Let’s Encrypt for your ASP.NET Core websites.

Notes

  1. We actually tried the old-school way of getting a certificate first but that took so much time we decided to try the Let’s Encrypt route.
  2. The Azure Let’s Encrypt site extension uses the ACMESharp library for actual communication with the Let’s Encrypt API.

ASP.NET Core localization middleware with JSON resource files

I’m working on an ASP.NET Core RC1 project that requires localization of the UI. Damien Bod does a very good job of explaining how to do this using good old resx files. However, I’m writing the entire front-end with Visual Studio Code and resx somehow didn’t seem like a good match. It’s a clunky XML format that requires a header containing an XML schema definition, and although .NET Core supports the format, Visual Studio Code lacks any support (at least none that I could find).

So I thought, why not store resources in JSON files? My initial requirements are text resources only so a simple key-value format should do the trick. JSON seems an obvious match. For comparison, I have a (Dutch) JSON resource file first:

{
   "ResourceKey.Welcome": "Welkom"
}

And the corresponding resx file for expressing ‘the same’ information. This comparison is of course not entirely fair but when you need just a simple key-value mapping, the resx format is a bit bloated to say the least…

<?xml version="1.0" encoding="utf-8"?>
<root>
  <xsd:schema id="root" xmlns="" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                        xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <xsd:import namespace="http://www.w3.org/XML/1998/namespace" />
    <xsd:element name="root" msdata:IsDataSet="true">
      <xsd:complexType>
        <xsd:choice maxOccurs="unbounded">
          <xsd:element name="metadata">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="value" type="xsd:string" minOccurs="0" />
              </xsd:sequence>
              <xsd:attribute name="name" use="required" type="xsd:string" />
              <xsd:attribute name="type" type="xsd:string" />
              <xsd:attribute name="mimetype" type="xsd:string" />
              <xsd:attribute ref="xml:space" />
            </xsd:complexType>
          </xsd:element>
          <xsd:element name="assembly">
            <xsd:complexType>
              <xsd:attribute name="alias" type="xsd:string" />
              <xsd:attribute name="name" type="xsd:string" />
            </xsd:complexType>
          </xsd:element>
          <xsd:element name="data">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="value" type="xsd:string" minOccurs="0" 
                             msdata:Ordinal="1" />
                <xsd:element name="comment" type="xsd:string" minOccurs="0" 
                             msdata:Ordinal="2" />
              </xsd:sequence>
              <xsd:attribute name="name" type="xsd:string" use="required" 
                             msdata:Ordinal="1" />
              <xsd:attribute name="type" type="xsd:string" msdata:Ordinal="3" />
              <xsd:attribute name="mimetype" type="xsd:string" msdata:Ordinal="4" />
              <xsd:attribute ref="xml:space" />
            </xsd:complexType>
          </xsd:element>
          <xsd:element name="resheader">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="value" type="xsd:string" minOccurs="0" 
                             msdata:Ordinal="1" />
              </xsd:sequence>
              <xsd:attribute name="name" type="xsd:string" use="required" />
            </xsd:complexType>
          </xsd:element>
        </xsd:choice>
      </xsd:complexType>
    </xsd:element>
  </xsd:schema>
  <resheader name="resmimetype">
    <value>text/microsoft-resx</value>
  </resheader>
  <resheader name="version">
    <value>2.0</value>
  </resheader>
  <resheader name="reader">
    <value>System.Resources.ResXResourceReader, System.Windows.Forms</value>
  </resheader>
  <resheader name="writer">
    <value>System.Resources.ResXResourceWriter, System.Windows.Forms</value>
  </resheader>
  <data name="ResourceKey.Welcome" xml:space="preserve">
    <value>Welkom</value>
  </data>
</root>

Did you find my key and value all the way at the bottom of the file? The rest of the file is metadata.

So, what does a localization middleware component look like that reads its resources from JSON files? By the way, all the code for this post can be found in this GitHub repository (still very much in beta at the moment of writing). And here’s a good explanation of doing something similar, but with a database instead of JSON files.

Configuration

First, the configuration. This happens in the ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    // Add localization based on JSON files.
    services.AddJsonLocalization(options => options.ResourcesPath = "Resources");

    // Add MVC service and view localization.
    services
        .AddMvc()
        .AddViewLocalization();
}

The call to AddJsonLocalization installs the required localization services, which we will discuss next. At the moment, it has one configuration parameter: ResourcesPath to specify where to look for the JSON resource files.

The framework-supported AddViewLocalization installs an html-safe wrapper around our localization services and an IViewLocationExpander that selects views based on the current culture: LanguageViewLocationExpander. For example, it can generate the view name Views/Home/nl/Action when you’re in Holland, which is pretty cool.
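
So, assuming a Home controller with an Index view and Dutch as the current culture, a folder layout roughly like this lets the culture-specific view be picked up automatically:

Views/
  Home/
    Index.cshtml       (fallback for all cultures)
    nl/
      Index.cshtml     (used when the current culture is Dutch)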

Middleware

Middleware configuration begins with the AddJsonLocalization call, which is an extension method for IServiceCollection.

using System;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.Extensions.Localization;

namespace Microsoft.Extensions.DependencyInjection
{
  using global::Localization.JsonLocalizer;
  using global::Localization.JsonLocalizer.StringLocalizer;

  public static class JsonLocalizationServiceCollectionExtensions
  {
    public static IServiceCollection AddJsonLocalization(
        this IServiceCollection services)
    {
      return AddJsonLocalization(services, setupAction: null);
    }

    public static IServiceCollection AddJsonLocalization(
        this IServiceCollection services,
        Action<JsonLocalizationOptions> setupAction)
    {
      services.TryAdd(new ServiceDescriptor(typeof(IStringLocalizerFactory),
          typeof(JsonStringLocalizerFactory), ServiceLifetime.Singleton));
      services.TryAdd(new ServiceDescriptor(typeof(IStringLocalizer),
          typeof(JsonStringLocalizer), ServiceLifetime.Singleton));

      if (setupAction != null)
      {
        services.Configure(setupAction);
      }
      return services;
    }
  }
}

The AddJsonLocalization method basically adds two additional singleton services: JsonStringLocalizerFactory and JsonStringLocalizer. JsonStringLocalizerFactory is an implementation of IStringLocalizerFactory and this interface provides two factory methods:

public interface IStringLocalizerFactory
{
  IStringLocalizer Create(Type resourceSource);
  IStringLocalizer Create(string baseName, string location);
}

These correspond to the two usage patterns for localizers. The first is for injection into classes, a controller class for example:

public class HomeController : Controller
{
  public HomeController(IHtmlLocalizer<HomeController> localizer)
  {
    var welcomeText = localizer["ResourceKey.Welcome"];
  }
}

The second is called when a localizer is injected directly into a view:

@inject IViewLocalizer Localizer
<span>@Localizer["ResourceKey.Welcome"], Ronald</span>

Suppose the view is Views/Home/Index.cshtml and your application is located in a folder called My.Application, then the second IStringLocalizerFactory method is called with parameters (baseName: "Views.Home.Index.cshtml", location: "My.Application").

Resource location algorithm

I’m not going into details on the JsonStringLocalizerFactory and JsonStringLocalizer classes themselves because the code is on GitHub so you can check it out there. What’s more interesting is the algorithm that looks for resource files; if you actually want to use this middleware, that is probably more useful to know.

Suppose we inject a IHtmlLocalizer<HomeController> into a My.Application.HomeController class. Suppose furthermore we are in Holland so the culture is nl-NL and we have set the JsonLocalizationOptions.ResourcesPath to "Resources". The algorithm will look for a JSON resource file with the following paths in order:

  • My.Application.HomeController.nl-NL.json
  • My/Application.HomeController.nl-NL.json
  • My/Application/HomeController.nl-NL.json
  • Resources.HomeController.nl-NL.json
  • Resources/HomeController.nl-NL.json
  • My.Application.HomeController.nl.json
  • My/Application.HomeController.nl.json
  • My/Application/HomeController.nl.json
  • Resources.HomeController.nl.json
  • Resources/HomeController.nl.json
  • My.Application.HomeController.json
  • My/Application.HomeController.json
  • My/Application/HomeController.json
  • Resources.HomeController.json
  • Resources/HomeController.json

So the algorithm starts with the most specific culture and falls back to less specific cultures. This ‘looking for resource files’ operation is relatively expensive so the result is cached for later use and guaranteed to execute just once.
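
The real implementation is in the GitHub repository, but as a minimal sketch (not the library’s actual types): one way to get the ‘cached and guaranteed to run just once’ behavior is a ConcurrentDictionary of Lazy values, where the expensive lookup delegate only ever runs for the Lazy instance that wins the race:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Globalization;
using System.Threading;

public class ResourceFileCache
{
  // One Lazy<T> per (baseName, culture) key. GetOrAdd plus
  // ExecutionAndPublication guarantees the expensive file probing
  // runs at most once per key, even under concurrent requests.
  private readonly ConcurrentDictionary<string, Lazy<IDictionary<string, string>>> _cache =
      new ConcurrentDictionary<string, Lazy<IDictionary<string, string>>>();

  public IDictionary<string, string> GetOrLoad(
      string baseName,
      CultureInfo culture,
      Func<string, CultureInfo, IDictionary<string, string>> load)
  {
    var key = baseName + "|" + culture.Name;
    return _cache.GetOrAdd(
        key,
        _ => new Lazy<IDictionary<string, string>>(
            () => load(baseName, culture),
            LazyThreadSafetyMode.ExecutionAndPublication)).Value;
  }
}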

What’s next?

The library as it stands now is far from finished. There are still some NotImplementedExceptions to be dealt with so that’s the first step. And some other ideas that I have:

  • Implement ‘lazy’ functionality that allows re-loading of JSON resource files in development mode. This allows you to edit resource files on the fly and see the result after a browser refresh. This should work particularly well in combination with dnx-watch.
  • Offer this as a NuGet package.
  • Allow other resource types. The resx format allows a lot more resource types than just strings. One obvious candidate is images. Of course, this requires an extension of the simple JSON file format I have now with metadata about the type of value I’m reading.

So that’s it. I hope you like this localization middleware component. If you have feedback or ideas on further extensions, please email me at rwwilden_at_gmail_dot_com.