Implementing a Service Broker in .NET part 4: Azure Storage account binding

This is part 4 in a series of posts about writing service brokers in .NET Core. All posts in the series:

  1. Marketplace and service provisioning
  2. Service binding
  3. Provisioning an Azure Storage account
  4. Binding an Azure Storage account (this post)

In the previous post we implemented provisioning and deprovisioning of an Azure Storage account. Because that post was already getting quite long, we postponed the binding and unbinding part to the post you’re reading now.

All source code for this blog post series can be found here.

Azure Storage account authorization

When we bind an application to an Azure Storage account, we must provide the application with the means to authorize against the account.

There are a few ways to authorize for Azure Storage:

  • Azure Active Directory: a client application requests an access token from Azure AD and uses this token to access the Azure Storage account. This is supported only for blob and queue storage.
  • Shared Key.
  • Shared Access Signatures (SAS): a URI that grants access to Azure Storage resources for a specific amount of time.
  • Anonymous access: anonymous access can be enabled at the storage container or blob level.

SAS tokens will not work for a service broker because they are valid only for a limited amount of time, while a binding may live indefinitely. And since it’s not a lot of fun to write a binding implementation for anonymous access, we’ll skip that as well.

That leaves us with shared keys and Azure AD. Since Azure AD authorization for Azure Storage is still in preview, I guess that would be a nice challenge 😊 And of course it is still possible to provide the shared key as well, so client applications can choose between Azure AD and shared keys as their means of authorization.

What are we building?

When we bind an Azure Storage account we need to provide the application that we bind to with all the information that is necessary to access the storage account. So what information does a client application need?

First we need the storage account URLs. These are URLs of the form <account>.blob.core.windows.net, <account>.queue.core.windows.net, <account>.table.core.windows.net and <account>.file.core.windows.net.

Next is the means to authorize. The client application that we bind to should be able to use the OAuth 2.0 client credentials grant flow so we need a client id, a client secret, a token endpoint and the scopes (permissions) to authorize for. This means that when we bind, we must

  • create an Azure AD application,
  • generate a client secret for the AD application and
  • assign the AD application principal to the storage account in a role that gives the right set of permissions.

The client application needs to receive all the necessary information to be able to start an OAuth 2.0 client credentials flow.

In addition, we would like to provide the shared keys for the storage account so the client application can choose how to authenticate: via Azure AD or via a shared key.
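To make this concrete, here is a sketch of the token request a bound client application would perform against the Azure AD v2.0 endpoint. The tenant id, client id and client secret are placeholders that come from the binding credentials; the scope matches the one the broker hands out later in this post:

```http
POST /{tenant-id}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
  &client_id={client-id}
  &client_secret={client-secret}
  &scope=https://management.core.windows.net/.default
```

The access token in the response can then be sent as a Bearer token on requests to the storage endpoints.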

Additional permissions for the service broker

The service broker needs some additional permissions besides those from the custom role we defined in the previous post. It should now also be able to create Azure AD applications and assign these to an Azure Storage role.

This means we need to assign Microsoft Graph API permissions to the Azure Storage Service Broker AD application:

We assign the Application.ReadWrite.OwnedBy permission so that the service broker is able to manage the AD applications it owns.

And because we perform the additional action of assigning a service principal to an Azure Storage role, we also need to extend the service broker role definition with one extra permission: Microsoft.Authorization/roleAssignments/write:

lib/azcli/service-broker-role.json
{
    "Name": "Azure Storage Service Broker",
    "IsCustom": true,
    "Description": "Can create new resource groups and storage accounts",
    "Actions": [
      "Microsoft.Resources/subscriptions/resourceGroups/write",
      "Microsoft.Resources/subscriptions/resourceGroups/read",
      "Microsoft.Resources/subscriptions/resourceGroups/delete",

      "Microsoft.Storage/storageAccounts/listkeys/action",
      "Microsoft.Storage/storageAccounts/regeneratekey/action",
      "Microsoft.Storage/storageAccounts/delete",
      "Microsoft.Storage/storageAccounts/read",
      "Microsoft.Storage/storageAccounts/write",
      "Microsoft.Storage/checknameavailability/read",

      "Microsoft.Authorization/roleAssignments/write"
    ],
    "NotActions": [
    ],
    "AssignableScopes": [
      "/subscriptions/4c70a177-b978-43f9-9fc0-1e50dd20271f"
    ]
  }

The new Microsoft.Authorization/roleAssignments/write permission at the end of the Actions array is the only change to the custom service broker role.

The end result

To put things in context, the screenshot below shows the result of a bind operation against the my-rwwilden service.

First we bind the rwwilden-client app to the my-rwwilden service, which is a service instance created by the rwwilden-broker service broker. When provisioning this instance we created an Azure Resource Group and an Azure Storage account (check the previous post for more details).

Next we get the environment settings for the rwwilden-client application and it now has a set of credentials in the VCAP_SERVICES environment variable. In the first block we have the settings that allow the rwwilden-client application to get an OAuth 2.0 access token that authorizes requests to the Azure Storage API. The second block has the shared keys that provide another way to authorize to Azure Storage. And in the third block we see the API endpoints for accessing all storage services.
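Schematically, the credentials that end up in VCAP_SERVICES look something like the following. The values and exact property casing here are illustrative; the real shape is produced by the broker’s StorageAccountCredentials class:

```json
{
  "oAuthClientCredentials": {
    "clientId": "<ad-application-client-id>",
    "clientSecret": "<generated-secret>",
    "tokenEndpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
    "scopes": [ "https://management.core.windows.net/.default" ],
    "grantType": "client_credentials"
  },
  "sharedKeys": [
    { "name": "key1", "permissions": "Full", "value": "<key-value>" },
    { "name": "key2", "permissions": "Full", "value": "<key-value>" }
  ],
  "urls": {
    "blobStorageUrl": "https://<account>.blob.core.windows.net",
    "queueStorageUrl": "https://<account>.queue.core.windows.net",
    "tableStorageUrl": "https://<account>.table.core.windows.net",
    "fileStorageUrl": "https://<account>.file.core.windows.net"
  }
}
```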

Let’s see what this looks like in Azure. Remember, we created an Azure AD app and service principal specifically for the current binding. The service principal is assigned to two roles: Storage Blob Data Contributor (Preview) and Storage Queue Data Contributor (Preview). Let’s see whether the principal that was created is assigned to these two roles:

At the top in box 1 you see that we are looking at a storage account named 65cef50071f949f0819c5308, the same account name we see appearing in the storage urls (e.g.: https://65cef50071f949f0819c5308.blob.core.windows.net). At the bottom in box 2 you can see that a service principal named fdc45ce4-5f16-43d6-ae4d-ee108428289f is assigned to the two roles. The service principal name happens to be the name of the binding that was provided to the service broker when binding the service.

Details

For the current version of the broker, I added all code directly to the BindAsync method of the ServiceBindingBlocking class, creating a rather large method that does everything. In the next version of the broker I will switch to an asynchronous implementation and take the opportunity to clean things up.

But for now, we’ll just take a look at what’s happening inside the BindAsync method. First, we retrieve all storage accounts from the Azure subscription that have a tag that matches the service instance id:

src/broker/Bindings/ServiceBindingBlocking.cs
// Retrieve Azure Storage account.
var storageAccounts = await _azureStorageProviderClient.GetStorageAccountsByTag("cf_service_instance_id", context.InstanceId);
var nrStorageAccounts = storageAccounts.Count();
if (nrStorageAccounts == 0)
{
    var message = $"Could not find storage account with tag: cf_service_instance_id = {context.InstanceId}";
    _log.LogWarning(message);
    throw new ArgumentException(message, nameof(context));
}

if (nrStorageAccounts > 1)
{
    var message = $"Found multiple storage accounts for tag: cf_service_instance_id = {context.InstanceId}";
    _log.LogError(message);
    throw new ArgumentException(message, nameof(context));
}

var storageAccount = storageAccounts.Single();
var storageAccountId = storageAccount.Id;

This is also a fine opportunity to check that the bind request is correct by verifying that a storage account tagged with the service instance id actually exists.

Next we create the Azure AD application that corresponds to this binding. Note that we give it a display name and an identifier URI that match the binding id (the DisplayName and IdentifierUris properties):

src/broker/Bindings/ServiceBindingBlocking.cs
// Create an Azure AD application.
var clientSecret = GeneratePassword();
var application = new Application
{
    DisplayName = context.BindingId,
    IdentifierUris = { $"https://{context.BindingId}" },
    PasswordCredentials =
    {
        new PasswordCredential
        {
            StartDateTime = DateTimeOffset.UtcNow,
            EndDateTime = DateTimeOffset.UtcNow.AddYears(2),
            KeyId = Guid.NewGuid(),
            SecretText = clientSecret
        }
    },
    SignInAudience = SignInAudience.AzureADMyOrg,
    Tags = { $"cf_service_id:{request.ServiceId}", $"cf_plan_id:{request.PlanId}", $"cf_binding_id:{context.BindingId}" }
};
var createdApplication = await _msGraphClient.CreateApplication(application);
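Under the hood, CreateApplication boils down to a POST against the Microsoft Graph applications endpoint, roughly as sketched below. The beta version segment is an assumption (application management was not yet available in v1.0 of the Graph API at the time of writing), and the payload is abbreviated:

```http
POST https://graph.microsoft.com/beta/applications
Authorization: Bearer {graph-access-token}
Content-Type: application/json

{
  "displayName": "{binding-id}",
  "identifierUris": [ "https://{binding-id}" ],
  "signInAudience": "AzureADMyOrg",
  "passwordCredentials": [ { "secretText": "{generated-secret}", "keyId": "{new-guid}" } ]
}
```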

The next step is to create the service principal that corresponds to the AD application:

src/broker/Bindings/ServiceBindingBlocking.cs
// Create a service principal for the application in the same tenant.
var servicePrincipal = new ServicePrincipal
{
    AccountEnabled = true,
    AppId = createdApplication.AppId,
    DisplayName = createdApplication.DisplayName,
    Tags = { $"cf_service_id:{request.ServiceId}", $"cf_plan_id:{request.PlanId}", $"cf_binding_id:{context.BindingId}" }
};
var createdServicePrincipal = await _msGraphClient.CreateServicePrincipal(servicePrincipal);
var principalId = Guid.Parse(createdServicePrincipal.Id);

We then assign this principal to two built-in Azure Storage roles, identified by their well-known role definition ids:

src/broker/Bindings/ServiceBindingBlocking.cs
// Assign service principal to roles Storage Blob Data Contributor and Storage Queue Data Contributor.
var storageBlobDataContributorRoleId = Guid.Parse("ba92f5b4-2d11-453d-a403-e96b0029c9fe");
await GrantPrincipalAccessToStorageAccount(storageAccountId, storageBlobDataContributorRoleId, principalId);

var storageQueueDataContributorRoleId = Guid.Parse("974c5e8b-45b9-4653-ba55-5f855dd0fb88");
await GrantPrincipalAccessToStorageAccount(storageAccountId, storageQueueDataContributorRoleId, principalId);
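Under the hood, GrantPrincipalAccessToStorageAccount amounts to a single call to the Azure role assignments REST API, sketched below. The api-version shown is an assumption; the role assignment name is a freshly generated GUID and the scope is the storage account’s resource id. This is exactly the operation that the Microsoft.Authorization/roleAssignments/write permission unlocks:

```http
PUT https://management.azure.com/{storage-account-id}/providers/Microsoft.Authorization/roleAssignments/{new-guid}?api-version=2015-07-01
Content-Type: application/json

{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscription-id}/providers/Microsoft.Authorization/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe",
    "principalId": "{service-principal-object-id}"
  }
}
```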

Because we want to give our client application some options to choose from when accessing the storage account, we also get the access keys to return in the credentials object:

src/broker/Bindings/ServiceBindingBlocking.cs
// Get the access keys for the storage account.
var storageAccountKeys = await _azureStorageClient.GetStorageAccountKeys(storageAccountId);

We finally have all the necessary information to build our credentials object that will be added to the VCAP_SERVICES environment variable of the client application that we bind to:

src/broker/Bindings/ServiceBindingBlocking.cs
return new ServiceBinding
{
    Credentials = JObject.FromObject(new StorageAccountCredentials
    {
        Urls =
        {
            BlobStorageUrl = $"https://{storageAccount.Name}.blob.core.windows.net",
            QueueStorageUrl = $"https://{storageAccount.Name}.queue.core.windows.net",
            TableStorageUrl = $"https://{storageAccount.Name}.table.core.windows.net",
            FileStorageUrl = $"https://{storageAccount.Name}.file.core.windows.net",
        },
        SharedKeys = storageAccountKeys
            .Select(key => new SharedKey
            {
                Name = key.KeyName,
                Permissions = key.Permissions.ToString(),
                Value = key.Value
            })
            .ToArray(),
        OAuthClientCredentials =
        {
            ClientId = createdApplication.AppId,
            ClientSecret = clientSecret,
            TokenEndpoint = $"https://login.microsoftonline.com/{_azureAuthOptions.TenantId}/oauth2/v2.0/token",
            Scopes = new[] { "https://management.core.windows.net/.default" },
            GrantType = "client_credentials"
        }
    })
};

Conclusion

The last two posts had less to do with service brokers and more with Azure. However, you only run into real issues with implementing service brokers when you provision and bind real services. One issue I already anticipated is that provisioning and binding services may take time. So instead of doing this in a blocking way, we may want to leverage the asynchronous support that the OSBAPI offers.

Another thing that’s important is doing everything you can to keep your service broker stateless. This essentially means that you must encode the information that Cloud Foundry provides inside your backend system. For example, when binding we receive a binding id from PCF. We use this binding id as the name for an Azure AD application. When we unbind, we get the same binding id from PCF so we can locate the Azure AD app and delete it. This may not be possible in every backend system, which means we would have to keep track somewhere of how Cloud Foundry identifiers (service instance and binding ids) map to backend concepts.

In the next post we will implement asynchronous service provisioning and polling to better handle long-running operations.

Implementing a Service Broker in .NET part 3: Azure Storage account provisioning

This is part 3 in a series of posts about writing service brokers in .NET Core. All posts in the series:

  1. Marketplace and service provisioning
  2. Service binding
  3. Provisioning an Azure Storage account (this post)
  4. Binding an Azure Storage account

In the previous posts we implemented a service catalog, service (de)provisioning and service (un)binding. Both provisioning and binding were blocking operations that happened in-memory. In this post we will give some body to the implementation by provisioning an actual backend service: an Azure Storage account.

All source code for this post can be found here.

Azure, Azure Storage and Azure Active Directory

If you don’t know anything about Azure or Azure Storage, here’s a (very) short conceptual introduction to help explain the remainder of the post.

  • Azure Active Directory (AAD) is Microsoft’s identity service in the cloud. It stores user identities and service principals and implements the OAuth 2.0 and OpenID Connect protocols. We will use the OAuth 2.0 client credentials grant flow to authorize the service broker to perform the necessary operations on Azure.
  • An Azure Subscription is the billing unit that contains all the Azure resources you work with. If you want to do anything with Azure you need a subscription with payment details (a credit card for example).
  • An Azure Resource Group is a container for your Azure resources. Usually you group resources that belong together (e.g.: for one application) into one resource group. A resource group is also a security boundary in the sense that you can authorize principals to perform certain operations on the resource group and the resources within.
  • An Azure Storage account gives access to Azure Blob Storage, File Storage, Table Storage and Queues.

What are we building?

The service broker we are developing will use the OAuth 2.0 client credentials grant flow to obtain a token that authorizes the bearer to perform the necessary Azure operations. A custom role will be defined that gives the service broker exactly the set of permissions required.

Inside Cloud Foundry we have the concept of orgs and spaces as security boundaries. Azure Subscriptions and Resource Groups are at a similar abstraction level. However, creating a new subscription from the service broker and linking credit card details would become a little too complex for now, so we take the following approach:

  • Provisioning:
    1. When a request comes in to provision a new Azure Storage account, we take the org/space combination and create a resource group with the name <org guid>_<space_guid> (for example: 109718b6-e892-41e7-8993-09ace9544385_7e5f5bc3-1da9-4f14-8827-d88c09affe02). If the resource group already exists we do nothing.
    2. Inside the resource group, we create a new storage account whose name derives from the service instance id (Azure Storage account names have a maximum length of 24 characters, while service instance ids in PCF are GUIDs that are 32 characters long once the dashes are stripped).
  • Deprovisioning:
    1. We remove the storage account.
    2. If no other resources are provisioned inside the resource group, we delete the resource group.
  • Binding:
    1. We retrieve the storage connection string and return it inside the credentials object.
  • Unbinding:
    1. This is a no-op, nothing needs to happen on the Azure side.
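For step 2 of provisioning, the storage account name can be derived by stripping the dashes from the service instance GUID and truncating to 24 characters. A quick shell sketch (the GUID is just an example value):

```shell
INSTANCE_ID="109718b6-e892-41e7-8993-09ace9544385"  # example service instance id
ACCOUNT_NAME=$(echo "$INSTANCE_ID" | tr -d '-' | cut -c1-24)
echo "$ACCOUNT_NAME"  # → 109718b6e89241e7899309ac (24 characters)
```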

Custom Azure role

Following the principle of least privilege, we want to give our service broker the minimum set of permissions required to perform the task at hand. So it should be able to create, list and delete resource groups and create, list and delete storage accounts. In addition, the service broker should be able to read storage connection strings during bind operations.

This leads us to the following role definition:

lib/azcli/service-broker-role.json
{
    "Name": "Azure Storage Service Broker",
    "IsCustom": true,
    "Description": "Can create new resource groups and storage accounts",
    "Actions": [
      "Microsoft.Resources/subscriptions/resourceGroups/write",
      "Microsoft.Resources/subscriptions/resourceGroups/read",
      "Microsoft.Resources/subscriptions/resourceGroups/delete",

      "Microsoft.Storage/storageAccounts/listkeys/action",
      "Microsoft.Storage/storageAccounts/regeneratekey/action",
      "Microsoft.Storage/storageAccounts/delete",
      "Microsoft.Storage/storageAccounts/read",
      "Microsoft.Storage/storageAccounts/write",
      "Microsoft.Storage/checknameavailability/read"
    ],
    "NotActions": [
    ],
    "AssignableScopes": [
      "/subscriptions/4c70a177-b978-43f9-9fc0-1e50dd20271f"
    ]
  }

With this role definition we can create the role in our Azure subscription using the Azure CLI:

az login
az configure --defaults location=westeurope
az account set --subscription 4c70a177-b978-43f9-9fc0-1e50dd20271f
az role definition create --role-definition service-broker-role.json

A short inspection in the Azure portal tells us that our role has been created:

If you wonder where the action names (e.g.: Microsoft.Storage/storageAccounts/read) come from, you can find the complete list here.

Azure AD application

The next step is to create an Azure AD application and service principal so that our service broker can obtain an access token that allows it to perform the required operations. The service principal will be assigned to the role we just defined.

I chose to create the AAD application from the Azure portal and the result is an application named Azure Storage Service Broker with client id b2213c77-9d93-474b-9b7f-89a1f0040162:

Next we generate a client secret that, together with the client id, allows the service broker to authenticate for this AD application using the standard OAuth 2.0 client credentials grant flow.

Finally we assign the service principal that corresponds to the Azure AD application to the role we created earlier:

az ad sp list --display-name 'Azure Storage Service Broker' | jq '.[0].objectId'
az role assignment create \
    --assignee-object-id 5afa5a58-fa38-4122-a114-34b989ed88b4 \
    --role 'Azure Storage Service Broker'

First we list all service principals with the name Azure Storage Service Broker and take the object id of the first result. Next we assign the Azure Storage Service Broker role to this principal via --assignee-object-id.

We have now done all the preparatory work on the Azure side, back to our service broker application.

Azure REST API authorization

The first thing we need to worry about is getting the proper authorization for performing all desired operations. For this we use the Microsoft Authentication Library for .NET (MSAL). MSAL lets us acquire tokens from Azure AD using the OAuth 2.0 client credentials flow via the ConfidentialClientApplication class:

src/azure/Auth/AzureAuthorizationHandler.cs
private static readonly TokenCache AppTokenCache = new TokenCache();

private readonly IConfidentialClientApplication _clientApplication;

public AzureAuthorizationHandler(IOptions<AzureRMAuthOptions> azureRMAuthOptions)
{
    var azureRMAuth = azureRMAuthOptions.Value;
    _clientApplication = new ConfidentialClientApplication(
        azureRMAuth.ClientId,
        $"{azureRMAuth.Instance}{azureRMAuth.TenantId}",
        $"https://{azureRMAuth.ClientId}",
        new ClientCredential(azureRMAuth.ClientSecret),
        null,
        AppTokenCache);
}

We need a number of settings, most of which are defined in the Azure AD app we created earlier. Here’s an overview of them (from a Cloud Foundry user-provided service which we will use later):

The following settings are necessary to be able to get an authorization token (via client credentials flow) from Azure AD that grants the bearer the permissions we defined earlier in our custom role:

  • client_id: the id of the Azure AD application (OAuth 2.0 Client Identifier)
  • client_secret: a secret shared between Azure AD and our service broker (OAuth 2.0 Client Password)
  • instance and tenant id: together form the base url for the OAuth 2.0 token endpoint, in this case: https://login.microsoftonline.com/e402c5fb-58e9-48c3-b567-741c4cef0b96/oauth2/v2.0/token (OAuth 2.0 Token Endpoint)
  • redirect_uri: part of the OAuth 2.0 spec, but not relevant for the client credentials flow, so we can enter any valid URI we like here (null is not accepted)
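As a quick check, composing the token endpoint from the instance and tenant id settings (using the tenant id from the example above):

```shell
INSTANCE="https://login.microsoftonline.com/"
TENANT_ID="e402c5fb-58e9-48c3-b567-741c4cef0b96"
TOKEN_ENDPOINT="${INSTANCE}${TENANT_ID}/oauth2/v2.0/token"
echo "$TOKEN_ENDPOINT"
# → https://login.microsoftonline.com/e402c5fb-58e9-48c3-b567-741c4cef0b96/oauth2/v2.0/token
```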

Azure REST API operations

Every Azure operation has a corresponding REST API call. For the purpose of our service broker I wrote a small Azure REST API client library containing the operations we need. I made use of IHttpClientFactory to create typed HTTP clients, as described here.

The library has one entry point AddAzureServices for adding all client middleware dependencies:

src/azure/ServiceCollectionExtensions.cs
public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddAzureServices(
        this IServiceCollection services,
        Action<AzureOptions> configureAzureOptions,
        Action<AzureADAuthOptions> configureAzureADAuthOptions)

One example dependency that is added to the service collection is a typed http client for accessing Azure Storage:

src/azure/ServiceCollectionExtensions.cs
services
    .AddHttpClient<IAzureStorageClient, AzureStorageClient>((serviceProvider, client) =>
    {
        var azureConfig = serviceProvider.GetRequiredService<AzureOptions>();
        client.BaseAddress =
            new Uri($"https://management.azure.com/subscriptions/{azureConfig.SubscriptionId}/resourceGroups/");
    })
    .AddHttpMessageHandler<AzureAuthorizationHandler>();

We add a typed HTTP client that implements the interface IAzureStorageClient and set the base address for accessing the Azure REST API. In addition, we add a DelegatingHandler implementation that fetches an authorization token and sets it on every request.

Back to the service broker

With all the plumbing out of the way we can finally implement a service broker that provisions Azure Storage accounts. Let’s take the provisioning step as an example. All code samples below are from the ServiceInstanceBlocking.ProvisionAsync method (see the first blog post for details on this method).

src/broker/Lib/ServiceInstanceBlocking.cs
public async Task<ServiceInstanceProvision> ProvisionAsync(ServiceInstanceContext context, ServiceInstanceProvisionRequest request)
{
    LogContext(_log, "Provision", context);
    LogRequest(_log, request);

    var orgId = request.OrganizationGuid;
    var spaceId = request.SpaceGuid;
    var resourceGroupName = $"{orgId}_{spaceId}";

The first step is to determine the name of the resource group, a combination of the org and space GUIDs. Next, we create the resource group if it does not exist yet:

21
22
// Create resource group if it does not yet exist.
var exists = await _azureResourceGroupClient.ResourceGroupExists(resourceGroupName);
if (exists)
{
    _log.LogInformation($"Resource group {resourceGroupName} exists");
}
else
{
    _log.LogInformation($"Resource group {resourceGroupName} does not exist: creating");

    var resourceGroup = await _azureResourceGroupClient.CreateResourceGroup(new ResourceGroup
    {
        Name = resourceGroupName,
        Location = "westeurope",
        Tags = new Dictionary<string, string>
        {
            { "cf_org_id", orgId },
            { "cf_space_id", spaceId }
        }
    });
    _log.LogInformation($"Resource group {resourceGroupName} created: {resourceGroup.Id}");
}

Note that we apply some tags to the resource group to be able to link it back to our Cloud Foundry environment. The final step is to create the Azure Storage account itself. A lot of the properties are hard-coded for now: location is always westeurope, the SKU is Standard_LRS, etc. In a later blog post we will see how to parameterize these properties.

// Create storage account.
var storageAccountName = context.InstanceId.Replace("-", "").Substring(0, 24);
await _azureStorageClient.CreateStorageAccount(
    resourceGroupName,
    new StorageAccount
    {
        Name = storageAccountName,
        Kind = StorageKind.StorageV2,
        Location = "westeurope",
        Properties = new StorageAccountProperties
        {
            AccessTier = StorageAccessTier.Hot,
            Encryption = new StorageEncryption
            {
                KeySource = StorageEncryptionKeySource.Storage,
                Services = new StorageEncryptionServices
                {
                    Blob = new StorageEncryptionService { Enabled = true },
                    File = new StorageEncryptionService { Enabled = true },
                    Table = new StorageEncryptionService { Enabled = true },
                    Queue = new StorageEncryptionService { Enabled = true }
                }
            },
            SupportsHttpsTrafficOnly = true
        },
        Sku = new StorageSku
        {
            Name = StorageSkuName.Standard_LRS,
            Tier = StorageSkuTier.Standard
        },
        Tags = new Dictionary<string, string>
        {
            { "cf_org_id", orgId },
            { "cf_space_id", spaceId },
            { "cf_service_instance_id", context.InstanceId }
        }
    });

Again we provide some tags that we use to link Azure resources to CF service instances.

Service broker configuration

The new service broker needs a bit of configuration to be able to authorize and perform operations. There are a number of ways to provide this configuration:

  • in appsettings.<env>.json, but then we would have to commit values to source control that probably vary per environment
  • directly from the environment by using cf set-env, as we did with the basic authentication password in the first post, but the number of settings has grown so this becomes a bit cumbersome
  • via a user-provided service instance

I opted for the latter approach by defining two user-provided service instances, one for settings concerning authorization and one for settings concerning the Azure subscription we target. The screenshot below shows how to create the user-provided service instance for the authorization settings by providing a JSON object with these settings.
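For reference, the JSON object for the authorization settings could look like the following and be passed to cf create-user-provided-service with the -p flag. The client id is the one we saw earlier for the Azure Storage Service Broker app; the other values are placeholders:

```json
{
  "client_id": "b2213c77-9d93-474b-9b7f-89a1f0040162",
  "client_secret": "<client-secret>",
  "instance": "https://login.microsoftonline.com/",
  "tenant_id": "<tenant-id>",
  "redirect_uri": "https://b2213c77-9d93-474b-9b7f-89a1f0040162"
}
```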

Next we bind the user-provided service to our rwwilden-broker app:

After binding the service we show the environment for the app. You can see that the credentials are available in the VCAP_SERVICES environment variable.

Steeltoe

As you can see from the last screenshot, we have one VCAP_SERVICES environment variable with our settings buried deep within. We could use some help parsing this. Lucky for us, a library exists that can help us do just that: Steeltoe. Part of the Steeltoe set of libraries is Steeltoe.Extensions.Configuration.CloudFoundryCore that helps provide settings from VCAP_SERVICES in a more readable format via the CloudFoundryServicesOptions class.

This is in many ways still a dictionary of properties so we need to perform some translation to get to the AzureRMAuthOptions class that the small Azure library we wrote expects. You can check out the Startup class to see how that works.

Testing

We now have a new version of the service broker running inside Pivotal Cloud Foundry that actually provisions a backend resource: an Azure Storage account inside a resource group. The service broker receives its configuration from two user-provided service instances and has exactly the set of permissions required to do its job.

Now let’s see if all this works. Maybe you remember from the previous posts that the service is named rwwilden (not that good a name anymore, but alas). There is one service plan called basic. So we can create a service instance as follows:

Note that I introduced timing information to show how long it takes before the command returns. In this case it takes about 27s. Remember that we implemented a blocking version of service instance creation so somewhere a thread is blocked for 27s. Not the worst for these one-off operations but we could do better (which is the topic of a next post).

Let’s check the Azure portal to see if a resource group is created with a storage account:

I underlined the interesting parts:

  • the name of the resource group is a combination of the CF org and space ids
  • the resource group has two tags: the cf_org_id and the cf_space_id
  • the resource group contains one resource, a storage account whose name consists of the first 24 characters of the service instance id (with the dashes removed)

So it seems all our efforts paid off and our service broker can provision Azure Storage accounts! Let’s open the Storage account itself:

As you can see it has the three tags we defined and the hard-coded properties we specified. Now let’s create another service in the same org/space. The expected behavior is a new Storage account in the same resource group:

As you can see this takes about the same amount of time. A quick check in the Azure portal reveals that a second storage account is created inside the resource group:

Now let’s see if deprovisioning also works by deleting the two service instances:

Both operations succeed and a check in the Azure portal reveals that both Storage accounts and the Resource Group they were a part of have disappeared.

Conclusion

In this (long) post we added a small Azure service library, implemented a custom Azure role for our service broker and configured the service broker to get an authorization token for performing a number of Azure operations. The primary goal for this exercise was to gain some experience implementing a real service broker. Staying with the in-memory version of the previous blog posts does not expose us to any problems we might encounter in the real world.

For this post we just implemented service provisioning and deprovisioning. The next post will handle binding and unbinding.

After that, we will turn our attention to asynchronous provisioning and binding.

Implementing a Service Broker in .NET part 2: service binding

This is part 2 in a series of posts about writing service brokers in .NET Core. In the previous post we implemented the bare minimum: a catalog and a blocking implementation of (de)provisioning a service. In this post we will look at a blocking implementation of service (un)binding. All posts in the series:

  1. Marketplace and service provisioning
  2. Service binding (this post)
  3. Provisioning an Azure Storage account
  4. Binding an Azure Storage account

Setting the stage

As in the first post, we implement (parts of) the Open Service Broker API specification. We use the OpenServiceBroker .NET library, which already defines all necessary endpoints and provides implementation hooks for binding and unbinding. We use Pivotal Cloud Foundry, hosted at https://run.pivotal.io, for testing our implementation and the CF CLI for communicating with the platform.

All source code for this blog post can be found at: https://github.com/orangeglasses/service-broker-dotnet/tree/master.

What to bind to

When we want to bind a service to an application, we need an actual application. So we implement a second (empty) .NET Core application.
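Pushing that second application is a single CLI call; the memory setting is just an example, and the app name matches the one used in the rest of this post:

```shell
# Push the (empty) client application to PCF.
cf push rwwilden-client -m 256M
```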

We now have two applications: the service broker and a client application that we can bind to called rwwilden-client.

Updating the catalog

In the first post we introduced a service catalog that advertised the rwwilden service. We chose to make the service not-bindable because at that time, service binding was not implemented. When we try to bind the service anyway, an error occurs:

So we need to update the catalog to advertise a bindable service:

src/broker/Lib/CatalogService.cs
private static readonly Task<Catalog> CatalogTask = Task.FromResult(
    new Catalog
    {
        Services =
        {
            // https://github.com/openservicebrokerapi/servicebroker/blob/v2.14/spec.md#service-object
            new Service
            {
                Id = ServiceId,
                Name = "rwwilden",
                Description = "The magnificent and well-known rwwilden service",

                // This service broker now has support for service binding so we will set this property to true.
                Bindable = true,
                BindingsRetrievable = false,

                // This service broker will be used to provision instances so fetching them should also be supported.
                InstancesRetrievable = true,

                // No support yet for service plan updates.
                PlanUpdateable = false,

                Metadata = ServiceMetadata,

                Plans =
                {
                    new Plan
                    {
                        Id = BasicPlanId,
                        Name = "basic",
                        Description = "Basic plan",
                        Bindable = true,
                        Free = true,
                        Metadata = BasicPlanMetadata
                    }
                }
            }
        }
    });

public Task<Catalog> GetCatalogAsync() => CatalogTask;

The only changes are the two places where we set Bindable to true: once at the service level and once at the plan level. Note that setting Bindable to true at the plan level alone would also have been enough. Lower-level settings override higher-level ones.

Bind and unbind

Next step is to implement binding and unbinding. There are four different types of binding defined by the OSBAPI spec: credentials, log drain, route service and volume mounts. For this post we will implement the most common one: credentials. Since our service broker does not have an actual backing service, this is quite simple. In real life, you might have a MySQL service broker that provisions a database during bind and returns a connection string that allows your application to access the database.

The OSBAPI server library I used in the previous post provides hooks for implementing blocking (un)binding in the form of the IServiceBindingBlocking interface so we just need to implement the BindAsync and UnbindAsync methods:

src/broker/Lib/ServiceBindingBlocking.cs
public Task<ServiceBinding> BindAsync(ServiceBindingContext context, ServiceBindingRequest request)
{
    LogContext(_log, "Bind", context);
    LogRequest(_log, request);

    return Task.FromResult(new ServiceBinding
    {
        Credentials = JObject.FromObject(new
        {
            connectionString = "<very secret connection string>"
        })
    });
}

public Task UnbindAsync(ServiceBindingContext context, string serviceId, string planId)
{
    LogContext(_log, "Unbind", context);
    _log.LogInformation($"Unbind: {{ service_id = {serviceId}, planId = {planId} }}");

    return Task.CompletedTask;
}

As you can see, our bind implementation simply returns a JObject with a very secret connection string.

The final change to our code is to register the IServiceBindingBlocking implementation with the DI container:

src/broker/Startup.cs
services
    .AddTransient<ICatalogService, CatalogService>()
    .AddTransient<IServiceInstanceBlocking, ServiceInstanceBlocking>()
    .AddTransient<IServiceBindingBlocking, ServiceBindingBlocking>()
    .AddOpenServiceBroker();

Updating the service broker

When we push the new service broker application, the platform (PCF) does not yet know that the service broker has changed. So when we try to bind a service to an application, this still fails with the error: the service instance doesn’t support binding. To fix this, we can update the service broker using cf update-service-broker:
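Updating the registration is a one-liner. The broker name, username and password below are placeholders for whatever values you used when creating the broker:

```shell
# Tell PCF to re-read the catalog from the already-pushed broker application.
cf update-service-broker rwwilden-broker broker-username broker-password https://rwwilden-broker.cfapps.io
```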

Binding and unbinding the service

With an updated service broker in place that supports binding we have finally reached the goal of this post: binding to and unbinding from the my-rwwilden service:

With the first command we bind the rwwilden-client application to the my-rwwilden service and give the binding a name: client-to-service-binding-rwwilden.

With the second command, cf env rwwilden-client, we check whether the credentials that the service broker provides when binding are actually injected into the rwwilden-client application environment. And sure enough, there is our ‘very secret connection string’.
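For reference, the two commands look roughly like this (the --binding-name flag requires a reasonably recent CF CLI):

```shell
# Bind the client app to the service instance, giving the binding a name.
cf bind-service rwwilden-client my-rwwilden --binding-name client-to-service-binding-rwwilden

# Show the app environment; the binding credentials appear under VCAP_SERVICES.
# (The running app only sees them after a restage.)
cf env rwwilden-client
```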

Conclusion

In the first post we implemented a service broker with a service catalog and (de)provisioning of a service. In this post we actually bound the service we created to an application and saw that the credentials the service broker returned when binding were injected into the application environment.

Until now, everything was happening in-memory and there was no actual service being provisioned. In the next post we will (de)provision and (un)bind an actual service, both still as blocking operations.

Implementing a Service Broker in .NET part 1: provisioning

I’m experimenting with writing a service broker in .NET Core that conforms to the Open Service Broker API specification. This is part 1 of a series of posts that explores all service broker details from a .NET perspective. All posts in the series:

  1. Marketplace and service provisioning (this post)
  2. Service binding
  3. Provisioning an Azure Storage account
  4. Binding an Azure Storage account

Setting the stage

In this first part, the goal is to write a service broker that advertises a single service in its catalog and supports blocking provisioning and deprovisioning of service instances.

We do not yet implement binding or unbinding; that is for a follow-up post.

A service broker allows a platform to provision service instances for applications you or someone else writes. The platform I chose to test my service broker on is Pivotal Cloud Foundry (PCF). A public implementation of this platform that is hosted and managed by Pivotal can be found at https://run.pivotal.io.

To tell the service broker what to do I will use the CF CLI which can be used to push applications to the platform, create service instances and bind them to applications (among a lot of other things).

All source code for this blog post can be found at: https://github.com/orangeglasses/service-broker-dotnet/tree/master.

OSBAPI library for .NET

Lucky for me, there is no need to implement the OSBAPI spec myself. An excellent open source OSBAPI client and server implementation already exists for .NET: https://github.com/AXOOM/OpenServiceBroker. The server library implements the entire OSBAPI interface and provides hooks you must implement to actually (de)provision, (un)bind, and fetch services and service instances.

When implementing a service broker for some underlying service, you have to make a choice between implementing synchronous or asynchronous (de)provisioning and (un)binding. If the platform (PCF) and the client (CF CLI) support it, requests to the service broker contain the accepts_incomplete=true parameter. This indicates that the platform supports polling the latest operation to check for completeness. In this case, both PCF and CF CLI support asynchronous operations.

If we want to make our service broker as generic as possible, we should implement the blocking version of the API because not all platforms support asynchronous provisioning. Therefore, for this post, we just implement IServiceInstanceBlocking. In a later post we’ll explore asynchronous provisioning.

Catalog

The starting point for any service broker is its catalog, exposed on the /v2/catalog endpoint. When using the OpenServiceBroker.NET library we need to implement the ICatalogService. For this post we start with a simple catalog:

src/broker/Lib/CatalogService.cs
private static readonly Task<Catalog> CatalogTask = Task.FromResult(
    new Catalog
    {
        Services =
        {
            // https://github.com/openservicebrokerapi/servicebroker/blob/v2.14/spec.md#service-object
            new Service
            {
                Id = ServiceId,
                Name = "rwwilden",
                Description = "The magnificent and well-known rwwilden service",

                // Since this service broker will not yet have support for binding services to applications,
                // these properties are set to false.
                Bindable = false,
                BindingsRetrievable = false,

                // This service broker will be used to provision instances so fetching them should also be supported.
                InstancesRetrievable = true,

                // No support yet for service plan updates.
                PlanUpdateable = false,

                Metadata = ServiceMetadata,

                Plans =
                {
                    new Plan
                    {
                        Id = BasicPlanId,
                        Name = "basic",
                        Description = "Basic plan",
                        Free = true,
                        Metadata = BasicPlanMetadata
                    }
                }
            }
        }
    });

public Task<Catalog> GetCatalogAsync() => CatalogTask;

A catalog has services with some properties and a service has plans, nothing fancy yet. We can already deploy the application that exposes this service catalog to PCF:

We now have a service catalog up-and-running at https://rwwilden-broker.cfapps.io/v2/catalog:
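Since basic authentication is only added later in this post, the catalog endpoint can at this point be inspected with a plain curl. The OSBAPI spec requires the version header on every request:

```shell
# Fetch the catalog; X-Broker-API-Version is mandatory per the OSBAPI spec.
curl -H "X-Broker-API-Version: 2.14" https://rwwilden-broker.cfapps.io/v2/catalog
```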

Service instances

Now that we have a catalog, we need a way to create services from it. For simplicity, we implement the blocking version of service instancing: IServiceInstanceBlocking and leave the asynchronous (deferred) version for a future post. Since we’re not actually provisioning anything yet, there is little to implement except some logging statements:

src/broker/Lib/ServiceInstanceBlocking.cs
public Task<ServiceInstanceProvision> ProvisionAsync(ServiceInstanceContext context, ServiceInstanceProvisionRequest request)
{
    _log.LogInformation(
        $"Provision - context: {{ instance_id = {context.InstanceId}, " +
                                $"originating_identity = {{ platform = {context.OriginatingIdentity.Platform}, " +
                                                          $"value = {context.OriginatingIdentity.Value} }} }}");
    _log.LogInformation(
        $"Provision - request: {{ organization_guid = {request.OrganizationGuid}, " +
                                $"space_guid = {request.SpaceGuid}, " +
                                $"service_id = {request.ServiceId}, " +
                                $"plan_id = {request.PlanId}, " +
                                $"parameters = {request.Parameters}, " +
                                $"context = {request.Context} }}");

    return Task.FromResult(new ServiceInstanceProvision());
}

public Task DeprovisionAsync(ServiceInstanceContext context, string serviceId, string planId)
{
    _log.LogInformation(
        $"Deprovision - context: {{ instance_id = {context.InstanceId}, " +
                                  $"originating_identity = {{ platform = {context.OriginatingIdentity.Platform}, " +
                                                            $"value = {context.OriginatingIdentity.Value} }} }}");
    _log.LogInformation($"Deprovision: {{ service_id = {serviceId}, planId = {planId} }}");

    return Task.CompletedTask;
}

Platform-to-service-broker authentication

We now have an application that implements a catalog and service (de)provisioning. However, the platform does not yet know that this application is a service broker. In PCF, we can use the cf create-service-broker command to do that. This command requires the name of the service broker, its url (https://rwwilden-broker.cfapps.io) and a username and password.

The username/password are required because communication between platform and broker is authenticated through basic authentication. So the platform and the broker share a secret (username/password) that allows them to communicate. ASP.NET Core does not support basic auth out-of-the-box, so we turn to the idunno.Authentication library. I’m not going to go into the details of configuring it; check out the Startup.cs class in the Git repo. One thing to take into account is that in PCF, the load balancer terminates SSL, so requests reach the app as plain HTTP. The idunno.Authentication library requires that you explicitly allow HTTP requests, since basic authentication over plain HTTP is normally a very bad idea.

The basic authentication password will of course not be hard-coded inside the application but will be read from an environment setting Authentication:Password. So after we push the application, we can use cf set-env to add the password to the environment.
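A sketch of that flow; note that ASP.NET Core maps double underscores in environment variable names to the colon separator, so Authentication__Password surfaces as Authentication:Password in configuration. The app name and password are placeholders:

```shell
# Set the basic-auth password as an environment variable on the broker app.
cf set-env rwwilden-broker Authentication__Password 's3cret-broker-password'

# Restage so the running app picks up the new environment.
cf restage rwwilden-broker
```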

Creating the broker

Now that we have the app up-and-running with a catalog, service (de)provisioning and basic authentication, we can create a service broker from it via cf create-service-broker:
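The command itself, with placeholder credentials; --space-scoped registers the broker only in the current org and space, which matches what we see in the marketplace below:

```shell
cf create-service-broker rwwilden-broker broker-username broker-password https://rwwilden-broker.cfapps.io --space-scoped
```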

Provisioning and deprovisioning a service

The service broker we deployed exposes the rwwilden service that should be visible in the marketplace:

And as you can see, there it is, at the bottom of the list of available services. Note that we made this a space-scoped service so it’s only available in the current org and space.

Next we can create a service and delete it again, and with that we have reached the goal of the first post in this series.

CF create service

We created a service of the type rwwilden in the basic plan and we named it my-rwwilden. Binding is not yet supported by this service so there are no bound apps.
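The corresponding commands, with the service and plan names taken straight from the catalog:

```shell
# Create an instance of the rwwilden service, basic plan, named my-rwwilden.
cf create-service rwwilden basic my-rwwilden

# Inspect the instance, then delete it again (-f skips the confirmation).
cf service my-rwwilden
cf delete-service my-rwwilden -f
```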

Use AsyncLocal with SimpleInjector for unobtrusive context

Suppose you have a component (an API controller, for instance) that has a dependency on another component. And you have all of this nicely configured using your favorite IoC (Inversion of Control) library: SimpleInjector. In most applications you also have some cross-cutting concerns like logging, validation or caching. Ideally, you do not want these cross-cutting concerns to influence the way you write your business code.

Let’s take a hypothetical situation where you want to correlate log messages across different components. So component A calls component B calls component C and you want to log each call and be able to see that these calls were part of the same operation. You need some correlation id.

Logging and this correlation id have nothing to do with your business logic so you do not want any logging statements in your business code and you definitely do not want to pass a ‘correlation id’ around everywhere. How to solve this?

One possible answer is: AsyncLocal<T>. It allows you to persist data across asynchronous control flows. Besides, the data you store in an AsyncLocal is local to the current asynchronous control flow. So if you have a web application that receives multiple simultaneous requests, each request sees its own async local value.

I illustrated this with a small project on Github. It contains a simple controller that returns a customer by id from some repository. The repository is injected as a dependency into the controller. The controller method also initializes an async local value:

AsyncLocal.SimpleInjector.Web/Controllers/CustomerController.cs
[HttpGet]
public async Task<Customer> Get(int id)
{
    // Set async local correlation id.
    var correlationId = Guid.NewGuid();
    _correlationContainer.SetCorrelationId(correlationId);

    // Call controller dependency (decorated by LoggingDecorator).
    var customer = await _customerService.GetCustomer(id);

    // Return values.
    return customer;
}

First I generate a new ‘correlation id’ (but this can be any value or object you like) and set it in a container, which I will show in a minute. Note that this correlation id does not have to be passed in the GetCustomer call.

The CorrelationContainer is a simple wrapper around an AsyncLocal<Guid>:

AsyncLocal.SimpleInjector.Web/Controllers/CorrelationContainer.cs
public class CorrelationContainer
{
    private readonly AsyncLocal<Guid> _correlationId = new AsyncLocal<Guid>();

    public void SetCorrelationId(Guid correlationId)
    {
        _correlationId.Value = correlationId;
    }

    public Guid GetCorrelationId()
    {
        return _correlationId.Value;
    }
}

This wrapper class is injected as a singleton dependency by Simple Injector so there is only one instance. However, the AsyncLocal takes care of providing each asynchronous control flow (in this example each web request) with its own value.

Finally we have a decorator that does our logging. Log messages should contain the correct correlation id.

AsyncLocal.SimpleInjector.Web/Controllers/LoggingDecorator.cs
public class LoggingDecorator : ICustomerService
{
    private readonly Func<ICustomerService> _decorateeFunc;
    private readonly ILogger<CustomerService> _logger;
    private readonly CorrelationContainer _correlationContainer;

    public LoggingDecorator(
        Func<ICustomerService> decorateeFunc,
        ILogger<CustomerService> logger,
        CorrelationContainer correlationContainer)
    {
        _decorateeFunc = decorateeFunc;
        _logger = logger;
        _correlationContainer = correlationContainer;
    }

    public async Task<Customer> GetCustomer(int customerId)
    {
        // Get async local correlation id and log.
        var correlationIdBeforeAwait = _correlationContainer.GetCorrelationId();
        _logger.LogWarning($"Getting customer by id {customerId} ({correlationIdBeforeAwait})");

        // Call decoratee.
        var decoratee = _decorateeFunc.Invoke();
        var customer = await decoratee.GetCustomer(customerId);

        // For demo purposes: get correlation id again after await and log.
        var correlationIdAfterAwait = _correlationContainer.GetCorrelationId();
        _logger.LogWarning($"Retrieved customer by id {customerId} ({correlationIdAfterAwait})");

        // Return values.
        return customer;
    }
}

It looks at the same singleton CorrelationContainer instance that CustomerController used for context information and logs some messages before and after calling the decoratee. Example log messages:

Note that the same correlation id is logged before and after the await in LoggingDecorator. And note that nowhere did we have to pass this correlation id as a parameter in our business APIs.

And as a final note: I used SimpleInjector to illustrate the usage of AsyncLocal in a decorator, but you can use this technique in many more situations, of course.