Implementing a Service Broker in .NET part 2: service binding

This is part 2 in a series of posts about writing service brokers in .NET Core. In the previous post we implemented the bare minimum: a catalog and a blocking implementation of (de)provisioning a service. In this post we will look at a blocking implementation of service (un)binding. All posts in the series:

  1. Marketplace and service provisioning
  2. Service binding (this post)

Setting the stage

As in the first post, we implement (parts of) the Open Service Broker API specification. We use the OpenServiceBroker .NET library that already defines all necessary endpoints and provides implementation hooks for binding and unbinding. We use Pivotal Cloud Foundry, hosted at https://run.pivotal.io, for testing our implementation and the CF CLI for communicating with the platform.

All source code for this blog post can be found at: https://github.com/orangeglasses/service-broker-dotnet/tree/master.

What to bind to

When we want to bind a service to an application, we need an actual application. So we implement a second (empty) .NET Core application.
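
The repository contains the actual client project; as a rough sketch (these commands are assumed, not taken from the repo), scaffolding and pushing an empty app looks like this:

$ dotnet new web -o src/client
$ dotnet publish src/client -c Release -o src/client/publish
$ cf push rwwilden-client -p src/client/publish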

We now have two applications: the service broker and a client application that we can bind to called rwwilden-client.

Updating the catalog

In the first post we introduced a service catalog that advertised the rwwilden service. We chose to make the service non-bindable because, at that time, service binding was not implemented. When we try to bind the service anyway, an error occurs:

So we need to update the catalog to advertise a bindable service:

src/broker/Lib/CatalogService.cs
private static readonly Task<Catalog> CatalogTask = Task.FromResult(
    new Catalog
    {
        Services =
        {
            // https://github.com/openservicebrokerapi/servicebroker/blob/v2.14/spec.md#service-object
            new Service
            {
                Id = ServiceId,
                Name = "rwwilden",
                Description = "The magnificent and well-known rwwilden service",

                // This service broker now has support for service binding so we will set this property to true.
                Bindable = true,
                BindingsRetrievable = false,

                // This service broker will be used to provision instances so fetching them should also be supported.
                InstancesRetrievable = true,

                // No support yet for service plan updates.
                PlanUpdateable = false,

                Metadata = ServiceMetadata,

                Plans =
                {
                    new Plan
                    {
                        Id = BasicPlanId,
                        Name = "basic",
                        Description = "Basic plan",
                        Bindable = true,
                        Free = true,
                        Metadata = BasicPlanMetadata
                    }
                }
            }
        }
    });

public Task<Catalog> GetCatalogAsync() => CatalogTask;

The only changes are the two Bindable properties, which are now set to true at both the service and the plan level. Note that just setting Bindable to true at the plan level would also have been enough: lower-level settings override higher-level ones.

Bind and unbind

The next step is to implement binding and unbinding. The OSBAPI spec defines four different types of binding: credentials, log drain, route service and volume service. For this post we will implement the most common one: credentials. Since our service broker does not have an actual backing service, this is quite simple. In real life you might have a MySQL service broker that, for example, creates database credentials during bind and returns a connection string that allows your application to access the database.

The OSBAPI server library I used in the previous post provides hooks for implementing blocking (un)binding in the form of the IServiceBindingBlocking interface, so we just need to implement the BindAsync and UnbindAsync methods:

src/broker/Lib/ServiceBindingBlocking.cs
public Task<ServiceBinding> BindAsync(ServiceBindingContext context, ServiceBindingRequest request)
{
    LogContext(_log, "Bind", context);
    LogRequest(_log, request);

    return Task.FromResult(new ServiceBinding
    {
        Credentials = JObject.FromObject(new
        {
            connectionString = "<very secret connection string>"
        })
    });
}

public Task UnbindAsync(ServiceBindingContext context, string serviceId, string planId)
{
    LogContext(_log, "Unbind", context);
    _log.LogInformation($"Unbind: {{ service_id = {serviceId}, plan_id = {planId} }}");

    return Task.CompletedTask;
}

As you can see, our bind implementation simply returns a JObject with a very secret connection string.

The final change to our code is to register the IServiceBindingBlocking implementation with the DI container:

src/broker/Startup.cs
services
    .AddTransient<ICatalogService, CatalogService>()
    .AddTransient<IServiceInstanceBlocking, ServiceInstanceBlocking>()
    .AddTransient<IServiceBindingBlocking, ServiceBindingBlocking>()
    .AddOpenServiceBroker();

Updating the service broker

When we push the new service broker application, the platform (PCF) does not yet know that the service broker has changed. So when we try to bind a service to an application, this still fails with the error: the service instance doesn’t support binding. To fix this, we can update the service broker using cf update-service-broker:
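
cf update-service-broker takes the same arguments as cf create-service-broker from the first post. A sketch, assuming the broker is registered under the name rwwilden:

$ cf update-service-broker rwwilden <username> <password> https://rwwilden-broker.cfapps.io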

Binding and unbinding the service

With an updated service broker in place that supports binding we have finally reached the goal of this post: binding to and unbinding from the my-rwwilden service:
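
A sketch of the two commands, with the app, service instance and binding name used below (the --binding-name flag requires a reasonably recent CF CLI and platform):

$ cf bind-service rwwilden-client my-rwwilden --binding-name client-to-service-binding-rwwilden
$ cf env rwwilden-client

Unbinding is the reverse operation:

$ cf unbind-service rwwilden-client my-rwwilden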

With the first command we bind the rwwilden-client application to the my-rwwilden service and give the binding a name: client-to-service-binding-rwwilden.

With the second command, cf env rwwilden-client, we check whether the credentials that the service broker provided during binding were actually injected into the rwwilden-client application environment. And there it is: our ‘very secret connection string’.
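
The credentials end up in the VCAP_SERVICES environment variable of the application. Trimmed down to the relevant fields, the cf env output looks roughly like this:

{
  "VCAP_SERVICES": {
    "rwwilden": [
      {
        "name": "my-rwwilden",
        "binding_name": "client-to-service-binding-rwwilden",
        "credentials": {
          "connectionString": "<very secret connection string>"
        }
      }
    ]
  }
}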

Conclusion

In the first post we implemented a service broker with a service catalog and (de)provisioning of a service. In this post we actually bound the service we created to an application and saw that the credentials the service broker returned when binding were injected into the application environment.

Both service provisioning and binding were implemented as blocking operations, which makes sense in the context of our in-memory service broker. However, often a service broker implements operations that take time, like provisioning a database server. In the next post we will look at how to implement a non-blocking version of service provisioning.

Implementing a Service Broker in .NET part 1: provisioning

I’m experimenting with writing a service broker in .NET Core that conforms to the Open Service Broker API specification. This is part 1 of a series of posts that explores all service broker details from a .NET perspective. All posts in the series:

  1. Marketplace and service provisioning (this post)
  2. Service binding

Setting the stage

In this first part, the goal is to write a service broker that exposes a service catalog and supports blocking provisioning and deprovisioning of service instances.

We do not yet implement binding or unbinding; that is for a follow-up post.

A service broker allows a platform to provision service instances for applications you or someone else writes. The platform I chose to test my service broker on is Pivotal Cloud Foundry (PCF). A public implementation of this platform that is hosted and managed by Pivotal can be found at https://run.pivotal.io.

To tell the service broker what to do I will use the CF CLI which can be used to push applications to the platform, create service instances and bind them to applications (among a lot of other things).

All source code for this blog post can be found at: https://github.com/orangeglasses/service-broker-dotnet/tree/master.

OSBAPI library for .NET

Lucky for me, there is no need to implement the OSBAPI spec myself. An excellent open source OSBAPI client and server implementation already exists for .NET: https://github.com/AXOOM/OpenServiceBroker. The server library implements the entire OSBAPI interface and provides hooks that you implement to actually (de)provision, (un)bind and fetch services and service instances.

When implementing a service broker for some underlying service, you have to make a choice between implementing synchronous or asynchronous (de)provisioning and (un)binding. If the platform (PCF) and the client (CF CLI) support it, requests to the service broker contain the accepts_incomplete=true parameter. This indicates that the platform supports polling the last operation endpoint to check whether an asynchronous operation has completed. In this case, both PCF and the CF CLI support asynchronous operations.

If we want to make our service broker as generic as possible, we should implement the blocking version of the API because not all platforms support asynchronous provisioning. Therefore, for this post, we just implement IServiceInstanceBlocking. In a later post we’ll explore asynchronous provisioning.

Catalog

The starting point for any service broker is its catalog, exposed on the /v2/catalog endpoint. When using the OpenServiceBroker.NET library we need to implement the ICatalogService. For this post we start with a simple catalog:

src/broker/Lib/CatalogService.cs
private static readonly Task<Catalog> CatalogTask = Task.FromResult(
    new Catalog
    {
        Services =
        {
            // https://github.com/openservicebrokerapi/servicebroker/blob/v2.14/spec.md#service-object
            new Service
            {
                Id = ServiceId,
                Name = "rwwilden",
                Description = "The magnificent and well-known rwwilden service",

                // Since this service broker will not yet have support for binding services to applications,
                // these properties are set to false.
                Bindable = false,
                BindingsRetrievable = false,

                // This service broker will be used to provision instances so fetching them should also be supported.
                InstancesRetrievable = true,

                // No support yet for service plan updates.
                PlanUpdateable = false,

                Metadata = ServiceMetadata,

                Plans =
                {
                    new Plan
                    {
                        Id = BasicPlanId,
                        Name = "basic",
                        Description = "Basic plan",
                        Free = true,
                        Metadata = BasicPlanMetadata
                    }
                }
            }
        }
    });

public Task<Catalog> GetCatalogAsync() => CatalogTask;

A catalog has services with some properties and a service has plans, nothing fancy yet. We can already deploy the application that exposes this service catalog to PCF:
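
A sketch of the deployment, assuming we publish the broker project and push the output (the exact push configuration is in the Git repo):

$ dotnet publish src/broker -c Release -o src/broker/publish
$ cf push rwwilden-broker -p src/broker/publish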

We now have a service catalog up-and-running at https://rwwilden-broker.cfapps.io/v2/catalog:
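
We can verify this with curl. The OSBAPI spec requires an X-Broker-API-Version header on every request; basic authentication is only needed once we add it later in this post:

$ curl -H "X-Broker-API-Version: 2.14" https://rwwilden-broker.cfapps.io/v2/catalog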

Service instances

Now that we have a catalog, we need a way to create services from it. For simplicity, we implement the blocking version of service provisioning, IServiceInstanceBlocking, and leave the asynchronous (deferred) version for a future post. Since we’re not actually provisioning anything yet, there is little to implement except some logging statements:

src/broker/Lib/ServiceInstanceBlocking.cs
public Task<ServiceInstanceProvision> ProvisionAsync(ServiceInstanceContext context, ServiceInstanceProvisionRequest request)
{
    _log.LogInformation(
        $"Provision - context: {{ instance_id = {context.InstanceId}, " +
                                $"originating_identity = {{ platform = {context.OriginatingIdentity.Platform}, " +
                                                          $"value = {context.OriginatingIdentity.Value} }} }}");
    _log.LogInformation(
        $"Provision - request: {{ organization_guid = {request.OrganizationGuid}, " +
                                $"space_guid = {request.SpaceGuid}, " +
                                $"service_id = {request.ServiceId}, " +
                                $"plan_id = {request.PlanId}, " +
                                $"parameters = {request.Parameters}, " +
                                $"context = {request.Context} }}");

    return Task.FromResult(new ServiceInstanceProvision());
}

public Task DeprovisionAsync(ServiceInstanceContext context, string serviceId, string planId)
{
    _log.LogInformation(
        $"Deprovision - context: {{ instance_id = {context.InstanceId}, " +
                                  $"originating_identity = {{ platform = {context.OriginatingIdentity.Platform}, " +
                                                            $"value = {context.OriginatingIdentity.Value} }} }}");
    _log.LogInformation($"Deprovision: {{ service_id = {serviceId}, plan_id = {planId} }}");

    return Task.CompletedTask;
}

Platform-to-service-broker authentication

We now have an application that implements a catalog and service (de)provisioning. However, the platform does not yet know that this application is a service broker. In PCF, we can use the cf create-service-broker command to do that. This command requires the name of the service broker, its url (https://rwwilden-broker.cfapps.io) and a username and password.

The username/password are required because communication between platform and broker is secured through basic authentication. So the platform and the broker share a secret (username/password) that allows them to communicate. ASP.NET Core does not support basic authentication out-of-the-box, so we turn to the idunno.Authentication library. I’m not going to go into the details of configuring this; check out the Startup.cs class in the Git repo. One thing to take into account is that in PCF, the load balancer terminates SSL: requests reach the app as plain HTTP. The idunno.Authentication library requires that you explicitly allow HTTP requests, since in the context of basic authentication, unencrypted HTTP is a very bad idea.

The basic authentication password will of course not be hard-coded inside the application but will be read from an environment setting Authentication:Password. So after we push the application, we can use cf set-env to add the password to the environment.
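
A sketch, assuming the broker app is called rwwilden-broker (on Linux-based platforms, ASP.NET Core also accepts a double underscore instead of the colon as configuration separator: Authentication__Password):

$ cf set-env rwwilden-broker Authentication:Password "<the shared secret>"
$ cf restage rwwilden-broker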

Creating the broker

Now that we have the app up-and-running with a catalog, service (de)provisioning and basic authentication, we can create a service broker from it via cf create-service-broker:
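
A sketch of the command, assuming we name the broker rwwilden (the --space-scoped flag makes the broker visible in the current space only, which matches what we see in the marketplace below):

$ cf create-service-broker rwwilden <username> <password> https://rwwilden-broker.cfapps.io --space-scoped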

Provisioning and deprovisioning a service

The service broker we deployed exposes the rwwilden service that should be visible in the marketplace:
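
Checking the marketplace is a single CF CLI command:

$ cf marketplace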

And as you can see, there it is, at the bottom of the list of available services. Note that we made this a space-scoped service so it’s only available in the current org and space.

Next we can create a service and delete it again and we have reached the goal of the first post of this series.
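
A sketch of the commands, with the service type and plan from the catalog above:

$ cf create-service rwwilden basic my-rwwilden
$ cf service my-rwwilden
$ cf delete-service my-rwwilden -f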


We created a service of the type rwwilden in the basic plan and we named it my-rwwilden. Binding is not yet supported by this service so there are no bound apps.

Use AsyncLocal with SimpleInjector for unobtrusive context

Suppose you have a component (an API controller, for instance) that has a dependency on another component. And you have all of this nicely configured using your favorite IoC (Inversion of Control) library: SimpleInjector. In most applications you also have some cross-cutting concerns like logging, validation or caching. Ideally, you do not want these cross-cutting concerns to influence the way you write your business code.

Let’s take a hypothetical situation where you want to correlate log messages across different components. So component A calls component B calls component C and you want to log each call and be able to see that these calls were part of the same operation. You need some correlation id.

Logging and this correlation id have nothing to do with your business logic so you do not want any logging statements in your business code and you definitely do not want to pass a ‘correlation id’ around everywhere. How to solve this?

One possible answer is: AsyncLocal<T>. It allows you to persist data across asynchronous control flows. Moreover, the data you store in an AsyncLocal is local to the current asynchronous control flow. So if you have a web application that receives multiple simultaneous requests, each request sees its own async local value.

I illustrated this with a small project on Github. It contains a simple controller that returns a customer by id from some repository. The repository is injected as a dependency into the controller. The controller method also initializes an async local value:

AsyncLocal.SimpleInjector.Web/Controllers/CustomerController.cs
[HttpGet]
public async Task<Customer> Get(int id)
{
    // Set async local correlation id.
    var correlationId = Guid.NewGuid();
    _correlationContainer.SetCorrelationId(correlationId);

    // Call controller dependency (decorated by LoggingDecorator).
    var customer = await _customerService.GetCustomer(id);

    // Return values.
    return customer;
}

First I generate a new ‘correlation id’ (but this can be any value or object you like) and set it in a container, which I will show in a minute. Note that this correlation id does not have to be passed in the GetCustomer call.

The CorrelationContainer is a simple wrapper around an AsyncLocal<Guid>:

AsyncLocal.SimpleInjector.Web/Controllers/CorrelationContainer.cs
public class CorrelationContainer
{
    private readonly AsyncLocal<Guid> _correlationId = new AsyncLocal<Guid>();

    public void SetCorrelationId(Guid correlationId)
    {
        _correlationId.Value = correlationId;
    }

    public Guid GetCorrelationId()
    {
        return _correlationId.Value;
    }
}

This wrapper class is injected as a singleton dependency by Simple Injector so there is only one instance. However, the AsyncLocal takes care of providing each asynchronous control flow (in this example each web request) with its own value.
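
For reference, here is a minimal sketch of what those registrations could look like; the type names come from the snippets in this post, but the actual composition root in the Github project may differ:

using SimpleInjector;
using SimpleInjector.Lifestyles;

var container = new Container();
container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();

// A single instance for the whole application; the AsyncLocal<Guid> inside it
// still gives every asynchronous control flow (each web request) its own value.
container.RegisterSingleton<CorrelationContainer>();

// Register the service and wrap it in the logging decorator. Simple Injector
// injects the Func<ICustomerService> decoratee factory automatically.
container.Register<ICustomerService, CustomerService>(Lifestyle.Scoped);
container.RegisterDecorator(typeof(ICustomerService), typeof(LoggingDecorator), Lifestyle.Scoped);

container.Verify();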

Finally we have a decorator that does our logging. Log messages should contain the correct correlation id.

AsyncLocal.SimpleInjector.Web/Controllers/LoggingDecorator.cs
public class LoggingDecorator : ICustomerService
{
    private readonly Func<ICustomerService> _decorateeFunc;
    private readonly ILogger<CustomerService> _logger;
    private readonly CorrelationContainer _correlationContainer;

    public LoggingDecorator(
        Func<ICustomerService> decorateeFunc,
        ILogger<CustomerService> logger,
        CorrelationContainer correlationContainer)
    {
        _decorateeFunc = decorateeFunc;
        _logger = logger;
        _correlationContainer = correlationContainer;
    }

    public async Task<Customer> GetCustomer(int customerId)
    {
        // Get async local correlation id and log.
        var correlationIdBeforeAwait = _correlationContainer.GetCorrelationId();
        _logger.LogWarning($"Getting customer by id {customerId} ({correlationIdBeforeAwait})");

        // Call decoratee.
        var decoratee = _decorateeFunc.Invoke();
        var customer = await decoratee.GetCustomer(customerId);

        // For demo purposes: get correlation id again after await and log.
        var correlationIdAfterAwait = _correlationContainer.GetCorrelationId();
        _logger.LogWarning($"Retrieved customer by id {customerId} ({correlationIdBeforeAwait})");

        // Return values.
        return customer;
    }
}

It looks at the same singleton CorrelationContainer instance that CustomerController used for context information and logs some messages before and after calling the decoratee. Example log messages:

Note that the same correlation id is logged before and after the await in LoggingDecorator. And note that nowhere did we have to pass this correlation id as a parameter in our business APIs.

And as a final note: I used SimpleInjector to illustrate the usage of AsyncLocal in a decorator, but you can of course use this in many more situations.

Tips for working with Azure Resource Manager (templates)

Azure Resource Manager (ARM) provides you with the means to describe the infrastructure for your Azure applications. This includes storage accounts, virtual machines, Azure SQL databases and a lot more. On a project I’m working on we’re using it to describe the layout of an Azure Service Fabric cluster but we decided to start using it for all resource groups.

The main reason is that this is the only way to guarantee consistency between environments in Azure. We have develop, acceptance and production environments and you want these to be as similar as possible. Using a parameterized ARM template, you can guarantee this.

Azure Resource Manager provides a form of desired state configuration. You describe what your infrastructure should look like and Azure Resource Manager makes it so. If you apply a change, Azure Resource Manager makes sure your infrastructure matches this change.

And now for some tips if you want to get started with ARM.

Tip 1: Download the automation template from the portal

The Azure portal provides a download link at the resource group level for the ARM template for that resource group, parameterized and all. The following screenshot tells you where to find it.

You can use this template directly for deployment to the resource group you just downloaded it from.

Tip 2: No need to create resources

It is often useful to create a resource in the portal to see what it will look like in a resulting ARM template. However, there is no need to actually create the resource. When you have configured your new resource, a VM for example, you can download an ARM template representing your new resource without actually creating it. In the screenshot below I have configured a new VM and in the confirmation page you see a download link.

Tip 3: Use Azure Resource Explorer to check existing resources

Azure Resource Explorer provides a view on all the resources in your subscription(s). Every resource and related resources are presented as JSON documents that you can inspect and even modify. Here’s a screenshot showing an Azure Storage account:

There’s already a lot to see here so I marked a few parts:

  1. By default, you can’t change anything in resource explorer. To make changes to resources, you must explicitly enable Read/Write mode.

  2. This is the absolute url of your resource in Azure Resource Manager. As you can see, it follows a certain hierarchy: subscription → resource group → resource type → name. All operations you can run on this resource use this url.

  3. Here you find Powershell or Azure CLI scripts for working with this resource.

  4. All actions you can perform on the resource are found here. The screenshot represents an Azure Storage account so you could, for example, perform a POST to the following url to retrieve the primary and secondary access keys: https://management.azure.com/subscriptions/4c70a177-b978-43f9-9fc0-1e50dd20271f/resourceGroups/horses-for-courses/providers/Microsoft.Storage/storageAccounts/rwwildenml/listKeys?api-version=2017-06-01 (see the sketch after this list).

  5. The JSON representation of the resource itself (this is different for each resource type of course). In this case, you could change the SKU or configure VNet protection (recently announced).

    There are some additional properties here that you could try to change, like the blob endpoint, but I never tried that and I don’t think it will work.
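
As promised in item 4, a sketch of invoking the listKeys action from Powershell with the AzureRM module, using the resource group and storage account from the screenshot:

Login-AzureRmAccount
Invoke-AzureRmResourceAction `
    -ResourceGroupName "horses-for-courses" `
    -ResourceType "Microsoft.Storage/storageAccounts" `
    -ResourceName "rwwildenml" `
    -Action "listKeys" `
    -ApiVersion "2017-06-01" `
    -Force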

Tip 4: Getting API version information

Every resource type you want to address via ARM has a list of supported API versions. These are always in the following format: yyyy-MM-dd with an optional -preview. For example: 2017-10-01 or 2015-05-01-preview.

How do you know what the supported API versions for a specific resource type are? You can ask the relevant resource provider either through Powershell or through Azure CLI. I’ll describe them both. In both cases I’d like to know the supported API versions for managing virtual networks via ARM.

Using Powershell, you can run the following commands:

Login-AzureRmAccount
$networkRP = Get-AzureRmResourceProvider -ProviderNamespace "Microsoft.Network"
$networkRP.ResourceTypes | where { $_.ResourceTypeName -eq "virtualNetworks" }

The result should look like this:

As you can see, we have a list of supported API versions and also the locations that support a specific resource type.

If you want to use Azure CLI, you can run the following script:

$ az login
$ az provider list --query \
  "[?namespace=='Microsoft.Network'].resourceTypes[] | [?resourceType=='virtualNetworks'].apiVersions[]"
[
  "2017-11-01",
  "2017-10-01",
  "2017-09-01",
  "2017-08-01",
  "2017-06-01",
  "2017-04-01",
  "2017-03-01",
  "2016-12-01",
  "2016-11-01",
  "2016-10-01",
  "2016-09-01",
  "2016-08-01",
  "2016-07-01",
  "2016-06-01",
  "2016-03-30",
  "2015-06-15",
  "2015-05-01-preview",
  "2014-12-01-preview"
]

The az provider list command lists the details for all Azure Resource Providers as JSON. You can use JMESPath queries to extract results from this JSON.

Tip 5: Azure Resource Explorer (raw)

For some resources you cannot use Azure Resource Explorer. If you want to go really low-level, you can visit https://resources.azure.com/raw/. For example, for a customer we have an OMS (Log Analytics) workspace. This workspace has a number of data sources that you cannot inspect in Azure Resource Explorer:

Apparently you have to apply a filter and Resource Explorer does not support that. So let’s turn to the raw version and see what we can do there:

We can now do a GET request with the required kind parameter and we see all data sources of the performanceCounter kind.

Besides GET requests, you can also perform all the other operations on your resources: PUT, POST, DELETE and PATCH.

Tip 6: Non-exportable resources

Sometimes when you download an automation template (tip 1), you get a warning message stating that some resource types cannot be exported yet and are not included in the template.

In this case, one of the resource types that cannot be exported is Microsoft.KeyVault/vaults/accessPolicies. I would actually like these access policies to be part of the ARM template, but we don’t know what they should look like because they weren’t exported. So what can we do?

There are several options:

  1. Check Azure Resource Explorer (tips 3 and 5). In the case of Key Vault access policies, we can easily find them there.
  2. Check the ARM template documentation. Information for Azure Key Vault can be found here.
  3. If both previous options do not provide an answer, you can also try checking out the Azure REST API. The syntax that describes resources is the same. And in the case of Key Vault access policies, you can find information here. A sketch of what the resulting template resource could look like follows after this list.
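
Combining those sources, a hedged sketch of an access policies resource in a template (the vaultName and principalObjectId parameters are hypothetical, and property names and api-version should be verified against the documentation):

{
  "type": "Microsoft.KeyVault/vaults/accessPolicies",
  "name": "[concat(parameters('vaultName'), '/add')]",
  "apiVersion": "2016-10-01",
  "properties": {
    "accessPolicies": [
      {
        "tenantId": "[subscription().tenantId]",
        "objectId": "[parameters('principalObjectId')]",
        "permissions": {
          "secrets": [ "get", "list" ]
        }
      }
    ]
  }
}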

Further tips…

… will be added to this post when I figure them out so stay tuned :)

Scripting an Azure API Management Version Set via Azure REST API in Powershell

Azure API Management recently announced the general availability of a new feature called Versions and Revisions. Versions allow you to group multiple versions of your API, revisions allow controlled, safe and testable API changes. Here is another post that explains things in more detail.

The documentation is still a little behind and in some cases not even correct, so I hope this post helps in creating versioned APIs from scratch.

First of all, why would we use the Azure REST API for this and not the Azure portal or maybe Powershell? Three reasons:

  1. The portal does not yet allow you to create API version sets from scratch. You can create an API and add a new version to it later, but then you get an ‘original’ API and the new version you added. This is not quite the same, because for your original API you cannot provide a versioning scheme (path, header or query string) or names for the header or query string parameter.
  2. Azure Powershell does not have support for API Version Sets (yet).
  3. In any professional environment you should not (in my opinion) have people clicking around in the Azure portal, adding, modifying and deleting resources. Especially when you have multiple environments (e.g. DEV, PROD) you want a repeatable deployment process between environments. Scripting your resources using the Azure REST API (or Azure Powershell or ARM templates) is a good way to prevent differences between environments.

I suppose that Powershell and (better) Azure portal support for API Version Sets will become available in the near future but until then, this post is a detailed guide to get you started with this nice addition to API Management.

Getting an access token

If we want to use the Azure REST API, we need a JWT token. I already wrote about how to get a token when using Powershell so for that I direct you to a previous post.

Creating API Version Set

There isn’t actually any documentation yet on creating an API Version Set using the REST API. I got the necessary details from this post and a lot of trial and error. The first step is actually creating the API Version Set:

# Create the body of the PUT request to the REST API.
$versionSetDisplayName = "My version set"

$createOrUpdateApiVersionSetBody = @{
    name = $versionSetDisplayName
    versioningScheme = "Header"
    versionHeaderName = "X-Api-Version"
}

# Send PUT request to the correct endpoint for creating an API Version Set.
$subscriptionId = "01234567-89ab-cdef-0123-456789abcdef"
$resourceGroupName = "MyResourceGroup"
$apimServiceName = "myapiminstance"
$apimVersionSetName = "my-version-set"
$apimApiVersion = "2018-01-01"

$apiVersionSet = Invoke-RestMethod `
    -Method Put `
    -Uri ("https://management.azure.com/subscriptions/" + $subscriptionId +
          "/resourceGroups/" + $resourceGroupName +
          "/providers/Microsoft.ApiManagement/service/" + $apimServiceName +
          "/api-version-sets/" + $apimVersionSetName +
          "?api-version=" + $apimApiVersion) `
    -Headers @{ Authorization = ("Bearer " + $accessToken)
                "Content-Type" = "application/json" } `
    -Body ($createOrUpdateApiVersionSetBody | ConvertTo-Json -Compress -Depth 3)

Write-Host ("Created or updated APIM version set: " +
            $apiVersionSet.properties.displayName +
            " (" + $apiVersionSet.id + ")")

First we create the body of the PUT request. Note that the name you specify in the body is the display name and not the name of the API Version Set. I specify the Header versioning scheme, so I have to specify a header name as well.

Then a PUT request is executed via the Invoke-RestMethod Powershell cmdlet. This can be broken down into the following steps:

  • construct the url that we need to PUT to, which results in: https://management.azure.com/subscriptions/01234567-89ab-cdef-0123-456789abcdef/resourceGroups/MyResourceGroup/providers/Microsoft.ApiManagement/service/myapiminstance/api-version-sets/my-version-set?api-version=2018-01-01
  • specify the request headers, including the Authorization header with the access token
  • pass the body we created as compressed JSON

We now have an API version set that we can use when creating a new versioned API. You do not see this version set anywhere in the portal, so the only way to check what happened is through another REST API request:

$apiVersionSet = Invoke-RestMethod `
    -Method Get `
    -Uri ("https://management.azure.com/subscriptions/" + $subscriptionId +
          "/resourceGroups/" + $resourceGroupName +
          "/providers/Microsoft.ApiManagement/service/" + $apimServiceName +
          "/api-version-sets/" + $apimVersionSetName +
          "?api-version=" + $apimApiVersion) `
    -Headers @{ Authorization = ("Bearer " + $accessToken)
                "Content-Type" = "application/json" }

The resulting object is a PSCustomObject with properties like id, name and properties.

Creating the first version of the API

Now that we have an API Version Set we can add our first API version to it. This must be done with another PUT request as follows:

# Create body for PUT request to create new API.
$apiDescription = "My wonderful API"
$apiDisplayName = "My wonderful API"
$apiPath = "my-api"
$backendServiceUrl = "https://my-backend-api.com"
$createOrUpdateApiBody = @{
    properties = @{
        description = $apiDescription
        apiVersion = "v1"
        apiVersionSetId = $apiVersionSet.id
        displayName = $apiDisplayName
        path = $apiPath
        protocols = , "https"
        serviceUrl = $backendServiceUrl
    }
}

# Send PUT request for creating/updating a versioned API.
$apimApiId = "my-api-v1"
$restApi = Invoke-RestMethod `
    -Method Put `
    -Uri ("https://management.azure.com/subscriptions/" + $subscriptionId +
          "/resourceGroups/" + $resourceGroupName +
          "/providers/Microsoft.ApiManagement/service/" + $apimServiceName +
          "/apis/" + $apimApiId +
          "?api-version=" + $apimApiVersion) `
    -Headers @{ Authorization = ("Bearer " + $accessToken)
                "Content-Type" = "application/json" } `
    -Body ($createOrUpdateApiBody | ConvertTo-Json -Compress -Depth 3)

The idea is the same as for the API version set:

  • first we create a body for the PUT request that represents our new API; the apiVersion property specifies the version identifier for this version of the API and apiVersionSetId refers to the API version set we created earlier
  • then we send a PUT request to the apis endpoint to create the new API

The result

When we take a look in the portal, we can now see what happened. We have an API Version Set that contains all versions of the API:

And a v1 version of the API that is a part of the version set:

So that is how you create versioned APIs in Azure API Management :)