One of the services in Azure that I enjoy most lately is Azure Functions. Functions are great for writing the small, single-purpose type of application that I write a lot nowadays. However, last week I was struggling with the configuration of bindings using an Azure Key Vault and I thought I’d share how to fix that.

When you create a new Azure Function for writing a message to a queue every minute, you might end up with something like the code below.
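Something along the following lines; a minimal sketch assuming a Service Bus queue, in which the class name and setting names are only illustrative:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

public class QueueWriterFunction
{
    private readonly IConfiguration _configuration;
    private readonly ILogger<QueueWriterFunction> _logger;

    // Dependencies come in through the constructor, no statics needed.
    public QueueWriterFunction(IConfiguration configuration, ILogger<QueueWriterFunction> logger)
    {
        _configuration = configuration;
        _logger = logger;
    }

    [FunctionName("QueueWriterFunction")]
    public void Run(
        // Runs every minute.
        [TimerTrigger("0 */1 * * * *")] TimerInfo timer,
        // The queue name is resolved from the app setting between %-signs,
        // the connection string from the setting named in Connection.
        [ServiceBus("%queueName%", Connection = "connectionString")] out string message)
    {
        _logger.LogInformation("Writing a message to the queue");
        message = $"Ping at {DateTime.UtcNow:O}";
    }
}
```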

As you can see, I am using Functions V2 and the new approach to dependency injection using constructors and non-static functions. And this works great! Not being in a static context anymore is highly satisfying for an OOP programmer and it also means that I can retrieve dependencies like my logger and configuration through the constructor.

One of the things I am doing with my Function is pulling the name of the queue and the connection string for that queue from my application settings. For a connection string this is the default behavior, and for the name of a queue or topic I can do the same by using a variable name enclosed in %-signs. After adding the correct three settings to my application settings, this function runs fine locally. My IConfiguration instance is automatically built and filled by the Functions runtime, and my queueName and connectionString settings live in my local.settings.json.
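Locally, those settings live in local.settings.json; a rough sketch, where the setting names are examples that have to match the names used in the binding:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "queueName": "my-queue",
    "connectionString": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=..."
  }
}
```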

The problem comes when trying to move this function to the cloud. There I do not have a local.settings.json, nor do I want to have secrets in my application settings, the default location for the Functions runtime to pull its settings from. What I want to do is use an Azure Key Vault for storing my secrets and load any secrets from there.

It might be that my Google-fu has been failing, but unfortunately I have not found any hook or method that allows loading extra configuration values for an Azure Function. Integrating with the runtime was important to me, since I also wanted to pull values that configure my Function itself (like the queue name in the binding) from the configuration, not just configuration that is used inside my function.

Anyhow, what I ended up doing after a while of searching was the following:
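What follows is a minimal sketch of that idea rather than the literal code: a FunctionsStartup class that layers the Key Vault secrets on top of the configuration the runtime has already built and then swaps out the registered IConfiguration. The package choices (Microsoft.Azure.Functions.Extensions, Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets) and the keyVaultUrl setting name are assumptions for illustration:

```csharp
using System;
using Azure.Identity;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Grab the configuration the Functions runtime has already built.
            var currentConfiguration = builder.Services
                .BuildServiceProvider()
                .GetRequiredService<IConfiguration>();

            // The Key Vault URL itself is a non-secret application setting.
            var keyVaultUrl = currentConfiguration["keyVaultUrl"];

            // Layer the Key Vault secrets on top of the existing configuration.
            // DefaultAzureCredential uses the Managed Identity when running in Azure
            // and falls back to developer credentials locally.
            var newConfiguration = new ConfigurationBuilder()
                .AddConfiguration(currentConfiguration)
                .AddAzureKeyVault(new Uri(keyVaultUrl), new DefaultAzureCredential())
                .Build();

            // Replace the registered IConfiguration with the extended one.
            builder.Services.Replace(
                ServiceDescriptor.Singleton<IConfiguration>(newConfiguration));
        }
    }
}
```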

The solution above will, when running in the cloud, use the Managed Identity of the Function App to pull the values from the Key Vault and append them to the configuration. It works like a charm; however, it feels a bit hacky to override the existing configuration this way. If you find a better way, please do let me know!

Last week I had the pleasure of attending the 4DotNet event in Zwolle, The Netherlands. Next to catching up with old friends, I enjoyed presenting and listening to two other talks.

Pat Hermens | Learning from Failure – Finding the ‘second story’

The first speaker up was Pat Hermens. Pat talked to us about failure. He went over three examples, exploring how a catastrophic event came to be and continuously returning to the question: was this a failure [by someone]? His point was that the answer was “no” in all of these cases. Instead of focusing on what we believe, in hindsight, was a mistake or human error, we should focus on the circumstances that made such an error even possible. Assuming no ill intent, no one wants a nuclear meltdown or a space shuttle crash to occur, yet they happened while everyone involved believed they were making correct decisions. Focusing on the second story, the circumstances or culture that allowed the wrong decision to look like a good decision, is the way forward in his opinion.

Patrick Schmidt | Valkuilen bij het maken van high performance applicaties (pitfalls when building high-performance applications)

Next up was Patrick Schmidt. Patrick talked about some .NET internals and explained how you can still create memory leaks in a managed language. He showed how creating a lambda function that closes over a large object can result in a memory leak. He then moved on to explain some garbage collector internals and how incorrect usage of object creation and destruction can ruin your performance. Of course the prime example of string concatenation vs. the StringBuilder came along here. Finally, he talked about some pitfalls when using Entity Framework: the N+1 problem and how you can accidentally download a whole table and do the selection in memory by mixing up IQueryable and IEnumerable.

Henry Been | Logging, instrumentation, dashboards, alerts and all that – for developers

For the final session I had the privilege of presenting myself. In this session I shared what I have learned about monitoring and logging over the last year, using Azure Monitor in a number of applications. The slidedeck for this session can be downloaded. If you are looking for an example application to try things out yourself, you can continue working with the example I showed during the talk.


If you have read any of my blogs before, or know me only a little bit, you know I am a huge fan of ARM templates for Azure Resource Manager. However, every now and then I run into some piece of infrastructure that I would like to set up for my application, only to find out that it is not supported by ARM templates. Examples are Cosmos DB databases and collections. Having to createIfNotExists() these was always a pain to code and it also mixes up the responsibility of resource allocation with business logic. But no more: as part of all the #MSBuild news, the following came in!

As of right now, you can specify the creation of a CosmosDB database and collection using ARM templates. To create a CosmosDB database for use with the SQL API, you can now use the following template:
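A sketch of what such a database resource can look like, assuming accountName and databaseName parameters; the apiVersion may need updating:

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases",
  "apiVersion": "2021-04-15",
  "name": "[concat(parameters('accountName'), '/', parameters('databaseName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]"
  ],
  "properties": {
    "resource": {
      "id": "[parameters('databaseName')]"
    },
    "options": {
      "throughput": 400
    }
  }
}
```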

After setting up a database, it is time to add a few containers. In this case I already provisioned throughput at the database level, so I can add as many containers as I need without additional cost. But, let’s start with just one:
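A container with a partition key and a custom indexing policy might then look roughly like this; again, the names, paths and apiVersion are illustrative:

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
  "apiVersion": "2021-04-15",
  "name": "[concat(parameters('accountName'), '/', parameters('databaseName'), '/', parameters('containerName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases', parameters('accountName'), parameters('databaseName'))]"
  ],
  "properties": {
    "resource": {
      "id": "[parameters('containerName')]",
      "partitionKey": {
        "paths": [ "/partitionKey" ],
        "kind": "Hash"
      },
      "indexingPolicy": {
        "indexingMode": "consistent",
        "includedPaths": [ { "path": "/*" } ],
        "excludedPaths": [ { "path": "/_etag/?" } ]
      }
    }
  }
}
```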

I can not only create the container and specify the now mandatory PartitionKey, but also specify custom indexing policies. Putting this together with the template that I already had for creating a CosmosDB account, I can now automatically create all the dependencies for my application using the following ARM template:
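The remaining piece is the CosmosDB account itself; a condensed sketch is shown below. In a complete template, this account resource and the database and container resources from the snippets above all sit together in the same resources array, with parameters for accountName, databaseName and containerName:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "accountName": { "type": "string" },
    "databaseName": { "type": "string" },
    "containerName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2021-04-15",
      "name": "[parameters('accountName')]",
      "location": "[resourceGroup().location]",
      "kind": "GlobalDocumentDB",
      "properties": {
        "databaseAccountOfferType": "Standard",
        "locations": [
          {
            "locationName": "[resourceGroup().location]",
            "failoverPriority": 0
          }
        ]
      }
    }
  ]
}
```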

I hope you enjoy CosmosDB database and collection support just as much as I do, happy coding!

Following up on my previous post on this subject (https://www.henrybeen.nl/add-a-ssl-certificate-to-your-azure-web-app-using-an-arm-template/), I am sharing a minimal but complete working example of an ARM template that can be used to provision the following:

  • An App Service Plan
  • An App Service, with:
    • A custom domain name
    • The Lets Encrypt Site extension installed
    • All configuration of the Lets Encrypt Site extension prefilled
  • An Authorization Rule for a Service Principal to install certificates

The ARM template can be found at: https://github.com/henrybeen/ARM-template-AppService-LetsEncrypt

To use this to create a Web App with a Lets Encrypt certificate that renews automatically, you have to do the following:

  • Pre-create a new Service Principal in your Azure Active Directory and obtain the objectId, clientId and a clientSecret for that Service Principal
  • Fill in the parameters.json file with a discriminator to make the names of your resources unique, the obtained objectId, clientId and clientSecret, a self-chosen GUID to use as the authorizationRule name, and a customHostname (a sketch of this file follows below this list)
  • Create a CNAME record pointing from that domain name to the following URL: {discriminator}-appservice.azurewebsites.net
  • Roll out the template
  • Open up the Lets Encrypt extension, find all settings prefilled and request a certificate!
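For reference, the parameters file will look roughly like the sketch below. The exact parameter names are defined in the template in the repository, so double-check them there; the values shown are placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "discriminator": { "value": "hb2019" },
    "objectId": { "value": "<objectId of the Service Principal>" },
    "clientId": { "value": "<clientId of the Service Principal>" },
    "clientSecret": { "value": "<clientSecret of the Service Principal>" },
    "authorizationRuleName": { "value": "<a self-chosen GUID>" },
    "customHostname": { "value": "www.example.com" }
  }
}
```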

In response to a comment / question on an earlier blog, I have taken a quick look at applying Azure Policy to Azure Management Groups. Azure Management Groups are a relatively new concept that was introduced to ease the management of authorizations on multiple subscriptions, by providing a means to group them. For such a group, RBAC roles and assignments can be created to manage authorizations for a group of subscriptions at once. This saves a lot of time when managing multiple subscriptions, but also reduces the risk of mistakes or oversight of a single assignment. A real win.

Now, it is claimed here that Azure Policies can also be defined in and assigned to management groups. However, how to do that is not documented yet (to my knowledge and limited Goo– Bing skills), nor was it visible in the portal. So after creating a management group in the portal (which I had not done before), I turned to Powershell and wrote the following to try and do a Policy assignment to a Management Group:
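It went roughly along these lines, assuming the AzureRM module; the policy and management group names are examples:

```powershell
# Look up an existing policy definition (saved in my subscription)
$policyDefinition = Get-AzureRmPolicyDefinition |
    Where-Object { $_.Properties.displayName -eq 'Audit virtual machines' }

# Try to assign it to a management group instead of a subscription or resourcegroup
New-AzureRmPolicyAssignment `
    -Name 'audit-vms-on-mg' `
    -PolicyDefinition $policyDefinition `
    -Scope '/providers/Microsoft.Management/managementGroups/my-management-group'
```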

Which gave me the following error:

Which makes sense: you can only assign a policy to (a resourcegroup in) a subscription if that is also the subscription the policy definition is saved in. So on to find out how to define a policy within a management group. To do that, I first wanted to retrieve an existing policy from the portal, so I navigated to the Azure Policy page in the portal and stumbled onto the following screen:

And from here on, I could just assign a policy to a management group, if only I already had a policy definition in that group… After switching to defining a policy, I noticed that I could now also save a policy definition to a management group.

So the conclusion is: yes, you can assign Azure Policies to Management Groups just like you can to a resource group or subscription, iff you already have at least one management group!

Further reading

Now, denying unwanted configurations is fine and can be a great help, but would it not be much better if we could automatically fix unwanted configurations when they are deployed? Yes…, although this has pros and cons. Automagically correcting errors is not always the best way forward, as the team member deploying the unwanted configuration does not learn anything from it. On the other hand, if you can fix it automatically, why not? Just weigh your options on a case-by-case basis, I guess.

Let’s take an example from the database world. Say we have a requirement that an IP address must be added to the firewall of every single database server in our subscription. The policy that allows us to specify an IP address to add to the firewall of every database server is as follows:
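What follows is a sketch of such a policy definition rather than the literal policy: the names are free to choose, the apiVersion of the inner firewall rule resource may need updating, and for brevity the roleDefinitionIds list contains the built-in Contributor role (b24988ac-6180-42a0-ab88-20f7382dd24c), where a narrower SQL role would be more appropriate:

```json
{
  "type": "Microsoft.Authorization/policyDefinitions",
  "name": "deploy-sql-firewall-rule",
  "properties": {
    "displayName": "Deploy a firewall rule on every SQL server",
    "description": "Adds a firewall rule for the given IP address to every SQL server",
    "parameters": {
      "ipAddress": {
        "type": "String",
        "metadata": {
          "displayName": "IP address",
          "description": "The IP address to add to the firewall of every database server"
        }
      }
    },
    "policyRule": {
      "if": {
        "field": "type",
        "equals": "Microsoft.Sql/servers"
      },
      "then": {
        "effect": "deployIfNotExists",
        "details": {
          "type": "Microsoft.Sql/servers/firewallRules",
          "roleDefinitionIds": [
            "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
          ],
          "existenceCondition": {
            "field": "Microsoft.Sql/servers/firewallRules/startIpAddress",
            "equals": "[parameters('ipAddress')]"
          },
          "deployment": {
            "properties": {
              "mode": "incremental",
              "template": {
                "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
                "contentVersion": "1.0.0.0",
                "parameters": {
                  "serverName": { "type": "string" },
                  "ipAddress": { "type": "string" }
                },
                "resources": [
                  {
                    "type": "Microsoft.Sql/servers/firewallRules",
                    "apiVersion": "2015-05-01-preview",
                    "name": "[concat(parameters('serverName'), '/allow-fixed-ip')]",
                    "properties": {
                      "startIpAddress": "[parameters('ipAddress')]",
                      "endIpAddress": "[parameters('ipAddress')]"
                    }
                  }
                ]
              },
              "parameters": {
                "serverName": {
                  "value": "[field('name')]"
                },
                "ipAddress": {
                  "value": "[parameters('ipAddress')]"
                }
              }
            }
          }
        }
      }
    }
  }
}
```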

Quite the JSON. Let’s walk through this step by step. First of all, we have the conditions under which this policy must apply: in this case, whenever we are deploying something of type Microsoft.Sql/servers. The effect we are looking for is deployIfNotExists, which will do an additional ARM template deployment whenever the existenceCondition is not fulfilled. This template takes the same form as any nested template, which means we have to respecify all parameters to the template and provide them from the parameters of the policy or using field values.

Managed Identity for template deployment

Every ARM template deployment is done on behalf of an authenticated identity. When using any effect that causes another deployment, you have to create a Managed Identity when assigning the policy. In the policy property roleDefinitionIds you should list all roles that are needed to deploy the defined template. When assigning this policy to a subscription or resourcegroup and executing it, Azure will automatically provision a service principal with these roles over the correct scope, which will be used to deploy the specified template.

Field() function and aliases

In the template itself (and also when passing parameters to the template), a function called field() is used. With this function you can reference one or more properties of the resource that is triggering the policy. To see all available fields per resource, use the Get-AzureRmPolicyAlias Powershell command. This will provide a list of all available aliases. To filter this list by namespace, you can use:
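For example, to list only the aliases of the Microsoft.Sql resource provider (assuming the AzureRM-era cmdlet; the Az module equivalent is Get-AzPolicyAlias):

```powershell
# List only the aliases of the Microsoft.Sql resource provider
Get-AzureRmPolicyAlias -NamespaceMatch 'Microsoft.Sql'
```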

Policy in action

After creating and assigning this policy, we can see it in action by creating a SQL Server using the portal, which is just another interface for creating a template deployment. After this template has been deployed successfully, our policy is evaluated and, since the intended firewall rule does not exist yet, it is created. We can see this when looking up all deployments to the resource group, which will also list a PolicyDeployment:

Next to that, when looking at the related events for the initial Microsoft.SQLServer deployment, we see that our policy is accepted for deployment after the initial deployment:

In my previous blog post I showed how to audit unwanted Azure resource configurations. Now, listing non-compliant resources is fine and can be a great help, but would it not be much better if we could just prevent unwanted configurations from being deployed?

Deny any deployment not in Europe

Let’s take the example of Azure regions. If you are working for an organization that wants to operate just within Europe, it might be best to simply deny any deployment outside of West and North Europe. This will help prevent mistakes and help enforce rules within the organization. How would we write such a policy?

Any policyRule still consists of two parts, an if and a then. First, the if, which acts like a query on our subscription or resourcegroup and determines to which resources the policy will be applied. In this case we want to trigger our policy under the following condition: location != ‘North Europe’ && location != ‘West Europe’. We write this in JSON by nesting conditions. This syntax is a bit verbose, but easy to understand. The effect that we want to trigger is a simple deny of any deployment that matches this condition. In JSON, this would look like this:
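A sketch of such a policyRule; note that the policy engine uses the internal region names (northeurope, westeurope), and in practice you may also want to allow ‘global’ for resources without a region:

```json
{
  "if": {
    "allOf": [
      {
        "field": "location",
        "notEquals": "northeurope"
      },
      {
        "field": "location",
        "notEquals": "westeurope"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```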

Creating a policy like this and then applying it to a subscription or resourcegroup will instruct the Azure Resource Manager to immediately deny any deployment that violates the policy. With the following as a result:

Azure Policy is also evaluated when deploying from Azure Pipelines, where you will also get a meaningful error when trying to deploy any template that violates a deny policy:

After sliced bread, the next best thing is of course Infrastructure-as-Code. In Azure, this means using ARM templates to deploy your infrastructure automatically in a repeatable fashion. It allows teams to quickly create anything they want in any Azure resourcegroup they have access to. Azure allows you to use RBAC to limit the Resourcegroup(s) a team or individual has access to. But is this secure enough? Yes… and no. Yes, you can limit everyone to the resource(group)s they have access to. However, within that group they can still do whatever they please. Wouldn’t it be cool to also be able to limit what a team can do, even within their own resourcegroup?

This is the first of a series of posts on Azure Policy, an Azure offering that allows you to define policies that allow or disallow specific configurations within your Azure subscription. In this post we will see how to define policies that inform us when a situation exists that we might rather not have. To be more concrete: we want to create an audit log entry whenever someone deploys a virtual machine. I am not a big fan of them and I want to know who is using VMs, to get a conversation going about how to move them to PaaS or serverless offerings.

Create a policy

Let’s start by writing our policy. The policy as a whole will look like this:
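A minimal sketch of such a definition; the name, displayName and description are free to choose:

```json
{
  "type": "Microsoft.Authorization/policyDefinitions",
  "name": "audit-virtual-machines",
  "properties": {
    "displayName": "Audit virtual machines",
    "description": "This policy audits the deployment of any virtual machine",
    "policyRule": {
      "if": {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      "then": {
        "effect": "audit"
      }
    }
  }
}
```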

Every policy has a property type that always has to have the same value: Microsoft.Authorization/policyDefinitions. If you are familiar with ARM templates, you might guess that this allows the policy as a whole to be inserted into an ARM template. And this is correct: you can deploy Azure Policies as part of a subscription level ARM template. Next to the type, a name is mandatory and can be chosen freely. The third property describes the policy itself. The displayName and description speak for themselves and can be chosen freely. For the policyRule there are numerous possible approaches, but let’s start with a simple condition under the if property that ensures that this policy’s effect only triggers when it encounters any resource with a type of Microsoft.Compute/virtualMachines. Again, this relates to the resource type also encountered in ARM templates and references a resource provider namespace. Finally, the effect that we want to trigger is only an audit of anything that matches the if expression.

This way we can view the compliance state of the resource this policy is assigned to and also see all events that violate this policy.

Create the policy

Before we can assign this policy to a subscription or resourcegroup, we have to define the policy itself. Let’s go to the portal and open up Azure Policy, then choose Definitions and finally new Policy Definition:

This opens up a new screen, that we fill in as shown here:

Finally, clicking Save at the bottom of the page, registers the policy in the subscription at the top and makes it ready for use.

Assign the policy

Now let’s assign this policy to a scope (a subscription or a specific resourcegroup) that it should apply to. Again we open up Azure Policy and then go to the Assignments tab, where we click Assign Policy:

This opens up a new view:

In this view we must select the scope, including any exclusions if we want those. (Why would you?) Then you can select the Policy to apply, which you can search by name as I did, or find in the category Custom to the left of the search box. After selecting the Policy, an assignment name is mandatory and a description is optional. Filling these with a rationale for your policy will decrease the chance that others will simply remove it. Hit assign and put Azure to work on your policy assignment.

Inspect the results

After first assigning our policy, its compliance state will be set to Not started, which means that the policy has not been evaluated against existing resources yet. It will, however, be evaluated for every new resource that is deployed. After a while the compliance state of my policy changed from Not started to Non-compliant, indicating there were one or more resources violating the policy.

This screenshot is taken from the Compliance overview tab, listing all my policies and whether my resources are compliant or not. Clicking on one of the policies, shows the list of resources not meeting the policy:

Here it shows both a VM that existed before this policy was assigned (hbbuild001) and a VM that was created after assigning the policy (sadf…).

Using this overview and the option to drill down to violating resources, it is easy to inspect what is happening across your subscriptions and to get conversations going when things are happening that are not ideal.

Get notified on a violation

However, visiting these screens regularly to keep tabs on what is happening is probably not feasible in any real organization. It would be far better if you were notified of any violation of a policy automatically. Luckily, this is possible as well. If you were to click on Events in the previous screenshot, this view will open:

Here you can see all the events surrounding this Policy: its creation, but also every violation. Opening up the details of a violation allows you to use this event as a blueprint for creating alerts for any new violation.

For more

If you are looking for more things to do with Azure Policy, check out the links below. Also, I hope to write another blog on Azure Policy to show how to block or automatically fix deployments that violate your policies.

Resources:

In an earlier post on provisioning a Let’s Encrypt SSL certificate for a Web App, I touched upon the subject of creating an RBAC Role Assignment using an ARM template. In that post I said that I wasn’t able to provision a Role Assignment for just a single resource (as opposed to a whole Resourcegroup). This week I found out that this was due to an error on my side. The template for provisioning an Authorization Rule for just a single resource differs from that for provisioning a Rule for a whole Resourcegroup.

Here is the correct JSON for provisioning a Role Assignment for a single resource:
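A sketch of what this looks like for an App Service, assuming parameters for the App Service name, the GUID to use as the assignment name and the objectId of the Service Principal, plus a variable holding the GUID of the role definition to grant (for example b24988ac-6180-42a0-ab88-20f7382dd24c for the built-in Contributor role); the apiVersion may need updating:

```json
{
  "type": "Microsoft.Web/sites/providers/roleAssignments",
  "apiVersion": "2018-09-01-preview",
  "name": "[concat(parameters('appServiceName'), '/Microsoft.Authorization/', parameters('appServiceContributerRoleGuid'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('appServiceName'))]"
  ],
  "properties": {
    "roleDefinitionId": "[concat(subscription().id, '/providers/Microsoft.Authorization/roleDefinitions/', variables('roleDefinitionGuid'))]",
    "principalId": "[parameters('servicePrincipalObjectId')]"
  }
}
```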

As Ohad correctly points out in the comments, the appServiceContributerRoleGuid should be a unique GUID generated by you. It does not refer back to the GUID of any predefined role.

In contrast, find below the JSON for provisioning an Authorization Rule for a Resourcegroup as a whole. To provision a roleAssignment for a single resource, we do not set a more specific scope but leave it out completely. Instead, the roleAssignment has to be nested within the resource it applies to. This is visible when comparing the type, name and scope properties of both definitions.
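A sketch of the resourcegroup-level variant, using the same parameters and variable as above; depending on the apiVersion the explicit scope property may also be omitted, in which case the assignment applies to the resourcegroup the template is deployed to:

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2018-09-01-preview",
  "name": "[parameters('appServiceContributerRoleGuid')]",
  "properties": {
    "roleDefinitionId": "[concat(subscription().id, '/providers/Microsoft.Authorization/roleDefinitions/', variables('roleDefinitionGuid'))]",
    "principalId": "[parameters('servicePrincipalObjectId')]",
    "scope": "[resourceGroup().id]"
  }
}
```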

Today and last Friday I had the opportunity to get one of my favorite, but older, topics out on a projector again: building database-per-tenant architectures, with one example spanning over 60,000 databases! I was lucky to be invited to both CloudBrew 2018 (Mechelen, Belgium) and CloudCamp Ireland 18 (Dublin) to give this crazy presentation. Since then I have received multiple requests to share my slides, which I did on Slideshare: https://www.slideshare.net/HenryBeen/cloud-brew-cloudcamp

A good number of slides were adapted from the Microsoft Wingtip Tickets repository, which you can find here: https://github.com/Microsoft/WingtipTicketsSaaS-DbPerTenant

If you attended and want to further discuss the topic, feel free to reach out to me!