
Azure Advent Calendar 2019 – Azure Logic Apps (again)

This year there is an Azure Advent Calendar, organized by Gregor Suttie and Richard Hooper, both fellow Microsoft MVPs. In total 75 sessions will be published, all covering an interesting Azure topic. Of course I jumped at the opportunity to get on board, and fortunately I was just in time to be added to the list. As always, I am happy to share with the community and delighted that I am allowed to do so. In this case, my contribution is on the topic of Logic Apps and can be found on the calendar’s YouTube channel.

When I started preparing for my session and looked over the calendar again, I found that there were two sessions on Logic Apps planned. Next to my own, there was also a session by Simon Waight. After a bit of coordination and a few e-mails back and forth, we divided up the topics. In the recording by Simon you will find:

  • An introduction to Logic Apps, connectors and actions
  • An introduction to pay per use environments and Integration Service Environments
  • Some best practices
  • A step by step guide on building your first Logic App

In my session I will expand upon this with the following subjects:

  • The introduction of a real world use case where I am working with Logic Apps
  • Infrastructure as Code for Logic Apps, so you can practice continuous deployment for Logic Apps
  • A short introduction on building your own Logic Apps.

I hope you enjoy the recording, which will go live sometime today on the calendar’s YouTube channel.

Merry Christmas!

Update Conference 2019

This week I had the privilege of attending the Update Conference in Prague.

It was a great opportunity to visit this beautiful city and to attend talks given by some very bright people from all around the world. I had a great time, am happy I was invited, and would be delighted to return next year.

As requested by some of the attendees, I am sharing my slides for both presentations here.

  1. Infrastructure as Code: Azure Resource Manager – inside out
  2. Secure deployments: Keeping your Application Secrets Private

Thank you for attending and I hope you enjoyed it just as much as I did!

What I learned creating an online training

When I visited my own website a few days ago, I was shocked at how long it has been since I posted here. The reason for this is that I have been working on two large projects, next to my regular work, for several months now. A few weeks ago the first one came to completion and I am really proud to say that my training Introduction to Azure DevOps for A Cloud Guru has been launched.

I think this has been the most ambitious side project I have started and completed in a while. And while I really enjoyed the work on this project, I also found it challenging. I had to learn a great deal of things on the fly, and often I found I had to spend a tremendous amount of time on something that was seemingly easy. For this reason, I thought it wise to recap what I have learned. And by sharing it, I hope others might benefit from my experiences as well.

1. Yes, it is work!

I think I have spent roughly one day per week for five months or so on this online course. My estimate is that this equates to roughly 100 to 150 hours of work for 3 hours of video material. I have no idea how this compares to other authors of online training material and maybe this is where I find out I completely suck. Also, my guesstimate may be completely wrong, for example due to time spent thinking about a video while going for a run, or taking a break and losing track of time. The point, however, is that this is a lot of time that you have to invest. This ratio of work hours to video hours shows that after a day of hard work, you have completed maybe twenty minutes of video or even less.

For me it is clear that you cannot do this type of work with a ‘twenty minutes here, half an hour there’ attitude. You will have to make serious room in your calendar and embrace the fact that you will spend at least one day a week on the project. If you cannot do that, progress will be too slow and your project might slowly fade into oblivion.

2. Find something small to start with

When I first came into contact with A Cloud Guru, I was strongly advised not to start working on a large project right away but start with a smaller project first. For me this meant the opportunity to create a more use-case focused, ACG Project style video. This is a 25-minute video that takes the viewer along in creating a cool project or showing a specific capability in a single video. This allowed me to practice and improve a lot of the skills that I would need when creating a complete course.

The advantage of creating and completing something smaller first, compared to splitting a larger project into parts, is that you are getting the full experience. You will have to write a project proposal, align your style of working with that of your editor, write a script, record, get feedback, start over, create a final recording, get more feedback and finally go through the steps of completing a project: adding a description, link to related resources, putting the sources online, etc.

Having this complete experience for a smaller project before moving on to a larger project really helped me.

3. Quality, quality, quality and … some more quality

Creating courses or videos for a commercial platform is paid work. This means that there is a clear expectation that the work you deliver will be of a certain quality. For example, I got a RØDE Podcaster delivered to my home to help with sound quality. However, just using a high-quality microphone was not enough. I have edited every audio sample in both Audacity and Camtasia to ensure that the audio did not contain any background noise or hum that would distract students. Also, all the videos were recorded on a large screen, configured to use a resolution of only 1080p. Every single time, all excess windows needed to be closed or a re-record was necessary. Every time, excess browser tabs needed to be closed or a re-record was necessary. Browser windows and terminals needed the correct level of zoom, or a re-record was necessary. Favorites had to be removed and the browser cache cleared to ensure there were no pop-ups of previous entries, and so on.

Conclusion? Producing high quality content entails a tremendous amount of details that you have to keep in mind every single time. Be prepared for that!

4. Be ready for (constructive) criticism

I feel really lucky in how my interactions with my editors at ACG went. Being on opposite sides of the world, most of the communication was offline, through documents or spreadsheets, and yet somehow they managed to make all feedback feel friendly and constructive. In the occasional video call there was always time for a few minutes of pleasantries before we got down to business. But yes, there were things to talk about. Feedback and criticism were frequent and very strict. I have re-edited some videos five or six times to meet the standards that were expected of me. Especially on the quality of audio and video, there was a clear expectation and it was rigorously verified by people who definitely had more experience than me.

All in all, I can say I have learned a lot recording these videos. But if you want to do this, be prepared and open to feedback.

5. Slides are cheap, demos are hard!

This is really a topic of its own and I think I will write more about it later. But if there is one thing I have learned, it is that demos are hard. Doing a demo on stage is hard, but much more forgiving than doing one in a video. When presenting, no one minds if your mouse cursor floats around a bit, searching for that button. In a live demo, it is cool to see someone debug a typo in a command on the fly. When presenting, you can make a minor mistake, correct it, and then explain what went wrong and how to handle it. The required level of perfection is not as high as for a recorded demo. And then there is the sound! I found it impossible to record the video and audio for my demos in one go and have developed my own approach to it, which I will write about some other time.

In my experience, recording five minutes of demo might take four to five times as long as recording a five-minute slide presentation.

6. A race against time

While recording your project, your subject might be changing. For example, in the time I was creating my training on Azure DevOps, multi-stage YAML builds were introduced, the user interface for Test Plans was changed, and several smaller features that I showed in my demos were removed, renamed or moved to another location. Honestly, there are parts that I have recorded multiple times due to the changes in Azure DevOps. Want more honesty? By the time the course went public, it was still outdated in some places. And yes, I know that I will have to update my course to include multi-stage YAML when it goes out of preview.

The point is, you will have to invest enough time every week in your project to ensure that your work on the course is not being overtaken by changes from the vendor. Software development, and cloud in particular, is changing at such a rate that you will have to plan for incoming changes and know how to adapt. Also, circling back, taking a ‘yes, this is work’ attitude will help you spend enough time on your project, shorten its duration and decrease the chance of being overtaken by changes.

Concluding

If you ever go down the path of creating an online training, I would recommend keeping the above in mind. Along with one final tip: make sure you enjoy doing it. One thing that I do know for sure now is that if I had not been enjoying my work on this training, having to do it next to my other work, I would never have finished it.

Oh by the way, more details on that second large project? That will have to wait a few more months I’m afraid.


Loading settings for an Azure Function binding from a Key Vault

Please note: I have written a follow-up to this blog post, detailing a new, better approach in my opinion

One of the services in Azure that I have enjoyed most lately is Azure Functions. Functions are great for writing the small, single-purpose type of application that I write a lot nowadays. However, last week I was struggling with the configuration of bindings using an Azure Key Vault and I thought I’d share how to fix that.

When you create a new Azure Function for writing a message to a queue every minute, you might end up with something like the code below.

public class DemoFunction
{
  private readonly ILogger _logger;
  private readonly IConfiguration _configuration;

  public DemoFunction(ILogger logger, IConfiguration configuration)
  {
    _logger = logger;
    _configuration = configuration;
  }

  [FunctionName(nameof(DemoFunction))]
  public async Task Run(
    [TimerTrigger("0 */1 * * * *")] TimerInfo timer,
    [ServiceBus("%queueName%", Connection = "serviceBusConnectionString", EntityType = EntityType.Queue)]
      IAsyncCollector<string> serviceBusQueue)
  {
    var loopCount = int.Parse(_configuration["loopCount"]);

    for (var i = 0; i < loopCount; i++)
    {
      await serviceBusQueue.AddAsync(i.ToString());
    }

    await serviceBusQueue.FlushAsync();
  }
}

As you can see, I am using Functions V2 and the new approach to dependency injection using constructors and non-static functions. And this works great! Not being in a static context anymore is highly satisfying for an OOP programmer and it also means that I can retrieve dependencies like my logger and configuration through the constructor.

One of the things I am doing with my Function is pulling the name of the queue, and the connection string to connect to that queue, from my application settings. For a connection string this is the default behavior, and for the name of a queue or topic I can do the same by using a variable name enclosed in %-signs. After adding the correct three settings to my application settings, this function runs fine locally. My IConfiguration instance is automatically built and filled by the Functions runtime, and my queueName, serviceBusConnectionString and loopCount settings are in my local.settings.json.
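For reference, this is a minimal sketch of what such a local.settings.json could look like; the setting names match the snippets above, the values are of course placeholders:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "queueName": "demo-queue",
    "serviceBusConnectionString": "<your Service Bus connection string>",
    "loopCount": "10"
  }
}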

The problem comes when trying to move this function to the cloud. There I do not have a local.settings.json, nor do I want to have secrets in my application settings, the default location for the Functions runtime to pull its settings from. What I want to do is use an Azure Key Vault for storing my secrets and load any secrets from there.

It might be that my Google-fu has been failing me, but unfortunately I have not found any hook or method to allow the loading of extra configuration values for an Azure Function. Integrating with the runtime was important for me, since I also wanted to grab the values used to configure my Function’s bindings from the configuration, not just the configuration that was used within my function code.

Anyhow, what I ended up doing after a while of searching was the following:

public class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder)
    {
        var services = builder.Services;

        var hostingEnvironment = services
            .BuildServiceProvider()
            .GetService<IHostingEnvironment>();

        var configurationBuilder = new ConfigurationBuilder()
            .SetBasePath(hostingEnvironment.ContentRootPath)
            .AddEnvironmentVariables();

        if (!hostingEnvironment.IsDevelopment())
        {
            // Use the managed identity of the Function to read secrets from the Key Vault
            // and append them to the configuration built so far
            var currentConfiguration = configurationBuilder.Build();
            var tokenProvider = new AzureServiceTokenProvider();
            var kvClient = new KeyVaultClient((authority, resource, scope) =>
                tokenProvider.KeyVaultTokenCallback(authority, resource, scope));

            configurationBuilder
                .AddAzureKeyVault($"https://{currentConfiguration["keyVaultName"]}.vault.azure.net/",
                    kvClient, new DefaultKeyVaultSecretManager());
        }

        services.AddSingleton<IConfiguration>(configurationBuilder.Build());

        // More dependencies ...
    }
}

The solution above will, when running in the cloud, use the Managed Identity of the Function plan to pull the values from a Key Vault and append them to the configuration. It works like a charm, however it feels a bit hacky to override the existing configuration this way. If you find a better way, please do let me know!

4DotNet event May 2019

Last week I had the pleasure of attending the 4DotNet event in Zwolle, The Netherlands. Next to catching up with old friends, I enjoyed presenting and listening to two other talks.

Pat Hermens | Learning from Failure – Finding the ‘second story’

The first speaker up was Pat Hermens. Pat talked to us about failure. Going over three examples, he explored how a catastrophic event came to be, continuously returning to the question: was this a failure [by someone]? His point was that the answer was “no” in all these cases. Instead of focusing on what we believe, in hindsight, was a mistake or human error, we should focus on the circumstances that made such an error even possible. Assuming no ill intent, no one wants a nuclear meltdown or a space shuttle crash to occur – still they did, while everyone believed they were making correct decisions. Focusing on the second story, the circumstances or culture that allowed the wrong decision to look like a good one, is in his opinion the way forward.

Patrick Schmidt | Pitfalls when building high-performance applications

Next up was Patrick Schmidt. Patrick talked about some .NET internals and explained how you can still create memory leaks in a managed language. He showed how creating a lambda function that closes over a large object can end in a memory leak. He then moved on to explain some garbage collector internals and how incorrect usage of object creation and destruction can ruin your performance. Of course the prime example of string concatenation vs. the StringBuilder came along here. Finally, he talked about some pitfalls when using Entity Framework: the n+1 query problem and how you can accidentally download a whole table and only do the selection in memory by mixing up IQueryable and IEnumerable.

Henry Been | Logging, instrumentation, dashboards, alerts and all that – for developers

For the final session I had the privilege of presenting myself. In this session I shared what I have learned about monitoring and logging over the last year, while using Azure Monitor in a number of applications. The slide deck for this session can be downloaded. If you are looking for an example application to try things out yourself, you can continue working with the example I showed during the talk.


ARM template support for Cosmos DB databases and collections

If you have read any of my blogs before, or know me only a little bit, you know I am a huge fan of ARM templates for Azure Resource Manager. However, every now and then I run into some piece of infrastructure that I would like to set up for my application, only to find out that it is not supported by ARM templates. Examples are Cosmos DB databases and collections. Having to createIfNotExists() these was always a pain to code and it also mixes up the responsibility of resource allocation with business logic. But no more: as part of all the #MSBuild news, the following came in!

As of right now, you can specify the creation of a Cosmos DB database and collection using ARM templates. To create a Cosmos DB database for use with the SQL API, you can now use the following template:

{
    "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
    "name": "accountName/sql/databaseName",
    "apiVersion": "2016-03-31",
    "properties": {
        "resource": {
            "id": "databaseName"
        },
        "options": {
            "throughput": 400
        }
    }
}

After setting up a database, it is time to add a few containers. In this case I already provisioned throughput at the database level, so I can add as many containers as I need without additional cost. But, let’s start with just one:

{
    "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
    "name": "accountName/sql/databasename/containername",
    "apiVersion": "2016-03-31",
    "dependsOn": [ 
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases/accountName/sql/databaseName')]"
    ],
    "properties":
    {
        "resource":{
            "id":  "containerName",
            "partitionKey": {
                "paths": [
                    "/PartitionKey"
                ],
                "kind": "Hash"
            },
            "indexingPolicy": {
                "indexingMode": "consistent",
                "includedPaths": [{
                        "path": "/*",
                        "indexes": [
                            {
                                "kind": "Range",
                                "dataType": "number",
                                "precision": -1
                            },
                            {
                                "kind": "Hash",
                                "dataType": "string",
                                "precision": -1
                            }
                        ]
                    }
                ]
            }
        }
    }
}

Not only can I create the container and specify the now mandatory partition key, I can also specify custom indexing policies. Putting this together with the template that I already had for creating a Cosmos DB account, I can now automatically create all the dependencies for my application using the following ARM template:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "discriminator": {
      "type": "string",
      "minLength": 1
    }
  },
  "variables": {
      "accountName": "[concat(parameters('discriminator'), '-adc-demo')]",
      "databaseName": "myDatabase",
      "usersContainerName": "users",
      "customersContainerName": "customers"
  },
  "resources": [
    {
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "name": "[variables('accountName')]",
      "apiVersion": "2016-03-31",
      "location": "[resourceGroup().location]",
      "kind": "GlobalDocumentDB", 
      "properties": {
        "databaseAccountOfferType": "Standard",
        "consistencyPolicy": {
          "defaultConsistencyLevel": "Session",
          "maxIntervalInSeconds": 5,
          "maxStalenessPrefix": 100
        },
        "name": "[variables('accountName')]"
      }
    },
    {
      "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases", 
      "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]", 
      "apiVersion": "2016-03-31",
      "dependsOn": [
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/', variables('accountName'))]"
      ], 
      "properties": { 
        "resource": { 
          "id": "[variables('databaseName')]"
        },
        "options": { 
          "throughput": "400" 
        } 
      }
    },
    { 
      "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
      "name": "[concat(variables('accountName'), '/sql/', variables('databasename'), '/', variables('usersContainerName'))]", 
      "apiVersion": "2016-03-31", 
      "dependsOn": [
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]"
      ], 
      "properties": { 
        "resource": {
          "id": "[variables('usersContainerName')]", 
          "partitionKey": {
            "paths": [
               "/CustomerId"
            ],
             "kind": "Hash"
          }, 
          "indexingPolicy": {
            "indexingMode": "consistent", 
            "includedPaths": [
              { 
                "path": "/*", 
                "indexes": [
                  { "kind": "Range", 
                    "dataType": "number", 
                    "precision": -1
                  },
                  { 
                    "kind": "Hash", 
                    "dataType": "string", 
                    "precision": -1
                  }
                ]
              }
            ]
          }
        } 
      }
     },
     { 
       "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
       "name": "[concat(variables('accountName'), '/sql/', variables('databasename'), '/', variables('customersContainerName'))]", 
       "apiVersion": "2016-03-31", 
       "dependsOn": [
         "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]"
       ], 
       "properties": { 
         "resource": {
           "id": "[variables('customersContainerName')]", 
           "partitionKey": {
             "paths": [
                "/City"
             ],
              "kind": "Hash"
           }, 
           "indexingPolicy": {
             "indexingMode": "consistent", 
             "includedPaths": [
               { 
                 "path": "/*", 
                 "indexes": [
                   { "kind": "Range", 
                     "dataType": "number", 
                     "precision": -1
                   },
                   { 
                     "kind": "Hash", 
                     "dataType": "string", 
                     "precision": -1
                   }
                 ]
               }
             ]
           }
         } 
       }
      }
  ]
}
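If you want to take this template for a spin, a deployment could look something like the sketch below, using the AzureRM cmdlets; the resource group name, template file name and discriminator value are placeholders, not part of the template above:

# Create (or update) the target resource group and deploy the template into it
New-AzureRmResourceGroup -Name "cosmos-demo" -Location "West Europe" -Force

# Template parameters such as 'discriminator' can be passed as extra cmdlet parameters
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "cosmos-demo" `
    -TemplateFile ".\cosmosdb.json" `
    -discriminator "hb"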

I hope you enjoy Cosmos DB database and collection support just as much as I do. Happy coding!

Add a SSL certificate to your Azure Web App using an ARM template revisited

Following up on my previous post on this subject (https://www.henrybeen.nl/add-a-ssl-certificate-to-your-azure-web-app-using-an-arm-template/), I am sharing a minimal, yet complete, working example of an ARM template that can be used to provision the following:

  • An App Service Plan
  • An App Service, with:
    • A custom domain name
    • The Lets Encrypt Site extension installed
    • All configuration of the Lets Encrypt Site extension prefilled
  • An Authorization Rule for a Service Principal to install certificates

The ARM template can be found at: https://github.com/henrybeen/ARM-template-AppService-LetsEncrypt

To use this to create a Web App with a Lets Encrypt certificate that is automatically renewed, you have to do the following:

  • Pre-create a new Service Principal in your Azure Active Directory and obtain the objectId, clientId and a clientSecret for that Service Principal
  • Fill in the parameters.json file with a discriminator to make the names of your resources unique, the obtained objectId, clientId and clientSecret, a self-chosen GUID to use as the authorization rule name, and a custom hostname
  • Create a CNAME record pointing from that domain name to the following URL: {discriminator}-appservice.azurewebsites.net
  • Roll out the template (a deployment sketch follows below this list)
  • Open up the Lets Encrypt extension, find all settings prefilled and request a certificate!
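For the roll-out step, this is a sketch of how such a deployment could be started from PowerShell; the resource group and file names below are assumptions, check the repository for the actual names:

# Deploy the App Service, site extension configuration and authorization rule in one go
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "letsencrypt-demo" `
    -TemplateFile ".\template.json" `
    -TemplateParameterFile ".\parameters.json"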

Azure Policy 4: Azure Policy for Management Groups

In response to a comment / question on an earlier blog, I have taken a quick look at applying Azure Policy to Azure Management Groups. Azure Management Groups are a relatively new concept that was introduced to ease the management of authorizations on multiple subscriptions, by providing a means to group them. For such a group, RBAC roles and assignments can be created to manage authorizations for a group of subscriptions at once. This saves a lot of time when managing multiple subscriptions, but also reduces the risk of mistakes or oversight of a single assignment. A real win.

Now, it is claimed here that Azure Policies can also be defined in and assigned to management groups. However, how to do that is not documented yet (to my knowledge and limited Goo– Bing skills), nor was it visible in the portal. So after creating a management group in the portal (which I had not done before), I turned to PowerShell and wrote the following to try and do a policy assignment to a management group:

$policyDefinition = Get-AzureRmPolicyDefinition -Name ca7660f6-1ba5-4c57-b26c-f816d2a192f6
$mg = Get-AzureRmManagementGroup -GroupName test
New-AzureRmPolicyAssignment -Name test -DisplayName test -PolicyDefinition $policyDefinition -Scope $mg.Id

Which gave me the following error:

New-AzureRmPolicyAssignment : InvalidCreatePolicyAssignmentRequest : The policy definition specified in policy assignment 'test' is out of scope. Policy definitions should be specified only at or 
above the policy assignment scope.
At line:1 char:1
+ New-AzureRmPolicyAssignment -Name test -DisplayName test -PolicyDefin ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [New-AzureRmPolicyAssignment], ErrorResponseMessageException
+ FullyQualifiedErrorId : InvalidCreatePolicyAssignmentRequest,Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzurePolicyAssignmentCmdlet

Which makes sense: you can only assign a policy to (a resource group in) a subscription if that is also the subscription the policy definition is saved in. So on to find out how to define a policy within a management group. To do that, I first wanted to retrieve an existing policy from the portal, so I navigated to the Azure Policy page in the portal and stumbled onto the following screen:

And from here on, I could just assign a policy to a management group, if I had a policy definition in that group already… After switching to defining a policy, I noticed that I could now also save a policy definition to a management group.

So the conclusion is: yes, you can assign Azure Policies to Management Groups just like you can to a resource group or subscription, iff you already have at least one management group!
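For completeness, a sketch of what that could look like in PowerShell; this assumes a version of the AzureRM.Resources module that exposes the -ManagementGroupName parameter, and the policy name and rule file are made up for this example:

# Save the policy definition at the management group level ...
$definition = New-AzureRmPolicyDefinition `
    -Name "audit-vms" `
    -DisplayName "Audit every Virtual Machine" `
    -Policy ".\audit-vms-rule.json" `
    -ManagementGroupName "test"

# ... and assign it to that same management group
$mg = Get-AzureRmManagementGroup -GroupName test
New-AzureRmPolicyAssignment `
    -Name "audit-vms-on-test" `
    -PolicyDefinition $definition `
    -Scope $mg.Id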


Azure Policy part 3: Automatically fix or augment unwanted ARM template deployments

Now, denying unwanted configurations is fine and can be a great help, but would it not be much better if we could automatically fix unwanted configurations when they are deployed? Yes… this has pros and cons. Automagically correcting errors is not always the best way forward, as there is not really a learning opportunity for the team member deploying the unwanted configuration. On the other hand, if you can fix it automatically, why not? Just weigh your options on a case-by-case basis, I guess.

Let’s take an example from the database world. Let’s say we have a requirement that an IP address must be added to the firewall of every single database server in our subscription. The policy that allows us to specify an IP address to add to the firewall of every database server is as follows:

{
  "if": {
    "field": "type",
    "equals": "Microsoft.Sql/servers"
  },
  "then": {
    "effect": "DeployIfNotExists",
    "details": {
      "type": "Microsoft.Sql/servers/firewallrules",
      "name": "AllowAccessForExampleIp",
      "existenceCondition": {
        "allOf": [
          {
            "field": "Microsoft.Sql/servers/firewallRules/startIpAddress",
            "equals": "[parameters('ipAddress')]"
          },
          {
            "field": "Microsoft.Sql/servers/firewallrules/endIpAddress",
            "equals": "[parameters('ipAddress')]"
          }
        ]
      },
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {
            "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
            "contentVersion": "1.0.0.0",
            "parameters": {
              "serverName": {
                "type": "string"
              },
              "ipAddress": {
                "type": "string"
              }
            },
            "resources": [
              {
                "name": "[concat(parameters('serverName'), '/AllowAccessForExampleIp')]",
                "type": "Microsoft.Sql/servers/firewallrules",
                "apiVersion": "2014-04-01",
                "properties": {
                  "startIpAddress": "[parameters('ipAddress')]",
                  "endIpAddress": "[parameters('ipAddress')]"
                }
              }
            ]
          },
          "parameters": {
            "serverName": {
              "value": "[field('name')]"
            },
            "ipAddress": {
              "value": "[parameters('ipAddress')]"
            }
          }
        }
      },
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ]
    }
  }
}

Quite the JSON. Let’s walk through this step by step. First of all we have the condition under which this policy must apply: whenever we are deploying something of type Microsoft.Sql/servers. The effect we are looking for is deployIfNotExists, which will do an additional ARM template deployment whenever the existenceCondition is not fulfilled. This template takes the same form as any nested template, which means we have to respecify all parameters of the template and provide them from the parameters of the policy or using field values.

Managed Identity for template deployment

Every ARM template deployment is done on behalf of an authenticated identity. When using any effect that causes another deployment, you have to create a Managed Identity when assigning the policy. In the policy property roleDefinitionIds you should list all roles that are needed to deploy the defined template. When assigning this policy to a subscription or resource group, Azure will automatically provision a service principal with these roles over the correct scope, which will be used to deploy the specified template.
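As an illustration, a sketch of what such an assignment could look like from PowerShell; the definition name, scope and IP address are placeholders, and the -AssignIdentity and -Location parameters are what trigger the creation of the managed identity:

# Assign the policy and have Azure create a managed identity for the remediation deployments
$definition = Get-AzureRmPolicyDefinition -Name "add-ip-to-sql-firewall"

New-AzureRmPolicyAssignment `
    -Name "add-ip-to-sql-firewall" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ ipAddress = "10.0.0.4" } `
    -Scope "/subscriptions/<subscription-id>" `
    -AssignIdentity `
    -Location "westeurope"

# Depending on the tooling used, the roles listed under roleDefinitionIds may still have
# to be granted to this identity manually; the portal takes care of this when assigning there.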

Field() function and aliases

In the template itself (and also when passing parameters to the template), there is the usage of a function called field. With this function you can reference one or more properties of the resource that triggered the policy. To see all available fields per resource type, use the Get-AzureRmPolicyAlias PowerShell command. This will provide a list of all aliases available. To filter this list by namespace, you can use:

Get-AzureRmPolicyAlias -ListAvailable | Where-Object -EQ -Property Namespace -Value Microsoft.Sql

Policy in action

After creating and assigning this policy, we can see it in action by creating a SQL Server using the portal. This is just another interface for creating a template deployment. After successfully deploying this template, our policy will be evaluated and the first time it will create the intended firewall rule. We can see this when looking up all deployments to the resource group, which will also list a PolicyDeployment:

Next to that, when looking at the related events for the initial Microsoft.SQLServer deployment, we see that our policy is accepted for deployment after the initial deployment:

Azure Policy part 2: Automatically deny unwanted ARM template deployments

In my previous blog post I showed how to audit unwanted Azure resource configurations. Now, listing non-compliant resources is fine and can be a great help, but would it not be much better if we could just prevent unwanted configurations from being deployed?

Deny any deployment not in Europe

Let’s take the example of Azure regions. If you are working for an organization that wants to operate just within Europe, it might be best to simply deny any deployment outside of West Europe and North Europe. This will help prevent mistakes and help enforce rules within the organization. How would we write such a policy?

Any policyRule still consists of two parts, an if and a then. First the if, which acts like a query on our subscription or resource group and determines which resources the policy will be applied to. In this case we want to trigger our policy under the following condition: location != ‘North Europe’ && location != ‘West Europe’. We write this in JSON by nesting conditions. This syntax is a bit verbose, but easy to understand. The effect that we want to trigger is a simple deny of any deployment that matches this condition. In JSON, this would look like this:

{
  "type": "Microsoft.Authorization/policyDefinitions",
  "name": "deny-non-europe-deployments",
  "properties": {
    "displayName": "Deny deployments outside of Europe",
    "description": "This policy denies any deployment to a region other than West Europe or North Europe.",
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "location",
            "notEquals": "westeurope"
          },
          {
            "field": "location",
            "notEquals": "northeurope"
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}

Creating a policy like this and then applying it to a subscription or resource group will instruct the Azure Resource Manager to immediately deny any deployment that violates the policy. With the following as a result:

Azure Policy is also evaluated when deploying from Azure Pipelines, where you will also get a meaningful error when trying to deploy any template that violates a deny policy: