In response to my post on tracking bugs using Azure DevOps, I got a question about tracking different types of bugs. Since the question came in writing, I am going to answer it here, to hopefully explain my view on tracking bugs in more detail.

Question (slightly edited)

I see a difference between three types of bugs:

  1. Bugs that are discovered when testing a user-story.
  2. Bugs that are discovered when testing a user-story, but are very important (for example an unexpected app shutdown).
  3. Bugs in production that are related to an epic closed a long time ago.

In the first case I would create a new task to deal with the problem encountered. In the second case I would add an Impediment object to the sprint backlog. And finally, in the third case, I would create an actual bug and track it on the backlog. Now, we are organizing our user stories in features and our features in epics. So where in this hierarchy should I place this bug? Should I create an epic called “Bugs” with the feature “Bugs” under it, and then add the bug under that feature?

Answer

How to track bugs that are not part of the in-progress work is a question I have encountered at different clients. To answer it, I think we must further differentiate between two groups of bugs that make up this third category:

  1. Bugs in production that we are tracking in a bug-tracking capacity, for example to allow customers to vote on them, publish workarounds, etc. Some of these bugs might be part of the product for years, but haven’t been fixed and will probably never be fixed.
  2. Bugs in production that we are tracking because we want to work on them, for example bugs that we want to fix in the next sprint or ‘somewhere this quarter.’

If you already have a system for tracking this first category, you probably don’t want to duplicate all the bugs from there into Azure DevOps. But if you don’t have such a system, it makes sense to track these bugs in Azure DevOps instead and yes, I would then add them all under a feature ‘Known Bugs,’ under an epic ‘Known Bugs.’ But again, if you are using a CRM, GitHub issues or bug-tracking software for recording and detailing bugs, it doesn’t really make sense to me to duplicate all these bugs into Azure DevOps.

The second category is different: these bugs I would duplicate to Azure DevOps (or even better, link to them from within Azure DevOps to avoid spreading information related to the bug over two systems). So how to link these to features and epics? For me it is important that epics and features are ‘done’ at some point, so a general epic or feature ‘bugs to fix’ doesn’t really make sense to me. Instead I would propose to create features like ‘Bugs Sprint 101,’ ‘Bugs Sprint 102,’ and so on, and plan these features for the mentioned sprints. We then connect the bugs we duplicate from the bug tracker to the feature, and we are suddenly in planning mode. These features I would then group under epics called ‘LCM 2022Q3,’ ‘LCM 2022Q4,’ and so on. This provides a good overview of which bugs we will be fixing when. Still, I wouldn’t plan more than one or two sprints ahead myself.

The added benefit of this system is that it allows you to plan other life cycle management (LCM) work as well. Let’s say you are facing an upgrade to .NET 6 or the latest version of Angular, or you need to remove a dependency on a duplicated library in 8 projects. As fixing these things right away is not always possible, you now have the epics in place to plan your LCM work ahead as well.

PS

Tracking a bug that introduces a critical regression as an impediment instead of a bug (the second case) is something that can be debated. Personally, I don’t really see the difference between the first and second type of bug. However, this may vary from context to context. I would be curious to learn when this distinction makes sense.

When you are using Azure DevOps for tracking work items like features, user stories, and tasks, you probably want to track bugs in that same system. And of course, this is possible with the correct configuration.

Within Azure DevOps, the work item types (epic, feature, user story, task, etc.) that are available are controlled by the work item process you choose for your project. Secondly, once you have chosen a work item process that supports tracking bugs, you have to configure how you want to track them: as a backlog item, or as a task. But before we get to that, let’s explore choosing a work item process.

Choosing a work item process

The work item process is selected when you create the Project, and cannot be changed afterward. Selecting a different work item process is available under the Advanced options when creating a project:

Each work item process has different types of work items available, which are all listed here. To track bugs on your backlog, you will have to choose the Agile, CMMI, or Scrum process. My personal favorite is the Agile process, as I prefer the term ‘User Story’ over Product Backlog Item or Requirement. It also has the benefit over the Basic process of having two levels of grouping for user stories: both epics and features.

For the remainder of my examples, I have selected the Agile process.
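If you prefer scripting over clicking through the portal, the Azure DevOps CLI extension can create a project with a specific work item process as well. A minimal sketch, assuming the azure-devops extension is installed and with the organization URL and project name as placeholders to replace with your own:

az extension add --name azure-devops
az devops project create --name "MyProject" --org https://dev.azure.com/myorganization --process Agile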

Choosing how to track bugs

Once you have selected a work item process that allows for tracking bugs, you can decide how to track bugs. There are three options available:

  • Tracking bugs as user stories. This allows you to create bugs on the backlog, where they behave just like user stories: you can group them under a feature, order them for priority and move them to the sprint backlog when you are ready to begin work on them. Once in the sprint backlog, you can add one or more tasks below the bug, tracking progress towards resolving it;
  • Tracking bugs as tasks. This allows you to create bugs as children of a user story. You cannot track bugs on the product backlog directly, cannot group them under a feature, and cannot add them to a sprint directly. Instead, within every user story on the backlog, you can add one or more bugs;
  • Not tracking bugs, effectively disabling the ‘bug’ work item type. It is beyond me why you would want to do this. If you don’t want to use the bug work item type, then just don’t.

Here you can see how to choose between the three options:

To get here, first, navigate to your sprint board or product backlog. Next, click the cogwheel icon at the top right. In the pop-up that opens, select ‘Working with bugs,’ to find the option you are looking for.

But which option to choose? Let’s explore the first two alternatives in a bit more detail to help you decide which to use.

Tracking bugs as user stories

When you are tracking bugs as user stories, they appear on the product backlog along with the user stories, as you can see below on the left. They can be nested below features and you can prioritize them against other work within the feature. On the right, you see the same stories and bugs in the sprint backlog. Here both appear on the left, and you can add tasks to both the stories and the bugs to work towards completing the work.

Tracking bugs as tasks

When tracking bugs as tasks, they do not appear on the product backlog along with user stories, as you can see below on the left. On the right is the sprint view; here the bugs no longer appear on the left, but instead they can be added to user stories, just like tasks.

Choosing between the two options

Now, which one do you choose? I think it depends on how you work and what you use the ‘bug’ work item for. My personal belief is that it makes no sense to track bugs as tasks under a user story. As long as the story is not complete, anything that is not working properly yet is just work to be done. A defect is only worth tracking as a bug once it has been reported by users, and no sooner. Therefore, bugs should be product backlog items in my opinion: trackable on the backlog by users and team members, but more importantly, ready for prioritization. You should be able to prioritize bugs just like regular work items on the product backlog.

Do you agree? And if not, what are your reasons?

If you want to learn more about Azure DevOps, check out my Azure DevOps course at A Cloud Guru!

 

There is no place like home, and also: there is no place like production. Production is the only place where your code is really put to the test by your users. This insight has helped developers accept that we have to go to production fast and often. But we also want to do so in a responsible way. Common tactics for safely going to production are, for example, blue-green deployments and the use of feature flags.

While they help limit the impact of mistakes, the downside of these two approaches is that they both run only one version of your code: whether through a different binary or a different code path, either the old or the new implementation is executed. But what if we could execute both the old and the new code in parallel and compare the results? In this post I will show you how to run two algorithms in parallel and compare their results, using a library called Scientist.NET, which is available on NuGet.

The use case

To experiment a bit with experimentation, I have created a straightforward site for calculating the nth position in the Fibonacci sequence, as you can see below.

I started out with the simplest implementation that seemed to meet the requirements: the calculation is done using recursion, as shown below.

public class RecursiveFibonacciCalculator : IRecursiveFibonacciCalculator
{
  public Task<int> CalculateAsync(int position)
  {
    return Task.FromResult(InnerCalculate(position));
  }

  private int InnerCalculate(int position)
  {
    if (position < 1)
    {
      return 0;
    }
    if (position == 1)
    {
      return 1;
    }

    return InnerCalculate(position - 1) + InnerCalculate(position - 2);
  }
}

While this is a very clean and to-the-point implementation code-wise, the performance is, to say the least, up for improvement. So I took a few more minutes and came up with an implementation that I believe is equally correct, but much more performant:

public class LinearFibonacciCalculator : ILinearFibonacciCalculator
{
  public Task<int> CalculateAsync(int position)
  {
    // Guard against positions below 1, mirroring the recursive implementation
    // (otherwise results[1] below would be out of range for position 0).
    if (position < 1)
    {
      return Task.FromResult(0);
    }

    var results = new int[position + 1];

    results[0] = 0;
    results[1] = 1;

    for (var i = 2; i <= position; i++)
    {
      results[i] = results[i - 1] + results[i - 2];
    }

    return Task.FromResult(results[position]);
  }
}

However, just swapping implementations and releasing didn’t feel good to me, so I thought: how about running an experiment on this?

The experiment

With the experiment that I am going to run, I want to achieve the following goals:

  • On every user request, run both my recursive and my linear implementation.
  • To the user, I want to return the result of the recursive implementation, which I know to be correct.
  • While doing this, I want to record:
    • If the linear implementation yields the same results
    • Any performance differences.

To do this, I installed the Scientist NuGet package and added the code shown below as my new implementation.

public async Task OnPost()
{
  HasResult = true;
  Position = FibonacciInput.Position;

  Result = await Scientist.ScienceAsync<int>("fibonacci-implementation", experiment =>
  {
    experiment.Use(async () => await _recursiveFibonacciCalculator.CalculateAsync(Position));
    experiment.Try(async () => await _linearFibonacciCalculator.CalculateAsync(Position));

    experiment.AddContext("Position", Position);
  });
}

This code calls into the Scientist functionality and sets up an experiment with the name fibonacci-implementation that should return an int. The configuration of the experiment is done using the calls to Use(..) and Try(..).

Use(..): The use method is called with a lambda that should execute the known, trusted implementation of the code that you are experimenting with.

Try(..): The try method is called with another lambda, but this time containing the new, not yet verified implementation of the algorithm.

Both the Use(..) and Try(..) methods accept a synchronous lambda as well, but I use async/await here on purpose. The advantage of using async lambdas is that both implementations will be executed in parallel, thus reducing the duration of the web server call. The final thing I do, with the call to the AddContext(..) method, is add a named value to the experiment. I can use this context property bag later on to interpret the results and to pin down scenarios in which the new implementation is lacking.

Processing the runs

While the code above takes care of running two implementations of the Fibonacci sequence in parallel, I am not doing anything with the results yet – so let’s change that. Results can be redirected to an implementation of the IResultPublisher interface that ships with Scientist, by assigning an instance to the static ResultPublisher property, as I do in my Startup class.

var resultsPublisher = new ExperimentResultPublisher();
Scientist.ResultPublisher = resultsPublisher;

In the ExperimentResultPublisher class, I have added the code below.

public class ExperimentResultPublisher : IResultPublisher, IExperimentResultsGetter
{
  public LastResults LastResults { get; private set; }
  public OverallResults OverallResults { get; } = new OverallResults();

  public Task Publish<T, TClean>(Result<T, TClean> result)
  {
    if (result.ExperimentName == "fibonacci-implementation")
    {
      LastResults = new LastResults(!result.Mismatched, result.Control.Duration, result.Candidates.Single().Duration);
      OverallResults.Accumulate(LastResults);
    }

    return Task.CompletedTask;
  }
}

For all instances of the fibonacci-implementation experiment, I am saving the results of the last observation. Observation is the Scientist term for a single execution of the experiment. Once I have moved the results over to my own class LastResults, I add these last results to another class of my own, OverallResults, which calculates the minimum, maximum, and average durations for each algorithm.

The LastResults and OverallResults properties are part of the IExperimentResultsGetter interface, which I later inject into my Razor page.
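The post does not show the LastResults and OverallResults classes themselves, so here is a minimal sketch of what they could look like. The constructor signature and the Accumulate(..) call match the publisher code above; everything else (property names, the running min/max/average bookkeeping) is an assumption:

using System;

public class LastResults
{
  public LastResults(bool matched, TimeSpan controlDuration, TimeSpan candidateDuration)
  {
    Matched = matched;
    ControlDuration = controlDuration;
    CandidateDuration = candidateDuration;
  }

  public bool Matched { get; }
  public TimeSpan ControlDuration { get; }
  public TimeSpan CandidateDuration { get; }
}

public class OverallResults
{
  private int _count;

  public int MismatchCount { get; private set; }
  public TimeSpan MinControlDuration { get; private set; } = TimeSpan.MaxValue;
  public TimeSpan MaxControlDuration { get; private set; }
  public TimeSpan AverageControlDuration { get; private set; }
  public TimeSpan MinCandidateDuration { get; private set; } = TimeSpan.MaxValue;
  public TimeSpan MaxCandidateDuration { get; private set; }
  public TimeSpan AverageCandidateDuration { get; private set; }

  // Folds a single observation into the running statistics.
  public void Accumulate(LastResults last)
  {
    _count++;

    if (!last.Matched)
    {
      MismatchCount++;
    }

    MinControlDuration = last.ControlDuration < MinControlDuration ? last.ControlDuration : MinControlDuration;
    MaxControlDuration = last.ControlDuration > MaxControlDuration ? last.ControlDuration : MaxControlDuration;
    // Running average: newAvg = oldAvg + (value - oldAvg) / count, calculated on ticks.
    AverageControlDuration = TimeSpan.FromTicks(
      AverageControlDuration.Ticks + (last.ControlDuration.Ticks - AverageControlDuration.Ticks) / _count);

    MinCandidateDuration = last.CandidateDuration < MinCandidateDuration ? last.CandidateDuration : MinCandidateDuration;
    MaxCandidateDuration = last.CandidateDuration > MaxCandidateDuration ? last.CandidateDuration : MaxCandidateDuration;
    AverageCandidateDuration = TimeSpan.FromTicks(
      AverageCandidateDuration.Ticks + (last.CandidateDuration.Ticks - AverageCandidateDuration.Ticks) / _count);
  }
}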

Results

All of the above, combined with some HTML, gave me the following results after a number of runs.

I hope you can see how you can take this forward and extract more meaningful information from this type of experimentation. One thing that I would highly recommend is finding all observations where the existing and the new implementation do not match, and logging a critical error from your application when that happens.
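As a sketch of that idea, the publisher could flag mismatches like this. The _logger is an assumption here (it is not part of the code above); you would inject an ILogger<ExperimentResultPublisher> via Microsoft.Extensions.Logging, for example:

public Task Publish<T, TClean>(Result<T, TClean> result)
{
  if (result.ExperimentName == "fibonacci-implementation" && result.Mismatched)
  {
    // A mismatch means the new implementation disagreed with the trusted one
    // for a real user request, so treat it as a critical finding.
    _logger.LogCritical(
      "Experiment {ExperimentName} mismatched: control took {ControlDuration}, candidate took {CandidateDuration}",
      result.ExperimentName,
      result.Control.Duration,
      result.Candidates.Single().Duration);
  }

  return Task.CompletedTask;
}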

Just imagine how you can have your users iterate and verify all your test cases, without them ever knowing.

If you have read any of my blogs before, or know me only a little bit, you know I am a huge fan of ARM templates for Azure Resource Manager. However, every now and then I run into some piece of infrastructure that I would like to set up for my application, only to find out that it is not supported by ARM templates. Examples were Cosmos DB databases and collections. Having to createIfNotExists() these was always a pain to code, and it also mixes up the responsibility of resource allocation with business logic. But no more: as part of all the #MSBuild news, the following came in!

As of right now, you can specify the creation of a Cosmos DB database and collection using ARM templates. To create a Cosmos DB database for use with the SQL API, you can now use the following template:

{
    "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases",
    "name": "accountName/sql/databaseName",
    "apiVersion": "2016-03-31",
    "properties": {
        "resource": {
            "id": "databaseName"
        },
        "options": {
            "throughput": 400
        }
    }
}

After setting up a database, it is time to add a few containers. In this case I already provisioned throughput at the database level, so I can add as many containers as I need without additional cost. But, let’s start with just one:

{
    "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
    "name": "accountName/sql/databasename/containername",
    "apiVersion": "2016-03-31",
    "dependsOn": [ 
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases/accountName/sql/databaseName')]"
    ],
    "properties":
    {
        "resource":{
            "id":  "containerName",
            "partitionKey": {
                "paths": [
                    "/PartitionKey"
                ],
                "kind": "Hash"
            },
            "indexingPolicy": {
                "indexingMode": "consistent",
                "includedPaths": [{
                        "path": "/*",
                        "indexes": [
                            {
                                "kind": "Range",
                                "dataType": "number",
                                "precision": -1
                            },
                            {
                                "kind": "Hash",
                                "dataType": "string",
                                "precision": -1
                            }
                        ]
                    }
                ]
            }
        }
    }
}

I can not only create the container and specify the now mandatory partition key, but also specify custom indexing policies. Putting this together with the template that I already had for creating a Cosmos DB account, I can now automatically create all the dependencies for my application using the following ARM template:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "discriminator": {
      "type": "string",
      "minLength": 1
    }
  },
  "variables": {
      "accountName": "[concat(parameters('discriminator'), '-adc-demo')]",
      "databaseName": "myDatabase",
      "usersContainerName": "users",
      "customersContainerName": "customers"
  },
  "resources": [
    {
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "name": "[variables('accountName')]",
      "apiVersion": "2016-03-31",
      "location": "[resourceGroup().location]",
      "kind": "GlobalDocumentDB", 
      "properties": {
        "databaseAccountOfferType": "Standard",
        "consistencyPolicy": {
          "defaultConsistencyLevel": "Session",
          "maxIntervalInSeconds": 5,
          "maxStalenessPrefix": 100
        },
        "name": "[variables('accountName')]"
      }
    },
    {
      "type": "Microsoft.DocumentDB/databaseAccounts/apis/databases", 
      "name": "[concat(variables('accountName'), '/sql/', variables('databaseName'))]", 
      "apiVersion": "2016-03-31",
      "dependsOn": [
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/', variables('accountName'))]"
      ], 
      "properties": { 
        "resource": { 
          "id": "[variables('databaseName')]"
        },
        "options": { 
          "throughput": "400" 
        } 
      }
    },
    { 
      "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
      "name": "[concat(variables('accountName'), '/sql/', variables('databasename'), '/', variables('usersContainerName'))]", 
      "apiVersion": "2016-03-31", 
      "dependsOn": [
        "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]"
      ], 
      "properties": { 
        "resource": {
          "id": "[variables('usersContainerName')]", 
          "partitionKey": {
            "paths": [
               "/CustomerId"
            ],
             "kind": "Hash"
          }, 
          "indexingPolicy": {
            "indexingMode": "consistent", 
            "includedPaths": [
              { 
                "path": "/*", 
                "indexes": [
                  { "kind": "Range", 
                    "dataType": "number", 
                    "precision": -1
                  },
                  { 
                    "kind": "Hash", 
                    "dataType": "string", 
                    "precision": -1
                  }
                ]
              }
            ]
          }
        } 
      }
     },
     { 
       "type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers",
       "name": "[concat(variables('accountName'), '/sql/', variables('databasename'), '/', variables('customersContainerName'))]", 
       "apiVersion": "2016-03-31", 
       "dependsOn": [
         "[resourceId('Microsoft.DocumentDB/databaseAccounts/apis/databases', variables('accountName'), 'sql', variables('databaseName'))]"
       ], 
       "properties": { 
         "resource": {
           "id": "[variables('customersContainerName')]", 
           "partitionKey": {
             "paths": [
                "/City"
             ],
              "kind": "Hash"
           }, 
           "indexingPolicy": {
             "indexingMode": "consistent", 
             "includedPaths": [
               { 
                 "path": "/*", 
                 "indexes": [
                   { "kind": "Range", 
                     "dataType": "number", 
                     "precision": -1
                   },
                   { 
                     "kind": "Hash", 
                     "dataType": "string", 
                     "precision": -1
                   }
                 ]
               }
             ]
           }
         } 
       }
      }
  ]
}

I hope you enjoy Cosmos DB database and collection support just as much as I do. Happy coding!
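If you want to give this a spin, deploying the template above takes only two PowerShell commands. The resource group and file names below are just examples, and the template’s discriminator parameter is passed inline:

New-AzureRmResourceGroup "cosmosdb-demo" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName cosmosdb-demo -TemplateFile .\cosmosdb-demo.json -discriminator "mytest"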

In an earlier post on provisioning a Let’s Encrypt SSL certificate to a Web App, I touched upon the subject of creating an RBAC role assignment using an ARM template. In that post I said that I wasn’t able to provision a role assignment scoped to just a single resource (as opposed to a whole resource group). This week I found out that this was due to an error on my side. The template for provisioning a role assignment for just a single resource differs from the one for provisioning a role assignment for a whole resource group.

Here is the correct JSON for provisioning a role assignment on a single resource:

    {
      "type": "Microsoft.Web/sites/providers/roleAssignments",
      "apiVersion": "2015-07-01",
      "name": "[concat(parameters('appServiceName'), '/Microsoft.Authorization/', parameters('appServiceContributerRoleGuid'))]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/Sites', parameters('appServiceName'))]"
      ],
      "properties": {
        "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
        "principalId": "[parameters('appServiceContributorObjectId')]"
      }
    },

As Ohad correctly points out in the comments, the appServiceContributerRoleGuid should be a unique GUID generated by you; it does not refer back to the GUID of any predefined role.
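If you would rather not pass that GUID in as a parameter at all, the built-in guid() template function can generate a stable one for you. A sketch of what the name property could then look like (the seed values are arbitrary, as long as they stay stable per assignment):

"name": "[concat(parameters('appServiceName'), '/Microsoft.Authorization/', guid(resourceGroup().id, parameters('appServiceName'), 'contributor'))]"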

In contrast, below you find the JSON for provisioning a role assignment for a resource group as a whole. To provision a role assignment for a single resource, we do not set a more specific scope, but leave it out completely. Instead, the role assignment has to be nested within the resource it applies to. This is visible when comparing the type, name, and scope properties of both definitions.

    {
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2015-07-01",
      "name": "[parameters('appServiceContributerRoleName')]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/Sites', parameters('appServiceName'))]"
      ],
      "properties": {
        "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
        "principalId": "[parameters('appServiceContributorObjectId')]",
        "scope": "[concat(subscription().id, '/resourceGroups/', resourceGroup().name)]"
      }
    }

Last week I received a follow-up question from a fellow developer about a presentation I did regarding Azure Key Vault and Azure Managed Identity. In this presentation I claimed, and quickly showed, how you can use these two offerings to store all the passwords, keys, and certificates you need for your ASP.NET application in secure storage (the Key Vault), while also avoiding the problem of just getting another new password to access that Key Vault.

I have written a small ASP.NET application that reads just one very secure secret from an Azure Key Vault and displays it on the screen. Let’s dive into the infrastructure and code to make this work!

Infrastructure

Whenever we want our code to run in Azure, we need to have some infrastructure it runs on. For a web application, your infrastructure will often contain an Azure App Service Plan and an Azure App Service. We are going to create these using an ARM template. We use the same ARM template to also create the Key Vault and provide an identity to our App Service. The ARM template that delivers these components can be found on GitHub. Deploying this template would result in the following:

The Azure subscription you are deploying this infrastructure to is backed by an Azure Active Directory. This directory is the basis for all identity & access management within the subscription. The same relation links the Key Vault to that AAD, which allows us to create access policies on the Key Vault that describe which operations (if any) any user in that directory can perform on the Key Vault.

Applications can also be registered in an AAD and we can thus give them access to the Key Vault. However, how would an application authenticate itself to the AAD? This is where Managed Identity comes in. Managed Identity will create a service principal (application) in that same Active Directory that is backing the subscription. At runtime, your Azure App Service will be provided with environment variables that allow you to authenticate without the use of passwords.

For more information about ARM templates, see the documentation on MSDN. However, there are two important parts of my template that I want to share. First, the part that enables Managed Identity on the App Service:

{
  "name": "[parameters('appServiceName')]",
  "type": "Microsoft.Web/sites",
  …,
  "identity": {
    "type": "SystemAssigned"
  }
}

Secondly, we have to give this identity, which is yet to be created, access to the Key Vault. We do this by specifying an access policy on the Key Vault. Be sure to declare a dependsOn for the App Service, so you only reference the identity after it has been created:

{
  "type": "Microsoft.KeyVault/vaults",
  "name": "[parameters('keyVaultName')]",
  …,
  "properties": {
    "enabledForTemplateDeployment": false,
    "tenantId": "[subscription().tenantId]",
    "accessPolicies": [
       {
        "tenantId": "[subscription().tenantId]",
        "objectId": "[reference(concat(resourceId('Microsoft.Web/sites', parameters('appServiceName')),'/providers/Microsoft.ManagedIdentity/Identities/default'), '2015-08-31-preview').principalId]",
        "permissions": {
          "secrets": [ "get" ]
        }
      }
    ]
  }
}

Here I am using some magic (that I just copy/pasted from MSDN) to refer back to the managed identity of the App Service deployed earlier, retrieve its principalId, and use that to create an access policy for that identity.

That is all, so let’s deploy the templates. Normally you would set up continuous deployment using Azure Pipelines, but for this quick demo I used Powershell:

Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
New-AzureRmResourceGroup "keyvault-managedidentity" -Location "West Europe"
New-AzureRmResourceGroupDeployment -TemplateFile .\keyvault-managedidentity.json -TemplateParameterFile .\keyvault-managedidentity.parameters.json -ResourceGroupName keyvault-managedidentity

Now with the infrastructure in place, let’s add the password that we want to protect to the Key Vault. There are many, many ways to do this but let’s use Powershell again:

$password = Read-Host 'What is your password?' -AsSecureString
Set-AzureKeyVaultSecret -VaultName demo4847 -Name password -SecretValue $password

Do not be alarmed if you get an access denied error. This is most likely because you still have to give yourself access to the Key Vault. By default no-one has access, not even the subscription owners. Let’s fix that with the following command:

Set-AzureRmKeyVaultAccessPolicy -ResourceGroupName "keyvault-managedidentity" -VaultName demo4847 -ObjectId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -PermissionsToSecrets list,get,set

Code

With the infrastructure in place, let’s write the application that accesses this secret. I have created a simple ASP.NET MVC application and edited the Home view to contain the following main body. Again, the code is also on GitHub:

<h2>A secret</h2>

@if (ViewBag.IsManagedIdentity)
{
    <p>Got a secret, can you keep it? (...) If I show you then I know you won't tell what I said: @ViewBag.Secret</p>
}
else
{
    <p>Running locally, so no secret to tell</p>
}

Now to supply the requested values, I have added the following code to the HomeController:

public async Task<ActionResult> Index()
{
  var isManagedIdentity = Environment.GetEnvironmentVariable("MSI_ENDPOINT") != null
    && Environment.GetEnvironmentVariable("MSI_SECRET") != null;

  ViewBag.IsManagedIdentity = isManagedIdentity;

  if (isManagedIdentity)
  {
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
    var secret = await keyVaultClient.GetSecretAsync("https://demo4847.vault.azure.net/secrets/password");

    ViewBag.Secret = secret.Value;
  }

  return View();
}

First I check if we are running in an Azure App Service with Managed Identity enabled. This looks a bit hacky, but it is actually the recommended approach. Next, if running with a Managed Identity, I use the AzureServiceTokenProvider (NuGet package: Microsoft.Azure.Services.AppAuthentication) to retrieve an AAD token. In turn, I use that token to instantiate a KeyVaultClient (NuGet package: Microsoft.Azure.KeyVault) and use it to retrieve the secret.

That’s it!

Want to know more?

I hope to write two more blogs on this subject soon: one about using system-to-system authentication and authorization without storing extra secrets in Key Vault, and one about Config Builders, a new development for .NET Core 2.0 and .NET Framework 4.7.1 or higher.

Over the last couple of months I have been coaching a number of PHP teams to help them improve their software engineering practices. The main goals were to improve the quality of the product, ease of delivery and the overall maintainability of the code. If there is one thing that defines maintainable code, in my opinion, it is the existence of unit tests. However, one of the things that proved more difficult than one might expect is to start writing proper unit tests in an existing PHP solution.

In this instance, the teams were using the Laravel framework. However, standard Laravel practices limited the testability of the code created by the teams. I have worked with these teams to make their code more testable to two ends:

  • Improve overall testability by introducing new class design patterns
  • Reduce the duration of tests. Prior to this approach, a lot of tests were implemented as end-to-end, interface-based tests. And boy, are they slow!

After a number of weeks, we saw the first results coming in, so all of this worked out nicely.

The goal of this post is to share the issues found that were preventing the team from proper unit testing and how we got around them.

Issue 1: instantiating a class in a unit test

The first thing we ran into was the fact that it was impossible to instantiate any class from a unit test. There were two reasons for this. The first was that there was actual work done in the constructor of almost every class: calling a method on another class and/or hitting the database.

Next to this, dependencies for any class were not passed in via the constructor, but were created in the constructor using a standard Laravel pattern. The good news here is that Laravel actually provides you with a dependency container. The bad news is that it was often used like this:

class TestSubject {
    public function __construct() {
        $this->someDependency = app()->make(SomeDependency::class);
    }
}

This calls a global, static method app() to get the dependency container and then instantiates a class by type. Having this code in the constructor makes it completely impossible to new up the class from a unit test.

In short, we couldn’t instantiate classes in a unit test due to:

  • Doing work in a constructor
  • Instantiating dependencies ourselves

Solution: Let the constructor only gather dependencies

First of all, calls to other methods or to the database were quite easy to refactor out of the constructors. This is also something that can easily be avoided when creating new classes.

The best way to not instantiate dependencies yourself is to leave that to the framework. Instead of hitting the global app() method to obtain the container, we added the needed type as a parameter to the constructor, leaving it up to the Laravel container to provide an instance at runtime (constructor dependency injection).

public function __construct(SomeDependency $someDependency) {
    $this->someDependency = $someDependency;
}

Now there is still one issue here, and that is that we are depending upon a concrete class, not an abstraction. This means we are violating the Dependency Inversion principle. To fix this, we need to depend on an interface. However, now the Laravel dependency container no longer knows which type to provide to our class when instantiating it, since it cannot instantiate an interface. Therefore, we have to configure a binding that maps the interface to the class.

app()->bind(SomeDependencyInterface::class, SomeDependency::class);
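In a Laravel application, such a binding typically lives in the register() method of a service provider; a minimal sketch, assuming the default AppServiceProvider:

class AppServiceProvider extends ServiceProvider {
    public function register() {
        // Tell the container which concrete class to construct
        // whenever SomeDependencyInterface is requested.
        $this->app->bind(SomeDependencyInterface::class, SomeDependency::class);
    }
}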

Having done this, we can now change our constructor to look as follows.

public function __construct(SomeDependencyInterface $someDependency) {
    $this->someDependency = $someDependency;
}

At this point we have changed the following:

  • No work in constructors
  • Getting dependencies provided instead of instantiating them ourselves.
  • Depending upon abstractions

Mission accomplished! These things combined now allow us to instantiate our test subject in a unit test as follows:

$this->dependencyMock = $this->createMock(SomeDependencyInterface::class);

$this->subject = new TestSubject($this->dependencyMock);

Issue 2: Global static helper methods

Now we can instantiate a TestSubject in a test and start testing it. The second we got to this state, we ran into another problem that was all over the code base: global, static, helper methods. These methods have different sources. They are built-in PHP methods, Laravel helper methods or convenience methods from 3rd parties. However, they all present us with the same problems when it comes to testability:

  • We cannot mock calls to global, static methods, which means we cannot replace their behavior at runtime. As a result, we cannot isolate our TestSubject and end up pulling in real dependencies, dependencies of dependencies, etc…

From here on, I will share (roughly in order of preference) a number of approaches to get around this limitation.

Solution 1: Finding a constructor injection replacement

When starting to investigate these static methods, especially those provided by Laravel, we saw that a lot of them were just short wrapper methods around the dependency container. For example, the implementation of the much-used view() method was this:

public function view($view = null, $data = [], $mergeData = [])
{
    $factory = app(ViewFactory::class);

    if (func_num_args() === 0) {
        return $factory;
    }
    return $factory->make($view, $data, $mergeData);
}

For all these convenience methods, it is straightforward to see that we can easily refactor the calling code from this:

public function __construct() { }

public function index() {
    // … more code
    return view("blade.name", $params);
}

To this:

public function __construct(ViewFactory $viewFactory) {
    $this->viewFactory = $viewFactory;
}

public function index() {
    // … more code
    return $this->viewFactory->make("blade.name", $params);
}

A quick and easy way to remove a decent portion of calls to global functions.

Solution 2: Software engineering tricks

If there is no interface readily available for constructor injection, we can create one ourselves. A common engineering trick is to move unmockable code to a new class. We then inject this wrapper into our subject at runtime. At test time, however, we can mock the wrapper and test our subject as much as possible.

As an example, let’s take the following code:

class Subject {
    function isFileChanged($fileName, $originalHash) {
        $newHash = sha1_file($fileName);
        return $originalHash !== $newHash;
    }
}

Of course we can test this class by letting it operate on a temporary file, but another approach would be to do this:

class Subject {
    private $sha1Hasher;

    public function __construct(ISha1Hasher $sha1Hasher) {
        $this->sha1Hasher = $sha1Hasher;
    }

    function isFileChanged($fileName, $originalHash) {
        $newHash = $this->sha1Hasher->hash($fileName);
        return $originalHash !== $newHash;
    }
}

Maybe not a thing you would do in this specific instance. But if you have code that is more complex and is executing a single call to a global method, this way you can move that call behind an interface and mock it out while testing:

public function testSubject() {
    $hasherMock = $this->createMock(ISha1Hasher::class);
    $hasherMock->method("hash")->with("n/a")->willReturn("123");
    $subject = new Subject($hasherMock);

    $result = $subject->isFileChanged("n/a", "123");

    $this->assertFalse($result);
}

In my opinion, solution 2 is by far a better approach than solutions 3 and 4. However, if you are afraid that adding too many types might clutter your codebase, or that it might reduce the performance of your application, there are two more approaches available. Both have drawbacks, so I would only use them if you see no other way.

Solution 3: Leveraging PHP namespace precedence

If refactoring global static calls in your code to a new class is not an option, and your code is organized into namespaces, there is another way we can mock calls to built-in PHP methods. In the file with our TestClass, we can add a new function with the same name, in a namespace that is closer to the caller.

For example, the following call to file_exists() cannot be mocked out:

namespace demo;

class Subject {
    public function hasFile() {
        return file_exists("d:\bier");
    }
}

As you can see, the class containing the hasFile() method is in a namespace called demo. We can create a new function, also called file_exists(), in that same namespace, just before our TestClass. When executing, the function in the namespace closest to the caller takes precedence.

This means we can mock the call to file_exists() to always return true, as follows:
namespace demo;

function file_exists($fileName) {
    return true;
}

class TestClass {
    public function testWhenFileExists_thenReturnTrue() {
        $result = $this->subject->hasFile();

        $this->assertTrue($result);
    }
}

 

The main drawback of this approach is that it reduces the readability of your code. Also, relying on method hiding for testing purposes might make your code harder to understand for those that do not grasp all the language details.

Solution 4: Leveraging your frameworks and libraries

Finally, your framework might provide its own means for mocking certain calls. In Laravel for example, there is a construct of Facades that you can also use for mocking purposes. Another example is the Carbon datetime convenience library that provides a global static Carbon::setTestNow() method.
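For completeness, a sketch of the Carbon approach (assuming the Carbon library is installed); I show it here only so you know what it looks like:

// In a test: freeze 'now' to a fixed moment.
Carbon::setTestNow(Carbon::create(2018, 1, 1, 12, 0, 0));

// Production code calling Carbon::now() will now receive that fixed moment.
$now = Carbon::now();

// Reset afterwards so other tests are not affected.
Carbon::setTestNow();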

I for one would discourage this approach, as it means you are writing logic that depends on your framework, and you will never be able to switch to another framework without redoing everything. (However… who has ever done that even once?)

My other argument is one of taste: I simply do not like adding methods to production code, only to make it testable. And I have seen misuse of methods intended for tests only in production code as well…

However, if you do not share these feelings, the approach is quite nicely detailed here: https://laravel.com/docs/5.6/facades or here: http://laraveldaily.com/carbon-trick-set-now-time-to-whatever-you-want/

Conclusion

I hope that this blog gives you a number of approaches to make your PHP code more (unit) testable. Because we all know that only code that is continuously tested in a pipeline can be shipped quickly, easily, and often to customers.

Enjoy!

With thanks for proofreading: Wouter de Kort, Alex Lisenkov

Ever wished you would receive a simple heads-up when an Azure deployment fails? Ever troubleshot an issue and looked for the button “Tell me when this happens again”? Well, I just found it.

Yesterday I stumbled across a feature that is new to me (*) and just amazing: Azure activity log alerts. A feature to notify me when something specific happens.

With the introduction of the Azure Resource Manager model, the activity log was also introduced. The activity log is an audit trail of all events that happen within your Azure subscription, either user-initiated or originating in Azure itself. This is a tremendously powerful feature in itself, but it has become even more powerful now: with Azure activity log alerts you can create rules that automatically trigger and notify you when an event is emitted that you find interesting.

In this blog post I will detail two scenarios where activity log alerts can help you out.

(*) It seems this feature was already launched in May this year, according to this Channel9 video

Example: Manage authorizations

Let’s say you are working with a large team on a large project, or on a series of related projects. One thing that you might want to keep tabs on is people creating new authorizations. So let’s see if we can quickly set something up to send me an e-mail whenever this happens.

  1. Let’s start by spinning up the monitoring blade in the Azure portal.
  2. In the monitoring blade the activity log automatically opens up. Here we can look through past events and see what has happened and why. Since we are looking to get proactively informed about any creation events, let’s navigate to Alerts:
  3. In the top of the blade, choose Add activity log alert and the following dialog will open:
  4. Here there are a number of things we have to fill out. As name and description, “A new authorization is created” covers what we are about to do. Select your subscription and the resource group where you want to place this alert. This is not the resource group that the alert concerns; it is where the alert itself lives. As event category we pick “Administrative” and as resource type “Role assignment.” The latter resets all other dropdowns, so we only have to select an operation name. Let’s pick “Create role assignment.”
  5. After selecting what we want to be alerted about, let’s decide how we want to be alerted. This is done via an action group: a group of one or more actions that are grouped under one name and can be reused. Let’s name our action group “StandardActionGroup” and add an e-mail address. This gives us a final result as follows:
  6. Now let’s authorize a new user on a resource:
  7. And hurray, we are notified by e-mail:

Example: Streaming Analytics hiccup

So you have an Azure resource that has some issues. Every now and then it gets into a faulted state or just stops working. Often you will find that this is nicely recorded in the activity log. For example, I have a Streaming Analytics job that faults every now and then. Let’s see how we can get Azure to “tell me when this happens again.”

  1. Go to the activity log of the resource with an error
  2. Open the details of the Warning and find the link to Add activity log alert

  3. The blade to create a new alert opens, with everything prefilled to capture just that specific event. In essence, this allows you to ask Azure to tell you ‘if it happens again.’

Can we automate that?

Finally, as you can see in the image below, every activity log alert is a resource in itself. This means you can see them when you list a resource group, and that you can create them automatically using ARM templates, for example as part of your continuous delivery practice.
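To give you an idea, the alert from the first example would look roughly like this as an ARM template resource. This is a sketch based on the Microsoft.Insights/activityLogAlerts resource type; verify the API version and the action group reference against the current documentation before using it:

{
    "type": "Microsoft.Insights/activityLogAlerts",
    "name": "new-authorization-created",
    "apiVersion": "2017-04-01",
    "location": "Global",
    "properties": {
        "enabled": true,
        "scopes": [
            "[subscription().id]"
        ],
        "condition": {
            "allOf": [
                {
                    "field": "category",
                    "equals": "Administrative"
                },
                {
                    "field": "operationName",
                    "equals": "Microsoft.Authorization/roleAssignments/write"
                }
            ]
        },
        "actions": {
            "actionGroups": [
                {
                    "actionGroupId": "[resourceId('Microsoft.Insights/actionGroups', 'StandardActionGroup')]"
                }
            ]
        }
    }
}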

E-mail sucks, I want to create automated responses

Also possible. You can have a webhook called as part of an action group. This way you can easily hook up an Azure Function to immediately remediate an issue, for example.