So you’ve written Infrastructure As Code, Automated All The Things, and drank deeply of the DevOps kool-aid on your current project – what’s next?

You’ve been reaping the rewards of DevOps on a small scale, and your one-off DevOps effort has earned a lot of positive attention – now management would like you to implement DevOps for all the projects! So how do you spread the DevOps wealth, and what do you need to be aware of?

Delivering DevOps

What is DevOps

For this article, we’ll use “DevOps” as shorthand for the code bits – such as bash/posh/cmd for scripting, TF/CLI for IaC, and YAML for Azure DevOps CI/CD pipelines. Start by identifying the valuable bits from your current DevOps efforts and use that as the basis for what you want to disseminate to all other projects.

Cross-Project DevOps

Not all projects will have the exact same DevOps requirements. Still, over time you will build up a collection of useful scripts and templates that are generic enough to provide value across all software projects in your organization.

But you can’t simply copy/paste these files into every repo, as that would be a massive headache to manage and keep updated. Instead, you’ll want to version and package these scripts so that every project that adopts the enterprise DevOps approach can track and plan for DevOps package updates.

Custom Package Repository

The easiest way to distribute file packages is through a custom package repository. Chances are your software project is already using at least one of NuGet, NPM, Maven, or PyPI. Azure DevOps can host all of these kinds of repositories, which lets you seamlessly distribute your company-proprietary DevOps package without making it publicly available. These custom repositories are also handy as a local cache for public packages.

DevOps Integration

Usually, downloaded packages are not committed to the repo – only the reference to the package – and the build/release agents download them as needed. But DevOps package files should be committed to the repo for a variety of reasons. To do that, include an install script with your package, like this example for an NPM package:

#!/usr/bin/env node

const path = require('path');
const fse = require('fs-extra');
const rootFolder = path.resolve('../..'); // up from "./node_modules/<package>" to the project root
const installFolder = `${rootFolder}/.devops`;
const oldInstallFolder = `${rootFolder}/.devops.backup`;
const nodePkgFolder = path.resolve('.');
const srcFolder = `${nodePkgFolder}/src`;

let oldFolderRenamed = false;

// rename .devops to .devops.backup
if (fse.existsSync(installFolder)) {
    oldFolderRenamed = true;
    if (fse.existsSync(oldInstallFolder)) {
        console.log('removing last [.devops.backup] folder...');
        fse.removeSync(oldInstallFolder);
    }
    console.log('renaming [.devops] as [.devops.backup]...');
    fse.renameSync(installFolder, oldInstallFolder);
}

// copy package src folder to install folder
console.log('installing devops package...');
fse.copySync(srcFolder, installFolder);

// read the version from package.json and save it to the install folder
const packageJson = fse.readFileSync(`${nodePkgFolder}/package.json`, 'utf8');
const pkg = JSON.parse(packageJson);
fse.writeFileSync(`${installFolder}/pkg_version.txt`, pkg.version);

if (oldFolderRenamed) {
    console.warn('Existing devops configuration has been backed up and replaced, please manually merge your configuration back into the new devops package!');
    console.log(`Please read ${nodePkgFolder}/changelog.md`);
} else {
    console.log(`Devops package has been installed, please read ${nodePkgFolder}/readme.md`);
}

This script copies the DevOps package from the node_modules (NPM package cache) directory into the project’s root directory. If the DevOps package directory is already present, it renames the old directory and copies in the new one. It is then trivial to diff the old and new directories for changes and merge them.

When installing with NPM, use the --no-save option (for example, npm install <package-name> --no-save), since we are only using NPM as a downloader/installer and don’t need to save the reference in package.json.

Package Versioning

DevOps package files should be tracked in a version control system like Git. Each feature should be developed in a separate branch and merged via pull request, and each merged PR should produce a new version of the DevOps package. You can then DevOps your DevOps by setting up a CI pipeline that automatically publishes a new package version whenever the master branch changes.
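
For example, the publishing pipeline for the package can stay very small. The sketch below is a hedged example rather than a finished pipeline: Npm@1 and its publish inputs are standard Azure DevOps options, but the feed name is an assumption, and bumping the version in package.json is assumed to happen as part of each PR.

publish-devops-package.yaml (illustrative name)

# Sketch: publish a new version of the DevOps package whenever master changes.
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Npm@1
  displayName: 'Publish the DevOps package to the internal feed'
  inputs:
    command: publish
    publishRegistry: useFeed
    publishFeed: 'my-devops-feed'   # hypothetical Azure Artifacts feed name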

Don’t forget to document! Include a readme for first-time consumers and a changelog for updates.

Adopt and Migrate Incrementally

Generally, DevOps development will follow this kind of flow:

  1. Implement a project-specific DevOps functionality
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Repeat

When integrating an enterprise DevOps package, a similar process can be followed:

  1. Migrate project-specific DevOps functionality to use the enterprise DevOps package
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Merge the generically useful bits into the enterprise DevOps package
  6. Repeat

Standardize and Prevent Making Changes

A critical benefit of the DevOps package approach is that it allows for the standardization of DevOps processes across the enterprise and provides a straightforward way to keep all projects in sync. A bug fix or new process can be quickly rolled out to all consumers of the package.

Standardization could also be accomplished using Azure DevOps Pipelines or Task Groups, but any change to those immediately affects all consumers and can silently break things. If the DevOps package is committed to each project repo instead, those projects are insulated from breaking changes and bugs.

Track Changes and Update Easily

Each project will likely have a separate set of custom configuration files that must be modified from the package baseline to customize the build and release pipelines. It is essential to separate these files and keep track of changes to them after a package update. Conversely, all the other files in the DevOps package should not be modified, to ensure a smooth update. If one of the package files must be modified, the change should be either A) temporary, with the expectation of merging the change back up into the package, or B) copied locally to the custom configuration directory so that it is evident it must be inspected during the package update process.

Enterprise DevOps Principles

To sum everything up, there are several themes here that will ensure a successful enterprise DevOps approach:

  • Consistency – standardize DevOps processes across the enterprise
  • Modularity – design DevOps processes so that each component is focused and reusable – follow DRY and SRP guidelines
  • Resiliency/Stability – make DevOps processes resistant to unexpected changes
  • Traceability – easily understand changes to DevOps processes and merge quickly

Example

Consider this NPM package:

  • scripts/install.js – module install script
  • src/config – all project-specific customization and scripts
  • src/pipelines – enterprise-wide CI/CD pipelines
  • src/scripts – enterprise-wide bash/posh/cmd scripts
  • src/tf – enterprise-wide terraform templates
  • src/Update-DevopsPackage.ps1 – helper script that triggers the download of the latest package
  • changelog.md – description of differences between versions, upgrade notes
  • package.json – NPM publish metadata, contains the package version
  • readme.md – introduction to DevOps package, contains getting started instructions, directory, and file descriptions

And here’s what a consuming project may look like:

  • .devops/config – src/config directory from package
  • .devops/pipelines – src/pipelines directory from package
  • .devops/scripts – src/scripts directory from package
  • .devops/tf – src/tf directory from package
  • .devops/pkg_version.txt – text file written by the install script containing the package version (handy for tracking updates)
  • .devops/Update-DevopsPackage.ps1 – copied from src directory from package
  • src/ – project source code, etc.

I am currently working as a development operations engineer at a client that maintains a large and complex infrastructure with an even more complicated development workflow. Cloud infrastructure had already been provisioned, and several development cycles had been completed by the time I joined the team a couple of years ago. As a result, I came into an environment with an already-established development workflow and barely any infrastructure as code (IaC) practices in place.

What is unique about this client is that it maintains six separate web applications, each designated for a different area of the United States; the backends vary from application to application, but the front ends are mostly similar. To avoid a maintenance nightmare, we use code from one repo to deploy the main UI of all six applications; the same code is deployed to each application but configured to run differently through a master configuration file. Each web application is composed of several sub-applications, each with its own codebase.

One of my responsibilities is to spin up and maintain application stacks for new development teams that are working on a specific feature for this client. That feature may or may not be utilized by all six web applications. For this team to develop rapidly, they need a stable testing environment, equipped with continuous delivery, to do their work.

This “application stack” is comprised of:

  • 1 Azure App Service Plan
  • App Services (1 per application, up to 6)
  • 1 Redis Cache
  • 1 Storage Account

These are Microsoft Azure constructs and are not essential to understand in the context of this post. Keep in mind that 6 separate applications utilize a set of shared resources.

Since the DevOps team at this particular client is small, we were only able to create a script to deploy an application stack for a specific development effort; we did not get the chance to implement any strategies for maintaining these stacks once they were deployed. As a result, applying configurations across the 25-30 application stacks that we maintain has turned out to be a logistical nightmare.

Model 1 Main App Repo

The diagram above represents a high-level overview of a single application stack that we need to maintain; the diagram is scaled down from 6 applications to 3 applications for clarity.

The current script we use to provision application stacks does the following:

  1. Creates all Azure resources
  2. Performs KeyVault configuration
    1. Enables managed identity on all app services
    2. Adds access policy to the key vault for all app services
    3. Adds key vault references for secret app settings on all app services
  3. Creates 1 Azure DevOps release pipeline to deploy code to all app services; this is done by cloning a base template for a release pipeline that exists before the script is run.

The Problem

Although the deployment script that is currently in use saves time, it falls short on a few things:

  1. If we want to make configuration changes across all the application stacks, we would have to make edits to the deployment script and rerun it on each application stack. This process is entirely too manual and can be improved.
  2. If we want to change the mechanics of how the underlying deployment pipeline functions across all the application stacks, we have to make edits on each deployment pipeline that is tied to a given application stack. This is too tedious and leaves lots of room for error; we initially mitigated some of this by utilizing Azure DevOps task groups.
  3. Configuration drift is widely prevalent; because we do not have an easy way to maintain all of the application environments across the board, minor configuration changes during development are difficult to track and often fail to propagate to our main application stacks.

The Solution: TEMPLATE YOUR LIFE

Azure YAML Templates

This client is relatively young in terms of their cloud journey, especially with Azure and Azure DevOps. They currently rely heavily on the Release Pipeline user interface within Azure DevOps to deploy code to their applications. In recent years, Azure has been subtly urging teams to utilize multi-stage YAML templates instead of the Release Pipeline user interface, bringing an “infrastructure as code” mindset to the continuous delivery process. With this mindset, there is no difference between “Build” pipelines and “Release” pipelines; there is only a pipeline, in which you can perform any task (build, test, deploy, etc.).
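
To make that mindset concrete, here is a minimal sketch of a single multi-stage YAML pipeline that both builds and deploys; the stage names, pool, and script contents are placeholders rather than anything this client actually uses.

trigger:
- main

stages:
- stage: Build
  jobs:
  - job: build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "build and test the application here"    # placeholder

- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: deploy
    environment: dev          # placeholder Azure DevOps environment name
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the application here"      # placeholder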

I encourage you to read more about Azure DevOps YAML templates. I’ve included some relevant links below:

Azure DevOps YAML Templates
Azure DevOps Multistage YAML UI Experience
YAML Schema

Given the problem statement above, there is a large need to develop a framework and process around maintaining the cloud infrastructure and its respective continuous delivery process that is easy to manage and propagate changes through.

This is where Azure DevOps YAML templates become useful.

All you need to create a pipeline in Azure DevOps is a YAML file, and this YAML file can exist in any code repository. Once this file is in a code repository, you can use the Azure DevOps UI to create a pipeline from it. While you can run the pipeline from the Azure DevOps UI, the pipeline’s changes are maintained and versioned within the YAML file itself, just like any application code.

This YAML file can inherit and reference another YAML file that exists in other code repositories.

Template Repository

I am currently developing a code repository that will contain:

  1. Azure ARM templates to version app service configurations (including virtual/sub applications)
  2. Deployment script to deploy ARM templates mentioned above
  3. Azure DevOps YAML files that will function as “release pipelines”:
    1. Create / Refresh shared resources
    2. Create / Refresh app services
    3. Deploy code to app services

When a new workstream or development effort begins, all they need to do is create a simple YAML file that extends from the main (release.yaml) template file in the repo mentioned above (see azurepipeline.yaml below). Once that is done, the first time the development team pushes code to a specified branch, they will be equipped with a new environment with their code changes.

Hierarchy of the Templates

The diagram above represents the hierarchy of the templates in the code repository mentioned above. You can see there are only 7 YAML template files to maintain. The azurepipeline.yaml file inherits these template files. This helps address the challenge mentioned above related to the daunting task of maintaining 25+ release pipelines; changes made to any of the template files will propagate to any azurepipeline.yaml files that inherit from the release.yaml file.

The idea is that the azurepipeline.yaml file can be imported into any repo or branch. This file is relatively simple:

azurepipeline.yaml

# run on schedule instead of utilizing ci triggers
name: $(Date:yyyyMMdd)$(Rev:.r)
trigger: none
extends:
  template: release.yaml@templates

The release.yaml file that the azurepipeline.yaml file extends from looks similar to the one below:

release.yaml

resources:
  # script and template reference
  repositories:
    - repository: templates
      type: git
      name: template-repo
  pipelines:
    - pipeline: main
      project: ExampleProject
      source: YAML\main
      branch: stage
    - pipeline: subapp1
      project: ExampleProject
      source: YAML\subapp1
      branch: stage
    - pipeline: subapp2
      project: ExampleProject
      source: YAML\subapp2
      branch: stage
stages:
- stage: SHARED
  dependsOn: []
  displayName: refresh shared resources
  jobs: 
  - job: refresh_shared_resources
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - template: templates/update-infrastructure.yaml@templates
        parameters:
          sharedOnly: true
- stage: APP1
  dependsOn: ['SHARED']
  variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app1'
  displayName: app1 
  jobs: 
  - job: REFRESH_APP1
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - template: templates/update-infrastructure.yaml@templates
        parameters:
          sharedOnly: true
          appName: app1
  - template: templates/app1.yaml@templates

App stages are created for each app (not shown).

The app template files referenced by the release.yaml file look similar to the file below:

app1.yaml

jobs:
- deployment: app1
  dependsOn: REFRESH_APP1
  displayName: "DEPLOY APP1"
  environment: stage
  pool:
    vmImage: windows-latest    
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - template: configure-app.yaml@templates
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: false
            appName: 'ui'
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: true
            appName: 'subapp1'
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: true
            appName: 'subapp2'

Take note of the different steps used to deploy an application: there is a configuration step, followed by a deployment step for each sub-application, each reusing the same Azure YAML template file with different parameters.

The release.yaml file results in a multi-stage YAML pipeline that looks like the one below:

Release.yaml file results

Resources

The resources section of the template defines what resources your pipeline will use; this is analogous to the artifacts section in the release pipeline UI view. You can only have the resources section declared once in your series of templates for a given pipeline. In the example above, it is defined in the release.yaml file, the template that the other pipelines extend from.

In this example, the resource section references the template repository itself and other build pipelines that produce code artifacts that we will need to deploy to our applications.

Defining the resources section in the base template (release.yaml) allows us to abstract the choice of artifact sources away from the end user. This is advantageous if we want to add more sub-applications to our release pipeline; we would only need to change the resources section of the release.yaml file, and the changes would propagate to all inherited/extended pipelines.

At the client I work with, this is problematic. In the solution above, all pipelines that extend from release.yaml (which contains the resources section) are limited to the specific artifacts AND the branches they are set to pull from, as defined in the base template (release.yaml). We consistently have project teams that need sub-application artifacts from the specific branches their development work lives on. To solve this, we moved the resources section into the extended template (azurepipeline.yaml). However, this isn’t optimal either, because we would still need to update every extending azurepipeline.yaml file if we wanted to add to the resources section across all application stacks.

As far as I know, there is no way to use pipeline variables or template expressions to dynamically determine what resource artifacts are needed, nor to keep the resources section in the base template and simply override it within the extended templates.
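
For illustration, here is a hedged sketch of the workaround described above, with the resources section moved into the extended template (azurepipeline.yaml); it assumes the resources section has been removed from release.yaml, reuses the repository and pipeline names from the examples in this post, and uses a hypothetical team branch.

azurepipeline.yaml

name: $(Date:yyyyMMdd)$(Rev:.r)
trigger: none

resources:
  repositories:
    - repository: templates
      type: git
      name: template-repo
  pipelines:
    - pipeline: subapp1
      project: ExampleProject
      source: YAML\subapp1
      branch: feature/my-team    # hypothetical team branch instead of stage
    # ...remaining pipeline resources omitted

extends:
  template: release.yaml@templates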

Dynamic Insertion

Currently, we must maintain a manifest that describes the relationship between each application and its respective sub-applications. For example, app1 could have sub-applications subapp1 and subapp2, and app2 could have sub-applications subapp1 and subapp3 (or any combination of sub-applications). We utilize a JSON file that defines the mappings between applications and sub-applications. This JSON file is parsed in the deployment script to ensure the correct sub-applications exist before code deployment; in Azure, a sub-application must exist before you can push code to it. As a result, we also need to maintain the sub-application list in each of the different YAML step templates for each application. At this point, I am unaware of an elegant way to iterate over an Azure DevOps pipeline variable to create steps dynamically.
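
One partial building block worth noting: Azure Pipelines does support compile-time iteration over template parameters with the ${{ each }} expression (though not over runtime pipeline variables), so a list of sub-applications passed as a parameter can be expanded into steps. The following is a hedged sketch, assuming a hypothetical subApps parameter and reusing the deploy-app.yaml template referenced elsewhere in this post.

parameters:
- name: subApps
  type: object
  default: ['subapp1', 'subapp2']    # hypothetical sub-application list for one app

steps:
- ${{ each app in parameters.subApps }}:
  - template: deploy-app.yaml@templates
    parameters:
      isVirtualApp: true
      appName: ${{ app }}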

Variable Templates

Template expressions can be applied to variables. This was extremely useful in the context of the problem I was trying to solve. Each stage in the pipeline described above is for a particular application with a distinct name, and this name is used to determine several properties for its respective pipeline stage. For example, the app1 pipeline stage uses the site name/URL app1.azurewebsites.net, and the app2 stage uses the site name/URL app2.azurewebsites.net. Both site names follow the same naming convention, which makes this a great use case for a variable template.

Here is the variable template I used:

appvars.yaml

parameters:
  appName: ""
  deployToSlot: false
variables:
  siteName: "$(tier)-$(region)-$(app)-${{parameters.appName}}"
  ${{ if eq(parameters.deployToSlot, true) }}:
    corsOrigin:"https://${{parameters.appName}}-$(slotName).azurewebsites.net"
  ${{ if eq(parameters.deployToSlot, false) }}:  
    corsOrigin: “https://${{parameters.appName}}.azurewebsites.net"
  appName: ${{parameters.appName}}
  template: "$(System.DefaultWorkingDirectory)/arm_templates/azuredeploy.json"
  virtualApps: "$(System.DefaultWorkingDirectory)/manifest.json"

You can see that I’ve included a parameters section that takes in appName as a parameter. In the release.yaml file, this parameter is applied like so:

variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app1'

This allows us to cut down on repeated code by using an extendable template for the variables needed for each deployment stage. It is essential to understand how variables and parameters work in Azure pipelines and the different ways they can be expressed in your YAML.
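
As a quick reference, here is a hedged sketch of the syntaxes involved; the names and values are placeholders.

parameters:
- name: appName                        # template parameter, resolved when the YAML is compiled
  type: string
  default: 'app1'

variables:
  tier: 'dev'                          # ordinary pipeline variable
  tierAtRuntime: $[ variables.tier ]   # runtime expression, evaluated when the run starts

steps:
- script: echo "${{ parameters.appName }}"   # template expression, expanded at compile time
- script: echo "$(tier)"                     # macro syntax, replaced just before the task runs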

Check out these resources:
YAML Variables Documentation
YAML Parameters Documentation
Developer Community Forum Post

Conclusion

Development operations decisions should be considered during architecture design. In the scenario described above, the architecture I inherited did not foster the most efficient development and deployment workflow. For example, the decision was made to host sub-applications on a single app service instead of giving each its own app service. This limited us to deploying the individual applications in series, as you can only run one deployment at a time to a given app service.

Implementing infrastructure as code in the continuous integration and delivery process can be just as important as implementing it for your existing infrastructure.

If you are working on a project with a complicated development lifecycle and multiple teams needing distinct environments, it becomes increasingly important to formulate a strategy around maintaining the various environments. Environment creation and code deployment should be as automated as possible, and applying configurations across all environments should be an easy task.

Have you been in a situation where an Azure Resource Manager (ARM) template with Azure Blueprint is the only option available for Infrastructure as Code (IaC) deployment? Have you realized that certain operations during deployment are not possible with an ARM template? Have you been in a situation where PowerShell or Azure CLI can get the job done, but there is no way to inject that script into the ARM template? Maybe you have felt ARM-disabled at times. If you answered yes to at least one of these situations, this blog is for you. Let’s dive into a cool feature, ‘Deployment Scripts in ARM Templates,’ which helped me overcome these hurdles!

Azure managed disks by default have Server-Side Encryption (SSE) with a Platform-Managed Key (PMK), identified as SSE + PMK. I had a requirement to encrypt VMs (Windows or Linux) with either SSE or Azure Disk Encryption (ADE) with a Customer-Managed Key (CMK). CMK provides an additional security layer, as the customer can manage the keys and rotate them periodically. Both types of encryption require an RSA-based Key Vault key. While you can implement SSE and ADE with CMK using PowerShell and Azure CLI, I only had the ARM template deployment option. An ARM template has the functionality to create key vault secrets but cannot create key vault keys. For SSE encryption, a ‘disk encryption set’ needs to be created. In an automated deployment, the key vault key and disk encryption set must already exist so that the virtual machine deployment can consume the key vault key to encrypt the VM and OS/data disks.

The following picture shows default encryption on a VM managed disk – SSE with PMK (by default):

VM Disk

Choosing an encryption type is based on customer requirements. Azure offers SSE + PMK as a default feature, which provides encryption at rest. Either SSE + CMK or ADE + CMK can be applied on top of the default encryption. For more information on SSE and ADE, please read this great blog post. It explains the fundamental differences between these types of encryption, when to choose one over the other, and the caveats to watch out for after the encryption is applied.

Microsoft explains the difference between SSE and ADE by stating that Azure Disk Encryption leverages the BitLocker feature of Windows to encrypt managed disks with Customer-Managed Keys within the guest VM. Server-Side Encryption with Customer-Managed Keys improves on ADE by enabling you to use any OS type and images for your VMs by encrypting data in the Storage service.

The following screenshot shows Managed disk SSE + CMK encryption implemented via ARM template at the time of VM creation using Disk Encryption Set:

SSE + CMK encryption

The following screenshot shows Managed disk ADE + CMK encryption implemented via ARM Custom Extension:

ADE + CMK

The following screenshot shows ARM Custom Extension for ADE + CMK encryption:

ARM Custom Extension

The following screenshot shows how Key Vault secret is used by writing ‘Encrypted BEK’ for ADE + CMK encryption:

Key Vault secret

While I was learning more about disk encryption, I was still contemplating disk encryption options in the ARM template. Then the Microsoft ARM template team announced a cool new feature: deployment scripts! This was announced at the MS Build event, which you can view here for more information. Deployment scripts are a new resource in ARM that can run PowerShell and Azure CLI scripts within the ARM template! The feature came just in time to resolve my ARM deployment impediments. The implementation creates a storage account to copy the script from the ARM template into a file share, and runs an Azure container instance to execute the script. A user-assigned managed identity is created, given permission to the resource group, and added to the key vault access policies; the script is executed as this user-assigned managed identity. At this time, no system-assigned identity is supported. The deployment scripts resource is available in both Azure public and government clouds, wherever Azure container instances are available. Both the storage account and the container instance are deleted after the deployment scripts resource deploys successfully.

The following screenshot is the deployment scripts ARM template resource:

Deployment Scripts ARM template

The following screenshot shows the container instance and storage account resources used by deployment scripts; they are deleted after successful deployment script execution. The user-assigned managed identity is created in the ARM template to execute the deployment script. The deployment script creates the Key Vault, the keys in the Key Vault, and the Disk Encryption Set.

Container Instance

Additional features in deployment scripts resource:

  • The scripts can be either embedded inline or referenced from a location accessible from the deployment.
  • The output from the deployment scripts resource can be consumed/referenced by other resources in the deployment.
  • An existing storage account can be referenced, and that storage account will be used instead of creating a temporary storage account.
  • Any task that can be done via PowerShell or Azure CLI can be done in deployment scripts; for example, a storage account can be encrypted with CMK via PowerShell in a deployment script.

Security considerations:

  • Deployment scripts create a storage account with the Standard_LRS SKU; if a geo-redundancy policy is enforced, the policy may flag the account and report a violation. Since the storage account is deleted once the deployment scripts are done, it will not remain non-compliant.
  • The storage account access should be public when using an existing storage account for the deployment script. This allows the script to be copied to the file share and gives the container instance permission to execute the script.
  • The Key Vault firewall needs to be turned off for the deployment scripts to be able to perform operations on key vault keys and certificates, but the firewall can be re-enabled from PowerShell after all the work is done.

The following screenshot shows ARM Template Key vault resource, no firewall restriction:

ARM Template

In the following screenshot, I enable firewall access to the key vault in PowerShell script after the tasks are done:

Enable Firewall access

ARM template for SSE + CMK disk encryption:

SSE + CMK is applied when the disk-encryption-type parameter is set to ‘SSE’. If it is ‘ADE’ then no SSE is applied.

SSE + CMK is applied

ARM Template for ADE + CMK Disk Encryption using VM Extension

ARM Template for ADE

In the code example, the newly created Azure key vault key is saved to a secret so that it can be accessed from an ARM template. ARM does not have the functionality to access key vault keys, but it can access secrets using a template reference parameter, like below.

Template Reference parameter

Instead of using secrets, another approach is to write the key vault key (KeK ID) as deployment scripts output and use that output in the VM ARM template.

The following screenshot shows how to write output from deployment scripts resource in keyvault.json:

Write up output

The following screenshot shows how to consume the output written in deployment scripts resource in vm.json for KeK URL:

Consume the output

For full code access to try using ‘deployment scripts’ in an ARM template to create Key vault keys, disk encryption set, and to encrypt the VM disks with SSE or ADE, please follow the link to get the code.

Once downloaded, follow the steps to test in your environment, Azure Cloud or Azure Government. Update the parameters before deploying:

Connect-AzAccount -Tenant "000000-000-0000-0000" -SubscriptionId "000000-000-0000-0000" -EnvironmentName AzureCloud

# Create Resource Group
$rg = New-AzResourceGroup -Name "deploy-script-rg" -Location "eastus2"

# Create Key Vault with ‘deployment scripts’
New-AzResourceGroupDeployment -ResourceGroupName $rg.ResourceGroupName -TemplateFile "C:\deploymentScriptsARM\keyvault.json" -TemplateParameterFile "C:\deploymentScriptsARM\keyvault.parameters.json"

# Create Virtual Machine with disk encryption
New-AzResourceGroupDeployment -ResourceGroupName $rg.ResourceGroupName -TemplateFile "C:\deploymentScriptsARM\vm.json" -TemplateParameterFile "C:\deploymentScriptsARM\vm.parameters.json"

Additional ARM Templates references:

Happy ARM Templating for IaC!!!

In this post, I will show you how DevOps practices can add value to a variety of Office 365 development scenarios. The practices we will discuss are Infrastructure as Code, Continuous Integration, and Continuous Delivery. Advances in DevOps and the SharePoint Framework (SPFx) have changed the way we develop software and have improved our efficiency.

Infrastructure as Code (IaC)

Practicing IaC means that the infrastructure your applications depend on is created and maintained by code that is source controlled, tested, and deployed to production much like software. When discussing IaC, we’re typically talking about provisioning resources to a cloud provider. In our case, the “infrastructure” is Office 365 – a SaaS product with extensive customization and configuration options.  

While you could manage your O365 tenant with PowerShell, the code-centric and template-based PnP Provisioning Framework aligns better with this practice because: 

  1. Using the framework’s declarative XML syntax, you describe what you want to exist rather than writing code to manage how it gets created.
  2. It is easier for developers to run idempotent deployments to enact the desired state of your Office 365 tenant.  

While originally developed to support SharePoint Online and on-premise deployments, you can see in its latest schema that it has expanded to support Microsoft Teams, OneDrive, and Active Directory.  

Continuous Integration (CI) 

The practice of continuously integrating first means that your team has established the habit of frequently merging small batches of changes into a central code repository. Upon that merge, we automatically build and test the code to quickly identify bugs and quality issues.  

SharePoint Framework is a commonly used tool to extend the capabilities of SharePoint Online and on-premises SharePoint. Much like the Provisioning Framework, SharePoint Framework is expanding to support other Office 365 services. You can currently use it to develop for Microsoft Teams and will soon be able to use it to develop Office Add-Ins.

Azure DevOps is a one-stop-shop service that provides everything you need throughout the software development lifecycle. For example, your team can version control your project’s source code in Repos. After merging changes, use Pipelines to trigger a CI process that runs the build and test tasks of your SharePoint Framework solution.
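
A CI pipeline for an SPFx solution can stay small. The following is a hedged sketch rather than the pipeline from the example solution: the tasks are standard Azure DevOps tasks, but the Node version, trigger branch, paths, and artifact name are assumptions.

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'                      # SPFx version requirements vary; adjust as needed
- script: npm ci
  displayName: 'Install dependencies'
- script: npm test
  displayName: 'Run Jest tests'
- script: |
    npx gulp bundle --ship
    npx gulp package-solution --ship
  displayName: 'Bundle and package the solution'
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: 'sharepoint/solution'     # default output folder for the .sppkg file
    artifactName: 'spfx-package'             # hypothetical artifact name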

Continuous Delivery (CD)

Continuous Delivery, the practice of running automated deployments through a sequence of environments, starts after a completed CI process. Azure DevOps Pipelines will again be the tool of choice to execute the deployment procedures against each environment.
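
Continuing the sketch above, each environment becomes its own deployment stage; the stage and environment names below are assumptions, and the deploy steps are placeholders for the provisioning and app-install work described later in this post.

stages:
- stage: Build
  jobs:
  - job: build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "build, test, and package (see the CI sketch above)"    # placeholder

- stage: DeployTest
  dependsOn: Build
  jobs:
  - deployment: deploy_test
    environment: 'o365-test'                 # hypothetical Azure DevOps environment
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "apply the PnP template and install the .sppkg here"    # placeholder

- stage: DeployProd
  dependsOn: DeployTest
  jobs:
  - deployment: deploy_prod
    environment: 'o365-prod'                 # hypothetical Azure DevOps environment
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "apply the PnP template and install the .sppkg here"    # placeholder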

Example Solution

A solution demonstrating how to use the technologies and practices described above is available on the Applied Information Sciences GitHub account. The result is a pipeline capable of receiving frequent changes to O365 configuration and SPFx applications from one or many developers, verifying the quality of the change, and deploying it to a series of environments.

Dev Tenant Diagram

I encourage you to explore the source code using the following summary as a guide. You’ll find the solution organized into three areas – SPFx, Provisioning, and Pipeline.

SPFx

A simple hello world web part was created using the Yeoman generator. Jest was added to test the code. NPM and gulp scripts are used to build and package the source code, which produces an .sppkg file.

Provisioning

The PnP Provisioning Template XML file found here defines the desired state of the target tenant. The following is the desired state:

  1. Install the SPFx App into the tenant App Catalog.
  2. Create a site collection that will host our web parts page.
  3. Install the SPFx App to the Site Collection.
  4. Create a page that will host our web part.
  5. Add the web part to the page.

By using parameters for the tenant URL and site owner, the same template can be deployed to multiple environments. A PowerShell build script bundles the template and all required files, such as the SPFx sppkg file, into a single pnp file ready for deployment.

Pipeline

A multi-stage YAML pipeline defined in the Pipeline folder of the example solution runs the following process:

  1. Build, test, and package the SPFx and Provisioning Template source code.
  2. Deploy the prerequisite SharePoint infrastructure to the tenant if it does not already exist.
  3. Install and configure the SPFx web part.
  4. Repeat #2 and #3 for all environments.

Build Process Diagram

Secret variables, such as the username and password used to connect to the tenant, are only referenced in the pipeline. The values are set and encrypted in the Azure DevOps pipeline editor.

Variables Diagram with Passwords

Conclusion

In the not-too-distant past, it was high effort to write unit tests for a SharePoint solution, and most deployments were manual. In this post, I have shown you how advancements in the platform and tooling have changed this. The mentality, practices, and tools brought by DevOps can improve the pace and quality of any software development and infrastructure management project, including projects building upon Office 365.

Define cloud apps and infrastructure in your favorite language and deploy to any cloud with Pulumi.

Pulumi logo

If you search the Internet for Infrastructure-as-Code (IaC), it’s pretty easy to come up with a list of the most popular tools: Chef, Ansible, Puppet, Terraform…and the freshman to the IaC world: PULUMI.

It’s 4 a.m. and the production server has gone down. You can’t keep calm?

Sure, how tough is it? Except that you’ll probably need to recall what you did a year ago to set up your environment, then desperately try to figure out what you’ve installed, implemented, or configured since. Finally, you gather up all your findings and try to closely replicate the environment.

Wouldn’t it be nice to have something that manages all this configuration for you? No, there aren’t robots coming to take over the DevOps team yet. I’m talking about using Infrastructure-as-Code to automatically and consistently manage infrastructure configuration.

What is Infrastructure as Code (IaC)?

As the name suggests, Infrastructure-as-Code is the concept of managing your operations environment in the same way you manage applications or other code.

Infrastructure as code simply means to convert your infrastructure into code, where it is managed by some kind of version control system (e.g., Git), and stored in a repository where you can manage it similar to your application.

Pulumi: the new IaC tool

While learning Azure, I tried implementing IaC with Azure Resource Manager templates (aka ARM templates). For this, I learned PowerShell and wrote several templates using it. As a developer, PowerShell isn’t the language I use on a daily basis to write my code, but I use JavaScript abundantly for many of my projects.

Then the internet community whispered about Pulumi.

I’ve tried my hand at Pulumi and the experience has been very enlightening, so I’m sharing some of the more important and interesting findings with you all.

Pulumi is a multi-language and multi-cloud development platform.

Pulumi supports all major clouds — including Amazon Web Services (AWS), Azure and Google Cloud, as well as Kubernetes clusters. It lets you create all aspects of cloud programs using real languages (Pulumi currently supports JavaScript, TypeScript, and Python, with more languages supported in the future) and real code, from infrastructure on up to the application itself. Just write programs and run them, and Pulumi figures out the rest.

Using real languages unlocks tremendous benefits:

  • Familiarity: no need to learn new bespoke DSLs or YAML-based templating languages.
  • Abstraction: build bigger things out of smaller things.
  • Sharing and reuse: we leverage existing language package managers to share and reuse these abstractions, either with the community, within your team, or both.
  • Full control: use the full power of your language, including async, loops, and conditionals.

My favorite things about Pulumi

  1. Multi-Language and real language: Using general-purpose programming languages reduces the learning curve and makes it easier to express your configuration requirements.
  2. Developer friendly and easily configurable: Pulumi bridges the gap between Development and Operations teams by not treating application code and infrastructure as separate things. Developers can easily list out dependencies in the package.json file. The snippet below illustrates this:
{
   "name": "azure-javascript",  // name of the Pulumi project
   "main": "index.js",          // entry point of the Pulumi program
   "dependencies": {            // dependencies (with versions) to be installed with NPM
       "@pulumi/pulumi": "latest",
       "@pulumi/azure": "latest",
       "azure-storage": "latest",
       "mime": "^2.4.0"
   }
}

A YAML configuration file is created when we initialize the Pulumi stack; it holds the parameters required for the program, like credentials, location, etc.
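
For example, a per-stack configuration file might look like the hedged sketch below; it is typically created by pulumi stack init and populated with pulumi config set, and the exact keys depend on your program and providers (the values here are illustrative).

Pulumi.dev.yaml

config:
  azure:location: EastUS                # default region for the Azure provider
  azure-javascript:environment: dev     # a project-level setting (hypothetical key)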

  3. Reusable Components: Thanks to having a real language, we can build higher-level abstractions.

Below is one of my example code snippets using a Pulumi component that creates an instance of an Azure resource group to be used in other programs. You can find the full source code that provisions an Azure Load Balancer on GitHub.

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

class ResourceGroup extends pulumi.ComponentResource {
    constructor(resourceGroupName, location, opts) {
        super("az-pulumi-createstorageaccount:ResourceGroup", resourceGroupName, {}, opts);

        console.log(`Resource Group ${resourceGroupName} : location ${location}`);

        // Create an Azure Resource Group as a child of this component
        const resourceGroup = new azure.core.ResourceGroup(resourceGroupName,
            {
                location: location,
            },
            {
                parent: this,
            }
        );

        // Expose the generated resource group name and location as component properties
        this.resourceGroupName = resourceGroup.name;
        this.location = location;

        // For dependency tracking, register output properties for this component
        this.registerOutputs({
            resourceGroupName: this.resourceGroupName,
        });
    }
}

module.exports.ResourceGroup = ResourceGroup;


This class can be instantiated as below:

// import the class 
const resourceGroup = require("./create-resource-group.js");


// Create an Azure Resource Group
// Arguments : Resource group name and location
let azureResourceGroup = new resourceGroup.ResourceGroup("rgtest", "EastUS");

  4. Multi-Cloud: Pulumi supports all major clouds — including AWS, Azure and Google Cloud, as well as Kubernetes clusters. This delivers a consolidated programming model and tools for managing cloud software anywhere. There’s no need to learn three different YAML dialects, and five different CLIs, just to get a simple container-based application stood up in production.

The code below uses a single Pulumi program to provision resources in both AWS and GCP (Google Cloud Platform). The example is in TypeScript and requires installing the @pulumi/aws and @pulumi/gcp packages from NPM.

import * as aws from "@pulumi/aws";
import * as gcp from "@pulumi/gcp";

// Create an AWS resource (S3 Bucket)
const awsBucket = new aws.s3.Bucket("my-bucket");

// Create a GCP resource (Storage Bucket)
const gcpBucket = new gcp.storage.Bucket("my-bucket");

// Export the names of the buckets
export const bucketNames = [
awsBucket.bucket,
gcpBucket.name,
];

Pulumi ensures that resources will be created in both clouds. Let’s take a look at how Pulumi creates the plan for both clouds and deploys the resources to the respective clouds.

Previewing update (multicloud-ts-buckets-dev):

Type Name Plan
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev create
+ ├─ gcp:storage:Bucket my-bucket create
+ └─ aws:s3:Bucket my-bucket create

Resources:
3 changes
+ 3 to create

Do you want to perform this update? yes
Updating (multicloud-ts-buckets-dev):

Type Name Status
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev created
+ ├─ gcp:storage:Bucket my-bucket created
+ └─ aws:s3:Bucket my-bucket created

Outputs:
bucketNames: [
[0]: "my-bucket-c819937"
[1]: "my-bucket-f722eb9"
]

Resources:
3 changes
+ 3 created

Duration: 21.713128552s

The outputs show the name of the AWS and GCP buckets respectively.

Another scenario would be to create a storage account and S3 object in Azure and AWS respectively using Pulumi.

// Creating a storage account in Azure

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

const storageAccount = new azure.storage.Account(storageAccountName, {
    resourceGroupName: rgName,
    location: rgLocation,
    accountTier: "Standard",
    accountReplicationType: "LRS",
});

// Creating an S3 bucket in AWS

const pulumi = require("@pulumi/pulumi");
const aws = require("@pulumi/aws");

const siteBucket = new aws.s3.Bucket("my-bucket", {
    website: {
        indexDocument: "index.html",
    },
});

Pulumi enables you to mix and match these cloud resources inside of the same or different program or file.

  5. Stacks: A core concept in Pulumi is the idea of a “stack.” A stack is an isolated instance of your cloud program whose resources and configuration are distinct from all other stacks. You might have a stack each for production, staging, and testing, or perhaps for each single-tenanted environment. Pulumi’s CLI makes it trivial to spin up and tear down lots of stacks.

Closing Thoughts

I would like to close this post with a statement: the internet community has been calling this a Cloud Renaissance for DevOps and developers. Building powerful cloud software will be more enjoyable, more productive, and more collaborative for developers. Of course, everything comes with a cost: after exploring, I found that Pulumi lacks some documentation. Besides this, for developers to write IaC, a deep understanding of infrastructure is a must.

I hope that this post has given you a better idea of the overall platform, approach, and unique strengths.

Happy Puluming 🙂