What is Managed Identity?

The password lifecycle can be tedious. Between storing passwords securely, rotating them regularly, and ensuring their complexity, they add a hefty burden to IT teams. Enter managed identities, a free feature that lets Azure resources authenticate to services that support Azure Active Directory (AAD) authentication without storing any credentials. There are two types of managed identities:

  • System-assigned: An identity is created within Azure AD and is tied to the lifecycle of that specific resource. Only that resource can use the identity; once that resource is deleted, so is the identity.
  • User-assigned: You can create your own managed identity, which can be assigned to multiple resources. You must maintain the lifecycle of that identity, though. When a resource that uses a user-assigned identity is deleted, the identity is not deleted from Azure AD.

For this article’s purposes, we will focus on system-assigned identities. For more information, check out Microsoft’s managed identities for Azure resources article. If you’re interested in learning more about creating a password-less strategy with Microsoft security, check out Derek Morgan’s article.

Setting Up the Virtual Machine with Managed Identity

The scope of this post will not cover how to set up an Azure VM. For more information on that, you can find instructions at Microsoft Learn. System-assigned managed identity can be configured through a variety of methods.

Pre-VM Creation

When creating a VM in the Azure Portal, the Management tab will have an ‘Identity’ checkbox that should be selected. This will create the Azure AD identity without any permissions.

Pre VM Creation

Post-VM Creation

If you already have an existing VM, you can set up a managed identity by selecting Identity under the Settings blade of your chosen VM.

Post VM Creation

You’ll be given the option to choose System assigned or User assigned. For now, we’ll stick with the System assigned tab, select ‘On’ under Status, and then click ‘Save.’

System Assignment for User

A prompt will appear asking for confirmation. Click ‘Yes.’

Enable system assignments

Initially, your managed identity doesn’t have permission to do anything in your Azure environment. So, the next step is to set one or more Azure role assignments by clicking ‘Azure role assignments.’

Create Azure Role Assignment

You will then click on Add role assignment, which is in preview at the time of this writing, and select your scope (subscription/resource group/resource) and role (Owner, Contributor, etc.). Use whichever role best meets your needs; this can also include custom roles!

Creating Role Assignments in Azure

Then save! Your VM now has a managed identity configured with your desired permissions.

Bicep and Terraform IaC

There’s a good chance that if you’re setting up a managed identity on a VM, you intend to deploy your infrastructure as code (IaC). This can result in the classic chicken-and-egg problem: the VM that will run your pipelines needs to exist before a pipeline can create it. This leaves you with a few options:

  • Use Microsoft’s managed build agents to deploy your initial IaC (a rough sketch follows this list).
  • Configure a managed identity manually following the directions above, and then reuse that code for future VM creations.
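Here’s a rough sketch of the first option: a one-time pipeline that runs on a Microsoft-hosted agent and deploys the initial IaC. It assumes a service connection that can already deploy to the subscription exists (for example, one backed by a service principal); the connection, resource group, and template names below are illustrative.

trigger: none

pool:
  vmImage: 'ubuntu-latest' # Microsoft-hosted agent

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'bootstrap-service-connection' # illustrative name
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Deploy the VM (and its system-assigned identity) defined in your template
      az group create --name 'rg-build-agents' --location eastus
      az deployment group create --resource-group 'rg-build-agents' --template-file 'main.bicep'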

For brevity, we’ll simply point you to the documentation for the major IaC languages, Bicep and Terraform.

Create Service Connection in ADO

Azure DevOps > Project Settings > Service Connections > New Service Connection > Azure Resource Manager > Managed Identity


Create a New Service Connection

Service Connection Type

New Azure Service Connection

Be sure to choose the right Scope Level to match what you chose when setting the role assignment.

Azure Resource Manager using Managed Security

Configuring the VM

We’ll focus on configuring the VM with the ADO Pipelines agent, PowerShell Core, and the Azure CLI so that it can run our sample pipeline below.

ADO Pipelines Agent for Agent Pool Build and Releases

  1. ADO > Project Settings > Agent pools > Add Pool
    1. Pool type = Self-hosted
    2. Name can be any string. I like env_pool (e.g., dev_pool, prd_pool).
    3. Description is optional.
  2. Select your new pool > New agent
  3. You’ll be presented with a window that looks like this. It is easier if you can access this from the VM itself; that way, you don’t need to transfer the file.

      Configure your account
  4. Run .\config.cmd and answer the following prompts:
Enter server URL > https://dev.azure.com/organization
Enter authentication type (press enter for PAT) > Enter
Enter personal access token > ******
Enter agent pool (press enter for default) > PoolName
Enter agent name (press enter for VMNAME) > Enter
Enter work folder (press enter for _work) > Enter
Enter run agent as service? (Y/N) (press enter for N) > Y
Enter enable SERVICE_SID_TYPE_UNRESTRICTED for agent service (Y/N) (press enter for N) > N
Enter User account to use for the service (press enter for NT AUTHORITY\NETWORK SERVICE) > Enter

5. If the installation was successful, you will see something like this:

Service vstsagent.organization.PoolName.VMNAME successfully installed
Service vstsagent.organization.PoolName.VMNAME successfully set recovery option
Service vstsagent.organization.PoolName.VMNAME successfully set to delayed auto start
Service vstsagent.organization.PoolName.VMNAME successfully configured
Enter whether to prevent service starting immediately after configuration is finished? (Y/N) (press enter for N) > N / Enter
Service vstsagent.organization.PoolName.VMNAME started successfully
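With the agent online, any pipeline can target it by pool name. A minimal sketch (the pool name is whatever you chose above):

pool:
  name: 'dev_pool' # the self-hosted pool created above

steps:
- script: echo "Hello from the self-hosted agent"
  displayName: 'Smoke test the new agent'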

ADO Pipelines Agent for Environment Deployments

  1. ADO > Pipelines > Environments > Select environment (or create a ‘New Environment’ with a resource type of virtual machines) > Add resource > Virtual Machines > Next > Copy the script seen in Step 1.
    1. This will create a short-lived Personal Access Token (PAT) with the following scope:
       • Environment: Read & Manage

      Registration Script

  2. RDP/Bastion into the VM and run the provided script in an Administrative PowerShell window. You will receive the following prompts:
     a. Enter Environment Virtual Machine resource tags. (Y/N) (press enter for N) > N / Enter
    b. Enter enable SERVICE_SID_TYPE_UNRESTRICTED for agent service (Y/N) (press enter for N): N / Enter
    c. Enter the User account to use for the service (press enter for NT AUTHORITY\SYSTEM): Enter
    d. Enter whether to prevent service from starting immediately after the configuration is finished. (Y/N) (press enter for N): N / Enter
  3. If the installation was successful, you will see something like this:
    Service vstsagent.organization.environment.virtualmachinename successfully installed
    Service vstsagent.organization.environment.virtualmachinename successfully set recovery option
    Service vstsagent.organization.environment.virtualmachinename successfully set to delayed auto start
    Service vstsagent.organization.environment.virtualmachinename successfully set SID type
    Service vstsagent.organization.environment.virtualmachinename successfully configured
    Service vstsagent.organization.environment.virtualmachinename started successfully

Software

One of the main benefits of hosting your build agent is that you can pick what software and which version you run. Our pipeline example will require PowerShell Core and Azure CLI to run on the build agent. Instructions for installing these tools can be found on PowerShell Core’s GitHub page as well as Azure CLI’s GitHub Page.

Important note: A VM restart might be required for the ADO Pipelines agent.
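As a quick sanity check, you can run a small step on the new agent to confirm both tools are available; nothing in this sketch is specific to the walkthrough.

steps:
- pwsh: |
    # Confirm PowerShell Core and the Azure CLI are on the agent's PATH
    $PSVersionTable.PSVersion
    az --version
  displayName: 'Verify PowerShell Core and Azure CLI'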

Create and Run a Pipeline

In ADO, we’ll now be able to create a new pipeline. Below is a sample pipeline. Be sure to replace the commented sections with what you have created.

  • name: Name of the environment you used or created.
  • resourceName: Name of the virtual machine you set up the ADO Pipelines agent on.
  • resourceType: This will be either ‘virtualMachine’ or ‘kubernetes’; in this case, ‘virtualMachine’.
  • azureSubscription: The name of the service connection you created in ADO. Note: this is not the name of your Azure subscription. The input name is, in my opinion, a bit misleading.

Here is a sample of YAML code:

trigger: none

stages:
- stage: test
  displayName: 'test'
  jobs:
  - deployment: test
    displayName: 'test'
    environment: 
      name: 'dev' # Name of environment. 
      resourceName: 'vmName01' # Name of virtual machine resource. 
      resourceType: 'virtualMachine' # Type of environment resource. 
    strategy:
      runOnce:          
        deploy:
          steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'serviceConnectionName' # Connection name
              scriptType: 'pscore'
              scriptLocation: 'inlineScript'
              inlineScript: 'az account show'
              workingDirectory: '$(Build.SourcesDirectory)'

Once run, you should receive checkmarks along with the output of the ‘az account show’ command that was run on your new build agent using your system-assigned managed identity.

Conclusion

We reviewed quite a bit in this article and walked through several steps to set up and use an Azure System Assigned Managed Identity on a Virtual Machine with Azure DevOps. I hope this helps give you the confidence to embrace managed identities!

Automating the build and deployment of an Angular application in the Azure DevOps (AzDO) ecosystem is as easy as with any other code hosting service. AzDO has added task types to help deploy directly to Azure subscriptions. In this post, we will look at creating a build pipeline with caching, deploying to Azure App Service, and pushing a container to the Azure Container Registry for use in AKS.

Building the Pipeline

AzDO does have a GUI to help set up a JavaScript-based project, and the generated YAML is a great starting place for most applications. While running the generated build pipeline, we noticed that the Node Package Manager (NPM) installation task was taking a significant amount of time, 15 minutes in some cases. At this point in development, the packages and versions had already been decided and were not expected to change often. AzDO allows files or artifacts produced during a build to be cached and reused in subsequent pipeline runs, so the packages do not have to be downloaded again if nothing has changed. The package-lock.json file, with its hash keys, is the perfect file for detecting those changes.

The first part of the pipeline, installation and building, is kept from the generated pipeline: install the latest Node version (16.14.0 is the latest stable version at the writing of this post); install the specific Angular CLI version the application needs globally (13.2.5); install packages from the package.json file along with a library from a tar archive located in our repository; and finally, run ng build to create a dist folder of the Angular application that can be deployed to the cloud.

steps:

- task: NodeTool@0
  inputs:
    versionSpec: '16.x'
  displayName: 'Install Node.js'

- task: Npm@1  
  displayName: 'Angular CLI 13.2.5'  
  inputs:  
    command: custom  
    verbose: false  
    customCommand: 'install -g @angular/cli@13.2.5'

- task: Npm@1  
  displayName: 'npm install'  
  inputs:
    command: custom  
    customCommand: 'install file:custom-angular-lib.tgz --legacy-peer-deps'  
  
- task: Npm@1  
  displayName: Build  
  inputs: 
    command: custom  
    verbose: false  
    customCommand: 'run build'  

Next, we wanted to update our job with caching to improve the run time of the pipeline. We added the npm_config_cache variable and used a “Cache” task to set up the npm install. Pointing that environment variable at a path under $(Pipeline.Workspace) ensures the cache is accessible from container and non-container jobs. The Cache task attempts to restore the cache at the start of the run; when it succeeds, the npm ci step pulls packages from the local cache instead of downloading them again.

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm

... 

- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
        npm | "$(Agent.OS)"
    path: $(npm_config_cache)
  displayName: Cache npm
- script: npm ci

Lastly, we build the Angular project with the ng build command and publish the created dist folder to our AzDO artifacts for use in the next stage: deployment.

- task: Npm@1
  displayName: Build
  inputs:
    command: custom
    customCommand: 'run build'

- task: CopyPublishBuildArtifacts@1
  displayName: 'Copy Publish Artifact: angular-web-app'
  inputs:
    CopyRoot: /dist
    Contents: '**'
    ArtifactName: angular-web-app
    ArtifactType: Container


Deploying the Pipeline

We had two deployment targets in mind: Azure Kubernetes Service (AKS) and Azure App Service. Deploying to App Service means taking the build artifact and pushing it to the App Service in our Azure subscription. Alternatively, to run the application in AKS, the application is bundled into a Docker image.

Before either deployment option, a task to sign in to the Azure subscription is required. The AzureCLI task signs in using the Azure Resource Manager service connection specified in azureSubscription. More information on setting up to deploy to Azure can be found in Microsoft’s documentation.

- task: AzureCLI@2
  displayName: Az Login
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # The service connection handles the actual login; confirm the context here.
      echo "Logged in to AZ"
      az account show

Deploy to App Service

The Azure Web Apps task can be used to deploy the Angular application artifact directly to the App Service; it takes the artifact produced by the build and pushes it to the App Service instance.

- task: AzureWebApp@1
  displayName: 'Deploy Azure Web App : angular-app'
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    appName: angular-app
    appType: webAppLinux
    package: $(Pipeline.Workspace)/angular-web-app
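One note on the package path above: the task expects the artifact to already be on the agent. Deployment jobs download pipeline artifacts automatically, but if this task runs in a regular job in a later stage, you need an explicit download step first. A minimal sketch, assuming the dist output was published as a pipeline artifact named angular-web-app:

# Pull the published artifact onto the agent before the AzureWebApp task runs
- download: current
  artifact: angular-web-app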

Container for ACR and AKS

In order to deploy to an AKS cluster, the Angular application will need to be containerized and pushed to the Azure Container Registry (ACR). A basic Nginx container that hosts the dist folder is a common way to dockerize an Angular app. Using the Dockerfile in the repository, the pipeline builds the image and then pushes it to the ACR in our Azure subscription. Once the container is in the registry, it can be used in any container service like AKS. See the Microsoft documentation on how to push to an Azure Container Registry.

- task: AzureCLI@2
  displayName: Az Container Registry Login
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      echo "Login to AZ"
      az acr login --name <login name>

- task: AzureCLI@2
  displayName: Docker Build angular-app
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      echo "Build image with docker"
      docker build -t angular-app:latest -f Dockerfile .

- task: AzureCLI@2
  displayName: Docker Tag and Push angular-app
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      docker image tag angular-app:latest <name><endpoint>/<repo>/angular-app:latest
      docker image tag angular-app:latest <name><endpoint>/<repo>/angular-app:<build id>
      docker image push <name><endpoint>/<repo>/angular-app:<build id>
      docker image push <name><endpoint>/<repo>/angular-app:latest

Azure Artifacts vs. Caching

Caching was chosen as a solution over creating an artifact to improve build time by reusing files from earlier pipeline runs. The Microsoft documentation recommends using pipeline caching specifically for reducing build time and reusing files from previous runs. On the other hand, a pipeline artifact should be used if a job needs the files produced in a previous job in order to succeed. Since the installation of the node modules could be done even if a pulled cache fails, it made more sense to use caching.
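For contrast, here is a minimal sketch of what the artifact-based approach would look like if a later job truly needed the files produced by an earlier job (the paths and artifact name are illustrative):

# In the job that produces the files
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.SourcesDirectory)/dist'
    artifact: 'angular-dist'

# In a later job or stage that consumes them
- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'angular-dist'
    path: '$(Build.SourcesDirectory)/dist'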

Conclusion

By using AzDO pipelines, we were successful in creating a build pipeline with caching while having the flexibility to deploy to Azure App Service or AKS. The caching of the node modules sped up our installation task from 15 minutes to 3 minutes. The retrieval of the node modules still takes time but is faster overall, and possibly more cost-effective. Using the artifacts would not have benefited us in the same manner. Finally, the built-in Azure CLI tasks allow us to either deploy the Angular application directly to App Services or to build a Docker container, which allows us to push the image up to our Azure Registry for use in AKS or to use the container elsewhere.

So you’ve written Infrastructure As Code, Automated All The Things, and drank deeply of the DevOps Kool-Aid on your current project – what’s next?

You’ve been reaping the rewards of DevOps at some scale, and your one-off DevOps effort has earned a lot of positive attention – now, management would like you to implement DevOps for all the projects!

How do you spread the DevOps wealth, and what do you need to be successful?

Delivering DevOps

What is DevOps?

For this article, we’ll use the shorthand “DevOps” to mean the code bits – such as bash/posh/cmd for scripting, TF/CLI for IaC, and YAML for Azure DevOps CI/CD pipelines.

Start with identifying the valuable bits from your current DevOps efforts and use that as a basis for what you want to disseminate to all other projects.

Cross-Project DevOps

Not all projects will have the exact same DevOps requirements. Still, over time you will build up a collection of useful scripts and templates that are generic enough to provide value across all software projects in your organization.

But you can’t simply copy/paste these files into every repo, as that would be a massive headache to manage and keep updated. Instead, you’ll want to version and package these scripts so that every project that adopts the enterprise DevOps approach can track and plan for DevOps package updates.

Custom Package Repository

The easiest way to distribute file packages is through a custom package repository. Chances are your software project is already using at least one of NuGet, NPM, Maven, or PyPI. Azure DevOps can create all of these kinds of repositories, which is how you can seamlessly distribute your company-proprietary DevOps package without making it publicly available. These custom repositories are also handy as a local cache for public packages.

DevOps Integration

Usually, downloaded packages are not committed to the repo; only the reference to the package is, and the build/release agents download the packages as needed. DevOps packages, however, should be committed to the repo for a variety of reasons. To do that, include an install script with your package, like this example for an NPM package:

#!/usr/bin/env node

const path = require('path');
const fse = require('fs-extra');
const rootFolder = path.resolve('../..'); // backwards from "./node_modules/<package>"
const installFolder = `${rootFolder}/.devops`;
const oldInstallFolder = `${rootFolder}/.devops.backup`;
const nodePkgFolder = path.resolve('.');
const srcFolder = `${nodePkgFolder}/src`;

let oldFolderRenamed = false;

// rename .devops to .devops.backup
if (fse.existsSync(installFolder)) {
    oldFolderRenamed = true;
    if (fse.existsSync(oldInstallFolder)) {
        console.log('removing last [.devops.backup] folder...');
        fse.removeSync(oldInstallFolder);
    }
    console.log('renaming [.devops] as [.devops.backup]...');
    fse.renameSync(installFolder, oldInstallFolder);
}

// copy package src folder to install folder
console.log('installing devops package...');
fse.copySync(srcFolder, installFolder);

// read version from package.json and save to installFolder
const packageJson = fse.readFileSync(`${nodePkgFolder}/package.json`);
const package = JSON.parse(packageJson);
fse.writeFileSync(`${installFolder}/pkg_version.txt`, package.version);

if (oldFolderRenamed) {
    console.warn('Existing devops configuration has been backed up and replaced, please manually merge your configuration back into the new devops package!');
    console.log(`Please read ${nodePkgFolder}\\changelog.md`)
} else {
    console.log(`Devops package has been installed, please read ${nodePkgFolder}\\readme.md`)
}

This script copies the DevOps package from the node_modules (NPM package cache) directory into the project’s root directory. If the DevOps package directory is already present, it renames the old directory before copying in the new one. It is then trivial to diff the old and new directories for changes and merge them.

For the NPM install, use the --no-save option; we are only using NPM as a downloader/installer, so it doesn’t need to save the reference in the package.json.

Package Versioning

DevOps package files should be tracked in a version control system like Git. Develop each feature in a separate branch, merge it through a pull request, and have each merged PR create a new version of the DevOps package. You can then DevOps your DevOps by setting up CI pipelines to automatically publish a new package version whenever the master branch changes.
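The publish pipeline itself can stay very small. Here’s a sketch, assuming the package lives at the repo root and is pushed to an Azure Artifacts feed (the feed name is illustrative):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Npm@1
  displayName: 'Publish DevOps package to the internal feed'
  inputs:
    command: publish
    publishRegistry: useFeed
    publishFeed: 'enterprise-devops' # your Azure Artifacts feed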

Don’t forget to document! Include a readme for first-time consumers and a changelog for updates.

Adopt and Migrate Incrementally

Generally, DevOps development will follow this kind of flow:

  1. Implement a project-specific DevOps functionality
  2. Test, Fix and Refine
  3. Generalize and extract valuable bits of functionality
  4. Test, Fix and Refine
  5. Repeat

When integrating an enterprise DevOps package, a similar process can be followed:

  1. Migrate project-specific DevOps functionality to use the enterprise DevOps package
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Merge the generically useful bits into the enterprise DevOps package
  6. Repeat

Standardize and Prevent Breaking Changes

A critical benefit of the DevOps package approach is that it allows for the standardization of DevOps processes across the enterprise and provides a straightforward way to keep all projects in sync. A bugfix or new process can quickly be rolled out to all consumers of the package.

Standardization could also be accomplished using Azure DevOps Pipelines or Task Groups, but any change to those will affect all consumers immediately and can invisibly break things. But if the DevOps package is stored in the project repo, those projects are insulated from damaging changes and bugs.

Track Changes and Update Easily

Each project will likely have a separate set of custom configuration files that modify the package baseline to customize the build and release pipelines. It is essential to separate these files and keep track of changes to them after a package update. Conversely, the other files in the DevOps package should not be modified, to ensure a smooth update. If one of the package files must be modified, it should be either A) a temporary change that you expect to merge back up into the package, or B) copied locally to the custom configuration directory so that it is evident it must be inspected during the package update process.

Enterprise DevOps Principles

To sum everything up, there are several themes here that will ensure a successful enterprise DevOps approach:

  • Consistency – standardize DevOps processes across the enterprise
  • Modularity – design DevOps processes so that each component is focused and reusable – follow DRY and SRP guidelines
  • Resiliency/Stability – make DevOps processes resistant to unexpected changes
  • Traceability – easily understand changes to DevOps processes and merge quickly

Example

Consider this NPM package:

  • scripts/install.js – module install script
  • src/config – all project-specific customization and scripts
  • src/pipelines – enterprise-wide CI/CD pipelines
  • src/scripts – enterprise-wide bash/posh/cmd scripts
  • src/tf – enterprise-wide terraform templates
  • src/Update-DevopsPackage.ps1 – helper script that triggers the download of the latest package
  • changelog.md – description of differences between versions, upgrade notes
  • package.json – NPM publish metadata, contains version
  • readme.md – introduction to DevOps package, contains getting started instructions, directory, and file descriptions

And here’s what a consuming project may look like:

  • .devops/config – src/config directory from package
  • .devops/pipelines – src/pipelines directory from package
  • .devops/scripts – src/scripts directory from package
  • .devops/tf – src/tf directory from package
  • .devops/version.txt – a text file containing the package version (handy for tracking updates)
  • .devops/Update-DevopsPackage.ps1 – copied from src directory from package
  • src/ – project source code, etc.

AIS recently held an internal hackathon for Microsoft Power Platform to expose our team to the platform’s concepts and approaches through hands-on experience and to demonstrate the role Power Platform plays in modernizing legacy applications in the cloud.

As a premier Microsoft partner, AIS has quickly made Power Platform a core part of our holistic enterprise cloud transformation strategy. Over the last three years, our teams have helped enterprises across Financial Services, Insurance, Health and Life Sciences, and Government leverage Power Platform for enterprise-grade application modernization.

We’re integrating Power Platform with Azure and Microsoft 365 for powerful legacy modernization capabilities, and throughout the project, we uncovered new lessons learned to share across project teams.

The Power Platform Market

There’s been a surge in demand for Low Code Modernization solutions in the last few years. Many organizations are looking to enable Line of Business (LOB) owners and teams to build productivity and collaboration solutions, offsetting the continued strain on IT teams.

Microsoft answered the call with Power Platform. Since its release, we’ve been working with product teams and industry leaders to develop adoption frameworks, accelerators, and solutions to help organizations get the most out of their Power Platform investments. The demand has grown such that we’re continuing to introduce new ways to build our Power Platform skills. Enter our hackathon team.

“Power platform is really a ‘work less do more,’ as it would even get the integrations done with legacy systems without writing a single line of code. This I have never experienced in my 13 years of experience, just amazing. The perspective of low-code-no-code is completely changed with this experience, it is an enterprise development tool that can connect your teams’ apps to outlook app to data verse to SQL. You name the technology; it has a connector to fit in.”
– Technology Lead at AIS, Kranthi Kiran Gullapalli

Meet the Hackathon Team

There was a lot of initial interest in learning the platform from our India-based team out of the AIS Hyderabad HQ. Over a dozen people committed weeks of their time inside and outside of regular working hours in the pursuit of the Power Platform experience.

Some of the participants included employees from our HUB team. The HUB is a group of employees fully dedicated to helping AIS project teams deliver successful cloud modernization projects. This team consolidates knowledge and experience to provide rapid research and guidance for cloud migration and modernization projects. As app modernization work using Power Platform grows, we wanted to allow individuals to skill up on the platform.

But the HUB team members weren’t the only participants. These groups comprised learners and teachers across account teams. Teams were broken down into several pods with a leader to guide that team’s focus area for the project, including Dataverse data migration and integration, authentication, Power Apps Portals development, and more.

“MPP Hackathon” is where the idea of converting a monolithic application to a Power Apps Portal became a reality. Initially, some ideas flowed on reimagining a typical legacy app to Power Apps Portal, but the hackathon allowed us to experiment & try it. Hackathon helped me & my team to have hands-on exposure to the MPP environment. We worked with data modeling and got to know better on Dataverse (In-house storage in MPP). We took our first of many steps towards App Modernization with this hackathon as we are new to MPP. Microsoft Power Platform is a suite of business services that unifies our data. It provides a powerful way to analyze data, act on newly generated insights, deliver personalization, and automate business processes.”
– Jagrati Modi, Software Developer

A Mission to Innovate, Passion for Learning

AIS strives to provide ongoing learning opportunities for our employees. And we’re lucky to have a team that is so passionate about learning, innovation, and the continual pursuit of new skills.

We’ve developed programs like aisU to nurture a growing community of continued education, knowledge sharing, and technical excellence. We offer various opportunities to gain exposure to new technology and concepts, such as training boot camps, technology deep dives, and level-up sessions presented by consultants based on project experience.

Access to the methodologies, approaches, and lessons learned from these project teams helps employees gain the know-how and resources to consult on and deliver similar projects within their accounts successfully.

These opportunities aim to help AIS improve overall project delivery, presales support, and envisioning capabilities for better customer outcomes.

Up Next: Our Approach

Stay tuned for part two of this series as we dive deeper into the approach and execution of an end-to-end application modernization project leveraging Power Platform, API Integration, Dataverse, Azure DevOps, and more.

Until then, check out some of our open job opportunities. We’re always looking to add passionate technologists to our growing team. Introduce yourself and let’s talk about a future career at AIS.

Kubernetes has become the go-to orchestrator for running containers. At AIS, we are working with clients using Azure Kubernetes Service (AKS), applying our expertise in DevOps. One concern is delivering HTTPS certificates to containerized web applications; let’s use Azure Key Vault to achieve this.

Azure Key Vault is a cloud service that provides hardware security modules (HSMs) for securely storing and delivering secrets, encryption keys, and certificates. The Key Vault FlexVolume driver for Kubernetes offers a means for mounting certificates as files in our containers; this allows us to automate and manage HTTPS certificates for applications without requiring them to access Azure Key Vault directly.

Let’s look at how we can apply this to Kubernetes services that front containers running ASP.NET Core 3.1 applications. We’ll start with a sample application from Microsoft’s .NET Docker repo. You can find the full solution and step-by-step instructions in this aks-keyvault-certs GitHub repo.

We’ll use PFX encoded certificates in our Azure Key Vault for this demo, as they are readily loadable in .NET Core 3.1 for use in Kestrel hosting.

There are a few important details to note:

  1. You can retrieve a certificate from Azure Key Vault using the certificate, key or secret object types. To get the full private key certificate, you need to use the “secret” object type.
  2. If you import a PFX encoded certificate into Azure Key Vault, getting its secret will return the full PFX file; however, since the API return value is a string and a PFX is a binary file format, the result must be base64 decoded.
  3. If you import a certificate using the text encoded PEM format, it is returned as-is and base64 decoding is unnecessary; however, .NET Core does not currently support loading private-key PEM certificates.
  4. Since we’ve already authenticated to Key Vault, the resulting PFX file mounted in our container no longer requires the PFX password to load.

Knowing this, if we import a PFX certificate into an Azure Key Vault, we can configure our Kubernetes Deployment YAML to create a volume with a file holding the certificate secret’s contents. For example:

volumes:
  - name: aks-keyvault-aspnetcore-httpscert
    flexVolume:
      driver: "azure/kv"
      secretRef:
        name: kvcreds
      options:
        # update with your key vault name
        keyvaultname: "YOUR KEY VAULT NAME"
        # update with your AAD tenant
        tenantid: "YOUR KEY VAULT TENANT"
        # The name of the object in Key Vault
        keyvaultobjectnames: "aks-https"
        # Use "secret" to get the full cert
        keyvaultobjecttypes: secret
        # This becomes the file name on the mount.
        keyvaultobjectaliases: "https.pfx.base64"

Then, in our pod specification we can mount the volume and set an environment variable for its path:

env:
  # Set an environment var to the cert path for
  # the application to use.
  - name: HTTPS_CERTIFICATE_PATH
    value: "/certs/https.pfx.base64"
volumeMounts:
  # Mount the key vault volume to /certs
  - name: aks-keyvault-aspnetcore-httpscert
    mountPath: /certs
    readOnly: true

Finally, in our ASP.NET Core application, we configure Kestrel to use the mounted certificate file for HTTPS endpoints:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseStartup<Startup>()
                .UseKestrel(options =>
                {
                    // configure Kestrel with our HTTPS certificate
                    options.ConfigureHttpsDefaults(ConfigureHttps);
                });
        });

private static void ConfigureHttps(HttpsConnectionAdapterOptions options)
{
    try
    {
        // When we get the certificate from Key Vault as a secret,
        // it provides the entire PFX file but without the password.
        // Since PFX is a binary format and a secret is a string,
        // it is base64 encoded. So we read in the text file and convert
        // it to the bytes to initialize the X509Certificate2.
        var certPath = Environment.GetEnvironmentVariable("HTTPS_CERTIFICATE_PATH");
        if (!string.IsNullOrEmpty(certPath))
        {
            var certString = System.IO.File.ReadAllText(certPath);
            var certBytes = Convert.FromBase64String(certString);
            var httpsCert = new X509Certificate2(certBytes);

            Console.WriteLine($"HTTPS cert Subject:    {httpsCert.Subject}");
            Console.WriteLine($"HTTPS cert Thumbprint: {httpsCert.Thumbprint}");

            // set the Kestrel HTTPS certificate
            options.ServerCertificate = httpsCert;
        }
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"unable to load https cert: {ex}");
        throw;
    }
}

See the full working solution in the aks-keyvault-certs repo.

With this approach, we can automate managing HTTPS certificates with Azure Key Vault and delivering them to Azure Kubernetes Service pods running ASP.NET Core applications.

Introduction

Unfortunately, Azure DevOps does not have a SaaS offering running in Azure Government. The only options are to spin up Azure DevOps Server in your Azure Government tenant or to connect the Azure DevOps commercial PaaS offering (specifically Azure Pipelines) to Azure Government. Your customer may object to the latter approach; the purpose of this post is to provide you with additional ammunition for making the case that you can securely use commercial Azure DevOps with Azure Government.

Throughout this blog post, the biggest question you should always keep in mind is: where is my code running?

Scenario

Take a simple example: a pipeline that calls a PowerShell script to create a key vault and randomly generate a secret to be used later (such as for a password during the creation of a VM).

add-type -AssemblyName System.Web
$rgName = "AisDwlBlog-rg"
$kvName = "AisDwlBlog-kv"
$pw = '"' + [System.Web.Security.Membership]::GeneratePassword(16, 4) + '"'
az group create --name $rgName --location eastus --output none
az keyvault create --name $kvName --resource-group $rgName --output none
az keyvault set-policy --name $kvName --secret-permissions list get --object-id 56155951-2544-4008-9c9a-e53c8e8a1ab2 --output none
az keyvault secret set --vault-name $kvName --name "VMPassword" --value $pw

The easiest way we can execute this script is to create a pipeline that uses a Microsoft-hosted agent with an Azure CLI task to call our deployment script:

pool:
  name: 'AisDwl'

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'DwlAzure'
    scriptType: 'ps'
    scriptLocation: 'scriptPath'
    scriptPath: '.\Deploy.ps1'

When we execute this, note that output from that PowerShell script flows back into the Pipeline:

Deployment Output

To circle back to the question that should still be in your mind: where is my code running? In this case, it is running in a virtual machine or container provided by Microsoft. If you have a customer that requires all interactions with potentially sensitive data to be executed in a more secure environment (such as IL-4 in Azure Government), you are out of luck, as the VM/container for the hosted build agent is not certified at any DoD Impact Level. Thus, we have to look at other options where our deployment scripts can run in a more secure environment.

I’ll throw a second wrench into things: did you see the bug in my Deploy.ps1 script above? I forgot to add --output none to the last command (setting the password in the key vault). When I run the pipeline, this is shown in the output:

Secret Visible

Not good! In an ideal world, everyone writing these scripts would be properly handling output, but we need to code defensively to handle unintended situations. Also, think about any error messages that might bubble back to the output of the pipeline.

Option 1

Azure Pipelines provides the capability to run pipelines in self-hosted agents, which could be a VM or container managed by you or your organization. If you set up this VM in a USGov or DoD region of Azure, your code is running in either an IL-4 or IL-5 compliant environment. However, we can’t simply spin up a build agent and call it a day. As with the Microsoft-hosted build agent, the default behavior of the pipeline still returns output to Azure DevOps. If there is ever an issue like I just demonstrated, or an inadvertent Write-Output or Write-Error, or an unhandled exception containing sensitive information, that will be displayed in the output of the pipeline. We need to prevent that information from flowing back to Azure Pipelines. Fortunately, there is a relatively simple fix for this: instead of having a task execute your PowerShell scripts directly, create a wrapper/bootstrapper PowerShell script.

The key feature of the bootstrapper is that it executes the actual deployment script as a child process and captures the output from that child process, preventing any output or errors from flowing back into your Pipeline. In this case, I am simply writing output to a file on the build agent, but a more real-world scenario would be to upload that file to a storage account.

try
{
	& "$PSScriptRoot\Deploy.ps1" | Out-File "$PSScriptRoot\log.txt" -append
	Write-Output "Deployment complete"
}
catch
{
	Write-Error "there was an error"
}
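In the pipeline itself, the only change from the earlier example is that the task now points at the wrapper instead of Deploy.ps1 (the wrapper file name below is whatever you saved it as):

pool:
  name: 'AisDwl'

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'DwlAzure'
    scriptType: 'ps'
    scriptLocation: 'scriptPath'
    scriptPath: '.\Bootstrap.ps1' # the wrapper script above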

The biggest disadvantage of this approach is the additional administrative burden of setting up and maintaining one (or more) VMs/containers to use as self-hosted build agents.

Option 2

If you would prefer to avoid managing infrastructure, another option is to run your deployment scripts in an Azure Automation Account. Your Pipeline (back to running in a Microsoft-hosted agent) starts an Azure Automation Runbook to kick off the deployment. The disadvantage of this approach is that all of your deployment scripts must either be staged to the Automation Account as modules or converted into “child” runbooks to be executed by the “bootstrapper” runbook. Also, keep in mind that the bootstrapper runbook must take the same preventative action of capturing output from any child scripts or runbooks to prevent potentially sensitive information from flowing back to the Pipeline.

Sample code of calling a runbook:

$resourceGroupName = "automation"
$automationAccountName = "dwl-aaa"
$runbookName = "Deployment-Bootstrapper"
                    
$job = Start-AzAutomationRunbook -AutomationAccountName $automationAccountName -ResourceGroupName $resourceGroupName -Name $runbookName -MaxWaitSeconds 120 -ErrorAction Stop
                    
$doLoop = $true
While ($doLoop) {
    Start-Sleep -s 5
    $job = Get-AzAutomationJob -ResourceGroupName $resourceGroupName -AutomationAccountName $automationAccountName -Id $job.JobId
    $status = $job.Status
    $doLoop = (($status -ne "Completed") -and ($status -ne "Failed") -and ($status -ne "Suspended") -and ($status -ne "Stopped"))
}
                    
if ($status -eq "Failed")
{
    Write-Error "Job Failed"
}

The deployment script code running as an Azure Automation Runbook (note that this has been converted to Azure PowerShell, as the Azure CLI isn’t supported in an Automation Account runbook):

$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint

add-type -AssemblyName System.Web
$rgName = "AisDwlBlog-rg"
$kvName = "AisDwlBlog-kv"
$pw = [System.Web.Security.Membership]::GeneratePassword(16, 4)

$rg = Get-AzResourceGroup -Name $rgName
if ($rg -eq $null)
{
	$rg = New-AzResourceGroup -Name $rgName -Location EastUs
}

$kv = Get-AzKeyVault -VaultName $kvName -ResourceGroupName $rgName
if ($kv -eq $null)
{
	$kv = New-AzKeyVault -Name $kvName -ResourceGroupName $rgName -location EastUs
}
Set-AzKeyVaultAccessPolicy -VaultName $kvName -PermissionsToSecrets list,get,set -ServicePrincipalName $Conn.ApplicationID

$securePw = ConvertTo-SecureString -String $pw -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName $kvName -Name "VMPassword" -SecretValue $securePw

Leveraging the Power Platform and Microsoft Azure to connect housing agencies and intermediaries with the Department of Housing and Urban Development (HUD)

Intro

The US Department of Housing and Urban Development runs a program known as the Housing Counseling Program that aids housing agencies around the nation through things like grants and training. These agencies provide tremendous benefit to Americans in many forms including credit counseling, foreclosure prevention, predatory lending awareness, homelessness, and more. One of the requirements for approval into the program is that the agency uses a software system that is also approved by HUD and interfaces with HUD’s Agency Reporting Module SOAP web service.

Original System

The original system is built on Dynamics 365 Online using custom entities, web resources, and plug-ins. Complex batching functionality was implemented to deal with the two-minute timeout period that plug-ins have, web resource files (JavaScript, HTML, and CSS) were stuck in a solution with no real structure, and application lifecycle management (ALM) was extremely painful. The original system design also doesn’t account for the intermediary-to-agency relationship, specifically the roll-up reporting and auditing that intermediaries desperately need for providing their compiled agency data to HUD, as well as their own internal reporting. Finally, because the solution was built using Dynamics 365 Online Customer Engagement, licensing costs were more than double what we knew they could be with Microsoft’s new Power Apps licenses and Azure pay-as-you-go subscriptions.

The Original Diagram

Figure 1 – current CMS system built on Dynamics 365 Online

Definition

Intermediaries – organizations that operate a network of housing counseling agencies, providing critical support services to said agencies including training, pass-through funding, and technical assistance.

Modern System

Whereas the data schema for the solution remains mostly unchanged, the architecture of the system has changed profoundly. Let’s check out some of the bigger changes.

Power Apps Component Framework (PCF)

The Power Apps Component Framework enables developers to create code components that can run on model-driven and canvas apps and can be used on forms, dashboards, and views. Unlike traditional web resources, PCF controls are rendered in the same context and at the same time as any other component on a form. A major draw for developers is that PCF controls are created using TypeScript and tools like Visual Studio, and Visual Studio Code. In the modern solution, web resources are replaced with PCF controls that make calls directly to the Power Apps Web API.

Azure Resources (API Management, Function Apps, Logic Apps, and a Custom Connector)

A Custom Connector and Azure API Management are used to manage and secure various APIs that are used in the system. The connector can be used in any Azure Logic App to make connecting to an API much easier than having to configure HTTP actions. All of the business logic that once lived in plug-ins has been replaced with a combination of Azure Logic Apps and Azure Function Apps. This has provided incredible gains where development and testing are concerned, but it also provides a more secure and scalable model to support these agencies as they grow. Lastly, it removes the burden experienced with the two-minute time-out limitation that plug-ins have.

ALM with Azure DevOps Services and Power Apps Build Tools

Azure DevOps Services, coupled with the Power Apps Build Tools, are being used to ease the pain we experienced with ALM prior to these tools being available. Now we can easily move our Power App solutions across environments (e.g. dev, test, production) and ensure our latest solution code and configurations are always captured in source control.

ALM with Azure

Figure 2 – modern solution using Power Apps and Azure resources for extreme scalability and maintainability

Next Steps

I hope by demonstrating this use case you’re inspired to contact your Microsoft partner, or better yet contact us and let us work with you to identify your organization’s workloads and technology modernization opportunities.

Ready to dig a little deeper into the technologies used in this blog post? Below we have some hands-on labs for you. Stay tuned for more updates!

  1. Power Apps Component Framework Controls
    As a developer, PCF Controls has been one of the most exciting things to grace the Power Platform landscape. In this video, I’ll show you how to go from zero to a simple PCF control that calls the Power Platform Web API. I also provide a Pro Tip or two on how I organize my PCF controls in my development environment.

  2. Using an Azure Logic App to Query Data in an On-Premises SQL Server Database
    Companies are embracing modernization efforts across the organization, quickly replacing legacy apps with Power Apps and Azure resources. This can’t all happen at once, though, and oftentimes companies either can’t or won’t move certain data to the cloud. In this hands-on lab, I’ll walk you through using an Azure Logic App and an On-Premises Data Gateway to connect to a local SQL Server 2017 database to query customer data using an HTTP request and response.
  3. Azure API Management and Custom Connectors
    APIs give you the ability to better secure and manage your application interfaces. Couple an API with a Custom Connector, and you not only better secure and manage it, but you also make it super easy for other app developers across the organization to connect to your back-end resources through the API, without having to worry about all of the API configurations.
  4. Power Apps DevOps Using Best Practices
    There’s more to DevOps than just backing up your Power App solutions in source control and automating your build and release pipeline. Good solution management and best practices around publishers, entity metadata, components, and subcomponents are also key to manageability and scalability with your ALM.

Azure DevOps provides a suite of tools that your development team needs to plan, create, and ship products. It comes in two flavors:

  • Azure DevOps Services – the SaaS option hosted by Microsoft.
  • Azure DevOps Server – the IaaS option hosted by you.

When comparing the two to decide which option enables your team to deliver the most value in the least amount of time, Azure DevOps Services is the clear winner, but velocity alone is not the only consideration for most government teams. The services you use must also be compliant with standardized government-wide security, authorization, and monitoring requirements.

Azure DevOps Services is hosted by Microsoft in Azure regions. At the time of this writing, you do not yet have the option to host Azure DevOps in an Azure Government region, so you must use one of the available Azure commercial regions. As a public service, it also has not yet achieved compliance with any FedRAMP or DoD CC SRG audit scopes. This may sound like a non-starter, but as it states on the FedRAMP website, it depends on your context and how you use the product.

Depending on the services being offered, the third-party vendor does not necessarily have to be FedRAMP compliant, but there are security controls you must make sure they adhere to. If there is a connection to the third-party vendor, they should be listed in the System Security Plan in the Interconnection Table.

This is the first of two blog posts that will share solutions to common Azure DevOps Services concerns:

  1. In “Azure DevOps Services for Government: Access Control” (this post), I will cover common access control concerns.
  2. In “Azure DevOps Services for Government: Information Storage”, I will address common concerns with storing information in the commercial data centers that host Azure DevOps Services.

Managing User Access

All Azure DevOps Organizations support cloud authentication through either Microsoft accounts (MSA) or Azure Active Directory (AAD) accounts. If you’re using an MSA backed Azure DevOps organization, users will create their own accounts and will self-manage security settings, like multi-factor authentication. It is common for government projects to require a more centralized oversight of account management and policies.

The solution is to back your Azure DevOps Services Organization with an AAD tenant. An AAD backed Azure DevOps organization meets the following common authentication and user management requirements:

  • Administrators control the lifecycle of user accounts, not the users themselves. Administrators can centrally create, disable, or delete user accounts.
  • With AAD Conditional Access, administrators can create policies that allow or deny access to Azure DevOps based on conditions such as user IP location.
  • AAD can be configured to federate with an internal identity provider, which could be used to enable CAC authentication.

Controlling Production Deployment Pipelines

The value added by Continuous Delivery (CD) pipelines includes increasing team velocity and reducing the risks associated with introducing a change. To fully realize these benefits, you’ll want to design your pipeline to begin at the source code and end in production so that you can eliminate bottlenecks and ensure a predictable deployment outcome across your test, staging, and production environments.
It is common for teams to grant limited production access to privileged administrators, so some customers raise valid concerns with the idea of a production deployment pipeline when they realize that an action triggered by a non-privileged developer could initiate a process that ultimately changes production resources (ex: deployment pipeline triggered from a code change).

The solution is to properly configure your Azure DevOps pipeline Environments. Each environment in Azure DevOps represents a target environment of a deployment pipeline. Environment management is separate from pipeline configuration which creates a separation of duties:

  • Team members who define what a deployment needs to do to deploy an application.
  • Team members who control the flow of changes into environments.

Example

A developer team member has configured a pipeline that deploys a Human Resource application. The pipeline is called “HR Service”. You can see in the YAML code below that the developer intends to run the Deploy.ps1 script against the Production pipeline environment.
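A pipeline along those lines might look something like this sketch (the pool, service connection, and file names are illustrative):

stages:
- stage: deploy
  displayName: 'Deploy HR Service'
  jobs:
  - deployment: deployHrService
    displayName: 'Deploy HR Service'
    environment: 'Production' # the pipeline Environment with the approval check
    pool:
      name: 'PrivateAgentPool'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzurePowerShell@5
            inputs:
              azureSubscription: 'AzureGovServiceConnection'
              ScriptType: 'FilePath'
              ScriptPath: 'Deploy.ps1'
              azurePowerShellVersion: 'LatestVersion'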

Production Environment

If we review the Production environment configuration, we can see that an approval check has been configured. When the deployment reaches the stage that will attempt to run against the production environment, a member of the “Privileged Administrators” AAD group will be notified that deployment is awaiting their approval.

Approvals and Checks

Only the Privileged Administrators group has been given access to administer the pipeline environment, so the team member awaiting approval would not be able to bypass the approval step by disabling it in the environment configuration or in the pipeline’s YAML definition.

Security Production

By layering environment configuration with other strategies, you’ll establish the boundaries needed to protect your environment while also empowering your development team to work autonomously and without unnecessary bottlenecks. Other strategies to layer include:

  • Governance enforced with Azure Policy
  • Deployment gates based on environment monitoring
  • Automated quality and security scans built into your pipeline

It is also important for all team members involved with a release to be aware of what is changing so that you are not just automating a release over the wall of confusion. This is an example of how a product alone, such as Azure DevOps, is not enough to fully adopt DevOps. You must also address the process and people.

Deployment Pipeline Access to Private Infrastructure

How will Azure DevOps, a service running in a commercial Azure region, be able to “see” my private infrastructure hosted in Azure Government? That’s usually one of the first questions I hear when discussing production deployment pipelines. Microsoft has a straightforward answer to this scenario: self-hosted pipeline agents. Azure DevOps will have no line of sight into your infrastructure. The high-level process to follow when deploying an agent into your environment looks like this:

  • Deploy a virtual machine or container into your private network.
  • Apply any baseline configuration to the machine, such as those defined in the DISA’s Security Technical Implementation Guides (STIG).
  • Install the pipeline agent software on the machine and register the agent with an agent pool in Azure DevOps.
  • Authorize pipelines to use the agent pool to run deployments.

With this configuration, a pipeline job queued in Azure DevOps will be retrieved by the pipeline agent over port 443, pulled into the private network, and then executed.

Azure DevOps Configuration

Conclusion

In this post, I’ve introduced you to several practices you can use to develop applications faster without sacrificing security. Stay tuned for the next post in this series where we will discuss common concerns from Government clients around the storage of data within Azure DevOps.

When Microsoft introduced pipelines as part of their Azure DevOps cloud service offering, we received the tools to add continuous integration (CI) and continuous delivery (CD) practices to our development processes. An Azure DevOps pipeline can be created in two ways: 1) The current generally available “classic” pipeline tooling, and 2) the new multi-stage YAML pipeline feature which is currently in preview.

Classic Pipelines

Classic pipelines achieve CI through Azure DevOps build pipelines. A build pipeline executes before a developer integrates code changes into the code base. The pipeline does things like compile the code, run the unit tests, and/or run static code analysis. It then either accepts or rejects the new changes based on the outcome of those tasks.

CD is achieved through Azure DevOps release pipelines. After the build pipeline has produced a build artifact, a release pipeline will publish the artifact to various environments for manual functional testing, user experience testing and quality assurance. When testers have thoroughly tested the deployed artifacts, the release pipeline can then push the artifacts to a production environment.

As powerful as these classic CI/CD pipelines are, they do have their drawbacks. Firstly, the tooling for creating build and release pipelines does not provide a unified experience. CI pipelines provide an intuitive GUI to create and visualize the integration steps…

Classic build pipeline editor

… and also allow you to define those very same steps in YAML:

YAML build pipeline editor
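For illustration, the YAML shown in that editor might look something like the sketch below; a .NET project on a Microsoft-hosted agent is assumed, so substitute your own build and test tasks.

```yaml
# Sketch: a simple CI build expressed as YAML.
trigger:
- main                      # branch name is an assumption

pool:
  vmImage: ubuntu-latest    # Microsoft-hosted agent image

steps:
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
- task: DotNetCoreCLI@2
  displayName: Run unit tests
  inputs:
    command: test
```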

Release pipelines also provide a GUI to create and visualize the pipeline steps. The problem is that the interface is different from that of the build pipeline and does not allow YAML definitions:

Classic release pipeline editor

Multi-stage Pipelines

To resolve these discrepancies, Microsoft introduced multi-stage pipelines. Currently in preview, these pipelines allow an engineer to define a build, a release, or a combined build-and-release pipeline as a single YAML document. Besides the obvious benefits gained through a unified development experience, there are many other good reasons to choose YAML over classic pipelines for both your builds and releases.
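For example, a combined build-and-release pipeline can be expressed in a single file along these lines; the stage, environment, artifact, and path names are illustrative assumptions.

```yaml
# Sketch: build and release combined in one multi-stage YAML pipeline.
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: dotnet publish --configuration Release --output $(Build.ArtifactStagingDirectory)
    - publish: $(Build.ArtifactStagingDirectory)   # shorthand for publishing a pipeline artifact
      artifact: drop

- stage: DeployToQA
  dependsOn: Build
  jobs:
  - deployment: Deploy
    pool:
      vmImage: ubuntu-latest
    environment: QA          # approvals/checks on the QA environment apply here
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying $(Pipeline.Workspace)/drop to QA"
```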

Since you commit YAML definitions directly to source control, you get the same benefits source control has been providing developers for decades. Here are the top 10 reasons (in no particular order) you should choose YAML for your next Azure DevOps pipeline:

1. History

Want to see what your pipeline looked like last month before you moved your connection strings to Azure KeyVault? No problem! Source control allows you to see every change ever made to your pipeline since the beginning of time.

2. Diff

Have you ever discovered an issue with your build but not known exactly when it started failing or why? Having the ability to compare the failing definition with the last known working definition can greatly reduce the recovery time.

3. Blame

Similarly, it can be useful to see who committed the bug that caused the failure and who approved the pull request. You can pull these team members into discussions on how best to fix the issue while ensuring that the original objectives are met.

4. Work Items

Having the ability to see what was changed is one thing but seeing why it was changed is another. By attaching a user story or task to each pipeline commit, you don’t need to remember the thought process that went into a particular change.

5. Rollback

If you discover that the pipeline change you committed last night caused a bad QA environment configuration, simply roll back to the last known working version. You’ll have your QA environment back up in minutes.

6. Everything As Code

Having your application, infrastructure and now build and release pipelines as code in the same source control repository gives you a complete snapshot of your system at any point in the past. By getting an older version of your repo, you can easily spin up an identical environment, execute the exact same pipelines and deploy the same code exactly as it was. This is an extremely powerful capability!

7. Reuse and Sharing

Sharing or duplicating a pipeline (or part thereof) is as simple as copy and paste. It’s just text so you can even email it to a colleague if desired.
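Azure Pipelines also supports YAML templates, which take reuse a step further than copy and paste. A rough sketch, where the template path and parameter name are assumptions:

```yaml
# azure-pipelines.yml – consumes the shared template
steps:
- template: templates/build-steps.yml   # path within the same repository (assumed)
  parameters:
    buildConfiguration: Release
---
# templates/build-steps.yml – the shared, parameterized steps
parameters:
  buildConfiguration: Debug             # default value

steps:
- script: dotnet build --configuration ${{ parameters.buildConfiguration }}
```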

8. Multiple Engineers

Modern CI/CD pipelines can be large and complex, and more than one engineer might modify the same YAML file, causing a conflict. Source control platforms solved this problem long ago and provide easy to use tools for merging conflicting changes. For better or worse, YAML definitions allow multiple engineers to work on the same file at the same time.

9. Peer Reviews

If application code peer reviews are important, so are pipeline peer reviews. The ability to submit a pull request before bringing in new changes allows team members to weigh in and provides an added level of assurance that the changes will perform as desired.

10. Branching

Have a crazy idea you want to try out? Create a new branch for it and trigger a pipeline execution from that branch. If your idea doesn’t pan out, simply delete the branch. No harm done.
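One way to wire that up is a branch filter in the trigger, so experimental branches build automatically; the branch names here are illustrative.

```yaml
# Sketch: trigger the pipeline for experimental branches as well as main.
trigger:
  branches:
    include:
    - main
    - feature/*
```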

Though multi-stage pipelines are still in preview, fully text-based pipeline definitions that can be committed to source control provide benefits that cannot be achieved with classic GUI-based definitions, especially for larger organizations. Be sure to consider YAML for your next Azure DevOps pipeline implementation.

A big announcement from Microsoft this month: The introduction of Azure DevOps, the most complete offering of proven, modern DevOps tools and processes available in the public cloud. Used together, the Azure DevOps services span the entire breadth of the development lifecycle so enterprises can modernize apps in a faster and more streamlined way.

What Is DevOps, Anyway?

DevOps solutions bring together people, processes, and technology, automating and streamlining software delivery to provide continuous value to your users. (What is DevOps?)

If you want your next development or app modernization project to be a success, DevOps is the way to go.

High-performance DevOps enterprises achieve increased revenue with a faster time to market and produce solutions that are more powerful, flexible, and open. (Yes, Microsoft has been partnering with the open-source community to ship products that work for everyone.) New features can be safely deployed to users as soon as they’re ready vs. bundling them together in one large update down the road.

New Services & Tools in Azure DevOps

  • Azure Pipelines – Continuously build, test, and deploy in any language, to any platform or cloud. Azure Pipelines offers unlimited build minutes and 10 free parallel jobs for all open-source projects!
  • Azure Boards – Plan, track and discuss your work and ideas across teams with proven agile tools.
  • Azure Artifacts – Create, host, and share packages with your team, and add them to your CI/CD pipelines with a single click.
  • Azure Repos – Unlimited private Git repos (cloud-hosted) allow team members to build and collaborate better.
  • Azure Test Plans – These manual and exploratory testing tools will allow you to test and ship with ease and confidence.

Azure DevOps is what’s next for Visual Studio Team Services (VSTS); existing VSTS users will be upgraded automatically with no loss of functionality.

With the services provided with Azure DevOps, you can choose the tools and cloud services that you want to use and build end-to-end solutions for an enterprise-level toolchain. As long-time believers in both Azure and DevOps, we’re really excited about this offering and what it can offer our clients.
