Prototyping in Axure vs. Power Apps

As I described in part one of this blog series, prototyping was an integral step of our user-centered design/discovery process. When we first started on the project, the UX Architect and I used Axure, a well-known prototyping tool that we were both familiar with. However, as the project progressed, we moved to prototyping directly in Power Apps.

Several reasons led to this change:

  • Our client’s security policy required that we keep the prototypes in their tenant.
  • Axure is a third-party tool.
  • When developers built the new solution in Power Apps, they could not reuse any Axure elements, such as the interactions we built with Axure’s dynamic panels. While they could inspect the Axure prototype for spacing, color, or padding, they had to re-create all design elements in Power Apps.
  • Some Axure controls or design elements were difficult to replicate in Power Apps due to formatting and accessibility limitations, which created inconsistencies between the prototype that stakeholders saw and the final solution.
  • Using Axure also incurred an additional licensing cost.

The table below lists some pros and cons of Axure and Power Apps.

Axure vs. Power Platform

As shown in the table above, prototyping in Power Apps presented significant advantages over Axure. Not only were we able to keep all content within the same tenant, but our Developers could also reuse some prototype code and design elements when they started development, such as directly copying form controls and functions from the prototype.

In addition, as of the time of writing this blog, the project team has been developing a Power Apps Component Library with consistently designed form elements, which will further streamline code and element reuse in the future. The project team has also been mentoring client employees from several business units through their InfoPath form modernization process. The Component Library will serve as a great tool to help these people, who may not have any UX or user interface design knowledge, follow UX design best practices.

Prototyping in Power Apps did pose some challenges because it always takes time to learn something new. However, I was very motivated and started to prototype in Power Apps as soon as I had the opportunity. I saw how our Developers quickly added, removed, or modified form elements when we worked together on some app requirements. I also had a glimpse of the Power Platform capabilities when I attended a workshop two years ago and knew it was citizen developer-friendly. (See my previous blog: Microsoft Business Applications: A UX Researcher’s Perspective).

My prototyping experience in Power Apps proved to be very positive, and I felt empowered by the Power Platform. Here’s why:

  • I received a lot of help from my Developer colleagues, especially Stephanie Zaloga (LinkedIn). She created a template for me based on a canvas app she developed, which included many frequently used form elements and controls, pre-formatted with the appropriate fill, border color, hover, and font variations. I could easily reuse them in my prototype without having to go through the tedious formatting process. (The Component Library in development will further help.)
  • I focused on improving form instructions, labels, and controls, based on form design and plain language writing best practices, which I was familiar and comfortable with.
  • I was able to prototype cascading data fields and simple interactions, which were usually crucial to meeting the client’s business needs, with just a few essential functions in Power Apps, such as Switch or If (see the sketch after this list). During development, our Developers could copy and paste these correctly formatted controls, such as dropdown fields, text inputs, and HTML text/instructions, directly into the final solution, which sped up their overall app development time.
  • I did not create any back-end data for my prototype, which would have required more Developer skills.
  • I found Microsoft’s online resources and support from the Power Apps community extremely helpful. It was comforting to start a search and find results showing that someone had already solved a particular problem.
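
As a concrete illustration, cascading dropdowns in a canvas app can often be driven by a single Switch formula on the child dropdown’s Items property. The following is only a minimal sketch with made-up control and option names, not an excerpt from the client’s actual forms:

// Items property of the child dropdown (ddSubCategory), driven by the parent (ddCategory)
Switch(
    ddCategory.Selected.Value,
    "Hardware", ["Laptop", "Monitor", "Docking Station"],
    "Software", ["New License", "Upgrade"],
    ["Other"]    // default result when nothing matches
)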

Extensive and Helpful Microsoft Resources

Form Design and Content Development Best Practices

When I prototyped the new solution based on the existing InfoPath forms, I always considered form design best practices and tried to improve form usability whenever possible. When we met with stakeholders to get their approval of the prototype and requirements, we would point out such improvements in the prototype, making sure that they were aware of and comfortable with our recommendations.

Some of the form design best practices that we implemented included:

  • Follow digital content development best practices for instructions and labels
    • Rewrite content to be more usable and accessible, including removing standalone Click Here or Here links
    • Shorten or eliminate form instructions whenever possible
    • Use numbered lists for sequential steps and bulleted lists to support scanning, instead of long paragraphs
    • Spell out acronyms the first time they are used
    • Use words and phrases consistently, eliminating variations such as up-grade vs. upgrade
  • Create new form sections with clear section labels, if needed, and logically group similar questions together
  • Ensure a clear visual distinction between primary and secondary action buttons, such as Save and Go Back, and align primary actions with input fields/user flow
  • Never use color alone to convey information, to meet accessibility guidelines

Dropdown lists were common form components. For app consistency, at the beginning of the project, our team agreed that:

  • Higher numbers or more favorable options should be placed at the top of the list instead of the bottom to support natural conceptual mapping. For example, with dropdown options showing risk, we would display the options as:
    • 5 – High Risk (on the top)
    • 4 – Medium to High Risk
    • 3 – Medium Risk
    • 2 – Low Risk
    • 1 – No Risk (at the bottom)
  • Options should be displayed in a consistent order: alphabetically or by priority/frequency.
  • Radio buttons should be used for two to three options instead of a dropdown.
    • Radio buttons should always have a default value when used.
    • The default value should be displayed first.

Project Updates and More Information

Our project, which lasted from August 2020 to this February, was a success and led to two additional projects starting within the same insurance company. Since February, I have been working on a project team that uses Microsoft Dataverse and model-driven apps to modernize InfoPath forms related to an important product. In this project, I function as a BA, UX Researcher, and Organizational Change Manager. This has presented exciting new learning opportunities, as well as challenges. I may write another blog post describing my experience afterward.

To learn more:

Project Background

From August 2020 to this February, I worked as a Business Analyst (BA) on a project team to help a major insurance company modernize thousands of InfoPath forms into the Microsoft Power Platform. Our client needed to retain the critical business functionality and workflows of these forms before Microsoft ends its extended InfoPath support in July 2026. Our efforts included modernizing the InfoPath forms by using Power Apps canvas apps, Power Automate, and some Power BI.

It was a great experience working on this project. First of all, I got to work with a project team with members specializing in different technical areas who were motivated to mentor and learn from each other. I learned about the Power Platform, Power Platform accessibility, Modern SharePoint lists, and some QA testing basics.

At the same time, as a BA with an extensive background in user experience (UX) research, I contributed to the project’s success by sharing and implementing UX best practices and helping improve the overall usability of our applications. Throughout the project, I had opportunities to:

  • Lead project discovery that followed a user-centered design process and involved multiple activities and steps within our internal AIS team and with our client stakeholders.
  • Prototype the new solution first in Axure and later on directly in the Power Platform – the low-code/no-code, connected, and secure platform, which empowers citizen developers like me to solve business problems.
  • Work with our team to improve overall form design for better usability, including improving form labels, instructions, and controls.

I describe my overall project experience in a two-part blog post:

  • Part 1 (this post) focuses on how we followed a user-centered design process to conduct InfoPath form discovery.
  • Part 2 compares prototyping in Axure and Power Apps and illustrates how we improved form design from InfoPath to Power Apps.

User-Centered Design/Discovery Process

Our project included ten members: 1 Project Manager, 1 Power Platform Solution Architect, 3 Power Platform Developers, 2 Accessibility Specialists, 1 UX Architect, 1 QA Test Engineer, and me as the BA. The Power Platform Solution Architect, UX Architect, and I formed the discovery team, with constant support and participation from our Project Manager.

Our project followed an Agile process with two-week sprints, and our discovery team of three worked on the target InfoPath forms a sprint ahead of the rest of the team. By the time our Developers and Test Engineer were ready to start developing and testing the new app, we had already completed the needed discovery and received the client’s approval of the app requirements. Figure 1 below depicts our simplified discovery process.

Design Discovery for Agile Process

Detailed Discovery Steps

In reality, multiple activities and meetings could occur in each step, and below are more details of how we conducted discovery.

  1. While familiarizing ourselves with the target InfoPath form(s), our discovery team focused on the forms themselves and reviewed the relevant SharePoint site(s), SharePoint list views, and workflows.
    • Understanding this form context was helpful and necessary for our conversations with stakeholders.
    • I created a Stakeholder Interview Guide Template at the beginning of the project and constantly updated the template with tailored questions.
    • We might hold an internal meeting about our initial findings before we met with client stakeholders.
  2. Regarding our initial meeting with key stakeholders/form owners:
    • To ensure their attendance, I always tried to schedule meetings at least one week in advance.
    • During the meeting, I followed the Interview Guide, focusing on their existing use of the InfoPath forms and workflows, what worked well, what pain points they experienced, and what was on their wish lists.
    • I would schedule follow-up meetings if needed, especially if multiple InfoPath forms would be modernized together.
  3. Armed with a good understanding of the target forms, we would meet internally to debrief and propose a new solution.
    • Sometimes a modern SharePoint list would suffice, but most frequently, a canvas app would be required to replace existing forms, especially when multiple or similar InfoPath forms needed to be modernized.
    • Our Solution Architect would also check data integration or connection with other systems outside the form(s), ensuring that such integration would continue to work in the new solution.
    • Power Automate would be used for associated workflows, such as email notifications, permission/role-based access control, and user profile fields.
  4. After our discovery team agreed that a canvas app should be used, I would create an Excel spreadsheet to document app requirements, while our UX Architect would start to create a prototype.
    • The prototype would illustrate our design ideas, provide a visual representation, and include some interactions of the new solution.
    • I always created at least three worksheets in the Excel spreadsheet:
      • Background: Summary of what we learned in client meetings, such as the purpose of the existing form(s), description of form users, and form owners’ contact info.
      • URLs: Links to the existing form(s) and SharePoint site(s).
      • Acceptance Criteria: Detailed requirements of the canvas app.
    • During this step, the UX Architect and I would meet multiple times to ensure that the documented requirements and prototype would match.
    • When needed, we would consult our Solution Architect to ensure that the requirements and prototype took full advantage of the Power Platform capabilities.
    • I acted as both a BA and UX Architect for several forms, documenting requirements and creating prototypes, which simplified this step.
  5. When we met with stakeholders again, we showed them both the prototype and requirements and emphasized that no actual coding had occurred. Doing so would encourage them to provide any additional suggestions if needed so that the new solution would meet their business needs.
    • The Excel file and prototype complemented each other well, especially in areas that the prototype did not cover. For example, instead of making workflows work in the prototype, which would have required a lot of coding, I would specify in a requirement when a notification would be sent, and to whom, after a form was submitted.
    • Sometimes we needed to go back and adjust our prototype and requirements before we met with the stakeholders again to get their final approval of our proposed solution.
  6. As the last discovery step, we would meet with our Developers and QA Test Engineer, walking through the prototype and detailed requirements accessible from a SharePoint document library.
    • The meeting acted as a knowledge transfer session from the Discovery team to all other members.
    • Based on the information, Developers would create tasks in Azure DevOps (ADO) to restructure data, estimate needed screens or forms, report data in Excel or Power BI, and create flows.
    • The Test Engineer would create the necessary test plan, cases, and steps in ADO.

Ideally, our Discovery team could answer all questions from our Developers and Test Engineer at this step. However, from time to time, they would ask us a few questions or request details that we could not answer, which required us to go back to client stakeholders for further clarification. When that happened, I would update the Stakeholder Interview Template to cover such questions in future discovery.

Examples of such questions included:

  • Will form submitters be allowed to edit their submissions?
  • Should they be allowed to save a draft before submission, or should they fill out the entire form in one sitting?
  • Should there be an app Administrator screen so that stakeholders could make future updates easily within the app, or will they be comfortable enough to make updates directly from the back-end, such as a SharePoint list?

In Summary

As described above, having a designated Discovery team in a project with members specializing in UX and technology solutions worked well and significantly contributed to our project success. In general:

  • Following a user-centered design/discovery process helped the project team work efficiently, avoided surprises in later development and testing, and saved overall project time.
  • It was important for project discovery to start at least a couple of weeks ahead of the rest of the team.
  • Involving client stakeholders early and throughout discovery was crucial in ensuring that the new solution would meet the client’s business needs.
  • Discovery artifacts, such as the prototype and requirements, should be available at a central location so that all team members could easily refer to them and clearly understand how the new solution should function.

In Part 2 of this blog series, I will compare my experience prototyping in Axure and the Power Platform. I will also share a few form design and content development best practices that we implemented when we modernized InfoPath forms to Power Apps.

So you’ve written Infrastructure As Code, Automated All The Things, and drunk deeply of the DevOps Kool-Aid on your current project – what’s next?

You’ve been reaping the rewards of DevOps on a small scale, and your one-off DevOps effort has earned a lot of positive attention – now management would like you to implement DevOps for all the projects! So how do you spread the DevOps wealth, and what do you need to be aware of?

Delivering DevOps

What is DevOps?

For this article, we’ll use the shorthand “DevOps” to mean the code bits – such as bash/posh/cmd for scripting, TF/CLI for IaC, and YAML for Azure DevOps CI/CD pipelines. Start by identifying the valuable bits from your current DevOps efforts and use that as a basis for what you want to disseminate to all other projects.

Cross-Project DevOps

Not all projects will have the exact same DevOps requirements. Still, over time you will build up a collection of useful scripts and templates that are generic enough to provide value across all software projects in your organization.

But you can’t simply copy/paste these files into every repo, as that would be a massive headache to manage and keep updated. Instead, you’ll want to version and package these scripts so that every project that adopts the enterprise DevOps approach can track and plan for DevOps package updates.

Custom Package Repository

The easiest way to distribute file packages is through a custom package repository. Chances are your software project is already using at least Nuget, NPM, Maven, or PyPI. Azure DevOps can create all of these kinds of repositories, which is how you can seamlessly distribute your company-proprietary DevOps package without making it publicly available. These custom repositories are also handy as a local cache for public packages.
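
For example, a consuming project can point NPM at a private Azure Artifacts feed with a .npmrc file at the repo root. The organization and feed names below are placeholders, not real values; with upstream sources enabled, the same feed can also act as the cache for public packages mentioned above:

registry=https://pkgs.dev.azure.com/your-org/_packaging/your-devops-feed/npm/registry/
always-auth=true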

DevOps Integration

Usually, downloaded packages are not committed to the repo, only the reference to the package – the build/release agents then download them as needed. But DevOps packages should be committed to the repo for a variety of reasons. To do that, include an install script with your package, like this example for an NPM package:

#!/usr/bin/env node

const path = require('path');
const fse = require('fs-extra');
const rootFolder = path.resolve('../..'); // backwards from "./node_modules/<package>"
const installFolder = `${rootFolder}/.devops`;
const oldInstallFolder = `${rootFolder}/.devops.backup`;
const nodePkgFolder = path.resolve('.');
const srcFolder = `${nodePkgFolder}/src`;

let oldFolderRenamed = false;

// rename .devops to .devops.backup
if (fse.existsSync(installFolder)) {
    oldFolderRenamed = true;
    if (fse.existsSync(oldInstallFolder)) {
        console.log('removing last [.devops.backup] folder...');
        fse.removeSync(oldInstallFolder);
    }
    console.log('renaming [.devops] as [.devops.backup]...');
    fse.renameSync(installFolder, oldInstallFolder);
}

// copy package src folder to install folder
console.log('installing devops package...');
fse.copySync(srcFolder, installFolder);

// read version from package.json and save to installFolder
const packageJson = fse.readFileSync(`${nodePkgFolder}/package.json`);
const pkg = JSON.parse(packageJson);
fse.writeFileSync(`${installFolder}/pkg_version.txt`, pkg.version);

if (oldFolderRenamed) {
    console.warn('Existing devops configuration has been backed up and replaced, please manually merge your configuration back into the new devops package!');
    console.log(`Please read ${nodePkgFolder}\\changelog.md`)
} else {
    console.log(`Devops package has been installed, please read ${nodePkgFolder}\\readme.md`)
}

This script copies the DevOps package from the node_modules (NPM package cache) directory into the project’s root directory. If the DevOps package directory is already present, the script renames the old directory and copies in the new one. It is then trivial to diff the old and new directories for changes and merge them.

For the NPM install, use the --no-save option; since we are using NPM only as a downloader/installer, it doesn’t need to save the reference in package.json.
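
To wire up the install script, the DevOps package itself can declare it as a postinstall hook, since NPM runs a dependency’s own postinstall script when that dependency is installed. Below is a minimal sketch of the package’s package.json – the name and version numbers are illustrative, and the scripts/install.js path matches the package layout shown in the Example section later on:

{
  "name": "devops-package",
  "version": "1.4.0",
  "scripts": {
    "postinstall": "node scripts/install.js"
  },
  "dependencies": {
    "fs-extra": "^10.0.0"
  }
}

A consuming project then pulls and unpacks the package with a single command, resolving the private feed from its .npmrc:

npm install devops-package --no-save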

Package Versioning

DevOps package files should be tracked in a version control system like Git. Each feature should be developed in a separate branch and merged through a PR, and each PR should create a new version of the DevOps package. You can then DevOps your DevOps by setting up CI pipelines to automatically publish a new package version whenever the master branch changes.

Don’t forget to document! Include a readme for first-time consumers and a changelog for updates.

Adopt and Migrate Incrementally

Generally, DevOps development will follow this kind of flow:

  1. Implement a project-specific DevOps functionality
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Repeat

When integrating an enterprise DevOps package, a similar process can be followed:

  1. Migrate project-specific DevOps functionality to use the enterprise DevOps package
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Merge the generically useful bits into the enterprise DevOps package
  6. Repeat

Standardize and Prevent Breaking Changes

A critical benefit of the DevOps package approach is that it allows for the standardization of DevOps processes across the enterprise and provides a straightforward way to keep all projects in sync. A bug fix or new process can be quickly rolled out to all consumers of the package.

Standardization could also be accomplished using Azure DevOps Pipelines or Task Groups, but any change to those will immediately affect all consumers and can invisibly break things. But if the DevOps package is stored in the project repo, those projects are insulated from breaking changes and bugs.

Track Changes and Update Easily

Each project will likely have a separate set of custom configuration files that must be modified from the package baseline to customize the build and release pipelines. It is essential to separate these files and keep track of changes to them after a package update. Conversely, all the other files in the DevOps package should not be modified, to ensure a smooth update. If one of the package files must be modified, the change should be either A) temporary, with the expectation of merging it back up into the package, or B) copied locally to the custom configuration directory so that it is evident that it must be inspected during the package update process.

Enterprise DevOps Principles

To sum everything up, there are several themes here that will ensure a successful enterprise DevOps approach:

  • Consistency – standardize DevOps processes across the enterprise
  • Modularity – design DevOps processes so that each component is focused and reusable – follow DRY and SRP guidelines
  • Resiliency/Stability – make DevOps processes resistant to unexpected changes
  • Traceability – easily understand changes to DevOps processes and merge quickly

Example

Consider this NPM package:

  • scripts/install.js – module install script
  • src/config – all project-specific customization and scripts
  • src/pipelines – enterprise-wide CI/CD pipelines
  • src/scripts – enterprise-wide bash/posh/cmd scripts
  • src/tf – enterprise-wide terraform templates
  • src/Update-DevopsPackage.ps1 – helper script that triggers the download of the latest package
  • changelog.md – description of differences between versions, upgrade notes
  • package.json – NPM publish metadata, contains the version
  • readme.md – introduction to DevOps package, contains getting started instructions, directory, and file descriptions

And here’s what a consuming project may look like:

  • .devops/config – src/config directory from package
  • .devops/pipelines – src/pipelines directory from package
  • .devops/scripts – src/scripts directory from package
  • .devops/tf – src/tf directory from package
  • .devops/version.txt – text file containing the package version (handy for tracking updates)
  • .devops/Update-DevopsPackage.ps1 – copied from src directory from package
  • src/ – project source code, etc.

Microsoft affirms its commitment to the Java ecosystem by offering an open-source distribution of Java with the Microsoft Build of OpenJDK™. It’s a no-cost, Long-Term Support (LTS) distribution of OpenJDK, currently available in preview.

The preview build includes the binaries for Java 11, available for x64 server and desktop environments on Windows, macOS, and Linux. An early-access release for Java 16 is available for Windows on ARM. Microsoft is currently collecting feedback from preview users on the packaging and installer of the build across platforms and aims to release the GA build by the end of 2021.

Why Is Microsoft Offering a JDK Distribution?

Microsoft has been deeply involved in the Java ecosystem by offering Java tools for Visual Studio Code, supporting Java on Azure, and running several internal platforms and some Azure infrastructure services on the Java Virtual Machine (JVM). Microsoft runs more than 500,000 JVMs internally, excluding all Azure services and customer workloads. Azure has experienced significant growth in Java workloads through Azure Spring Cloud, Azure App Service, Azure Functions, and Azure Kubernetes Service. LinkedIn, Minecraft, and Yammer are examples of major internal platforms that run on Java.

Below are a few critical triggers, beyond the rise of internal JVM workloads, that led Microsoft to jump into OpenJDK distribution:

  • Cost reduction: Java support costs on Azure cloud can be reduced by avoiding commercial licenses for Java. It helps to reduce the overall Azure subscription costs by removing the cost passed to other vendors for JDK licenses.
  • Security & performance improvement: The build may contain backported fixes and enhancements for security and performance, which may not have formally backported upstream.
  • Support Java devs: Part of Microsoft’s dedication to better support Java developers on Azure.
  • Become a leader in providing toolkits for the open-source community: Microsoft has been a sponsor of and contributor to the Java open-source community through the AdoptOpenJDK project since 2018 and has contributed more than 50 patches to OpenJDK in the last 18 months, including resolutions for issues in macOS packaging, build, infrastructure, and garbage collection.

Benefits to Java Developers on Azure

  • Multi-OS Support: Windows, macOS, and Linux
  • Multi-Environment Support: cloud, local data centers, and user development environments
  • Free to Use: the Microsoft Build of OpenJDK is free to use.
  • Backed by Microsoft: This distribution is backed by Microsoft and may include backported fixes and enhancements recommended by Microsoft that are not yet available in upstream OpenJDK implementations.
  • Ease of Migration: Smooth and transparent transition for existing workloads.

Limitations

  • Docker image is not available at this point
  • Medium-term support releases of OpenJDK are not supported
  • No API available to access the binary distribution
  • No ARM-based macOS binary at this point

How to Install on Windows

The packages and installer for Microsoft’s OpenJDK preview build are available at https://www.microsoft.com/openjdk#11. Azure customers can try out the preview using Azure Cloud Shell too.

Download the package for the platform of your choice and confirm the Java version (for example, by running java -version). On Windows, the installer takes care of the default install location and sets the PATH and JAVA_HOME environment variables.

Picture 1: Microsoft Build of OpenJDK on Windows 10

A build of OpenJDK 11 is already available in Azure Cloud Shell, and developers can use it directly from the shell.

Picture 2: Microsoft Build of OpenJDK on Azure Cloud Shell

We expect more enterprise organizations to modernize Java on Azure with the increased support and options from Microsoft. This is welcome news to organizations that have a large investment in Java and have experienced a large increase in the cost of legacy Java workloads. Need a partner to accelerate your modernization journey? Reach out to AIS today.
