First, What is Git?

Git is a DevOps tool used for source code management. It is a free and open-source distributed version control system used to manage small and large projects efficiently. Git tracks changes to source code, allowing multiple developers to collaborate on non-linear development.

Before Git

Previously, developers sent code to a central server without keeping a copy for themselves. Changes to the source code were not visible to other developers, and there was little communication between them.

After Git

  • Every developer has a local copy of the code
  • Source code changes can be tracked by others
  • Developers regularly communicate with each other

How Does Git Work?

Git allows users to track code changes and manage their projects with simple commands. At the heart of Git is a repository that houses a project. A repository can be kept locally or on a website like GitHub. Git users can keep numerous distinct repositories and track each one separately.

The project has several save points during the development stage. Those save points are called “commits.” Every modification made to the project during development, that is, every commit, is listed in the commit history. Commits allow rolling the code back or forward to any commit in the history.

To refer to commits, Git uses SHA-1 hashes. Every hash points to a specific commit in the repository. With the help of hashes, Git creates a tree structure for easy data storage and retrieval.

Each Git project’s files go through several stages:

  • Working Directory: This stage contains modified files that Git has noticed but has not yet staged for commit.
  • Staging Area: Modified files added to the staging area are prepared for commit.
  • Committed: File snapshots from the staging area are saved in the commit history.

Basic Git Workflow Diagram

GIT Workflow Diagram

Staging

To track changes made to a file, it must first be added to the staging area. Whenever a file is modified, Git notices it, but Git does not track the change until the file is staged. The staging area adds an extra checkpoint that allows us to review changes before they are committed.
Staging is the preliminary step for all files before they are committed, i.e., recorded on the current branch (typically master).

We can check the status of the files that Git tracks with the use of the following command: git status

To add files to the staging area, use the below syntax:
git add [filename]
Where [filename] is the name of the actual file.

For example:

Add files to staging area

If we have multiple files to add, we can stage them all at once with: git add .
We can remove a file from the staging area with the command: git rm --cached [filename]

Example:

Remove File from Staging Area
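Putting the staging commands together, a minimal terminal sketch looks like this (index.html is just a placeholder file name):

git status                    # see which files are modified or untracked
git add index.html            # stage a single file
git add .                     # or stage every change in the current directory
git rm --cached index.html    # remove the file from the staging area, keeping it on disk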

Commit

These are like snapshots of our work: a snapshot of the repository at a specific point in time. A commit's snapshot does not change unless it is explicitly rewritten.

We can check if any files are ready to commit by using the following command: git status

Example:

Four files are ready to commit

We see in the above screenshot that four files are ready to commit.

To commit them, we use the syntax:
git commit -m "Notes about the commit"

We use the -m flag followed by a description of the commit to identify why the commit was made.

Example:
Commit history

The output shows the commit and states what has changed.
Commit history can be checked using the command: git log

The result gives us the history, including who made each commit, when it was made, and the commit message.
Adding the --oneline flag displays the history with one line per commit.

Display commit history
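As a minimal sketch of the commit flow described above (the file name and message are placeholders):

git add index.html
git commit -m "Fix broken navigation link"

git log             # full history with author, date, and message
git log --oneline   # condensed one-line-per-commit view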

Reverting

Git revert enables us to roll back a commit if we made a mistake during project development or for any other reason. git revert creates a new commit that undoes the changes introduced by the specified commit, removing them from the master branch: git revert [commit_ID]

We can get the information of commit_ID using the command: git log
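For example, assuming the commit ID below was copied from the git log output (it is only a placeholder):

git log --oneline        # find the ID of the commit to undo
git revert 1a2b3c4       # create a new commit that reverses its changes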

Git Reset Command

The git reset command moves the branch back to a particular point in its history. Commits made after that point are removed from the branch, and their changes are left as unstaged files.

git reset [commit_ID]

If we add the --hard flag to the above command, it also removes the unstaged changes, and after that they cannot be brought back.
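A short sketch of both variants (the commit ID is a placeholder):

git reset 1a2b3c4          # move the branch back; later changes remain as unstaged files
git reset --hard 1a2b3c4   # move the branch back AND discard the unstaged changes -- use with care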

Branching:

Git branching enables programmers to deviate from the code in use to correct bugs or add features. To work with a copy of the code without altering the original, developers create branches. To isolate the code changes, we establish branches. We test these branches before integrating them into the main branch.
The master branch is the stable version of the code; it is created when we initialize a Git repository with the git init command.

Branching lets developers create an isolated environment where they can add features or edit code without touching the master branch. Once everything goes well, we can merge the branch into the master branch. If something goes wrong, we can delete the branch, and the master branch is untouched.
With branching, every developer on the project can work on it in parallel.

Developer Branching

To create a branch use the command: git branch [branch-name]

Create a git branch

Here we have created a new branch with the name feature-1.
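A minimal sketch using the same branch name (feature-1):

git branch feature-1         # create the branch
git checkout feature-1       # switch to it

git checkout -b feature-1    # or create and switch in one step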

Merging and Conflicts

Once the developers finish their work on a new feature or a bug fix on a separate branch, they can use the git merge command to
merge their changes into the main branch and share that work with the other developers on the project. Merging changes means incorporating the code into the master branch.

Existing branches can be seen using the command: git branch -a

Git Branch

The screenshot above shows that we have one branch, feature-1. To merge with the master branch, we follow the steps below:

  1. Switch to the master branch: git checkout master
  2. Merge the feature branch into it: git merge [branch-name]

Master Branch Conflict

A conflict occurs when someone makes changes to the master branch while we are working on another branch. The conflict surfaces when we attempt to merge our changes into the master branch, which has now diverged from our copy of the code.
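When that happens, Git pauses the merge and marks the conflicting files. A rough outline of resolving it (app.js is a placeholder file name):

git merge feature-1     # the merge stops and reports conflicts
git status              # lists the conflicted files
# edit the files and remove the <<<<<<< ======= >>>>>>> conflict markers
git add app.js          # mark the conflict as resolved
git commit              # complete the merge

git merge --abort       # or abandon the merge and return to the pre-merge state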

Fetching and Pulling Changes:

To retrieve changes from the remote repositories, use git fetch and git pull commands.

Fetch & Pulling Changes
git fetch only downloads the changes that are in the remote repository. It makes no changes to your local working copy.
git pull fetches the changes from the remote repository and merges them into the local repository.

In general git pull = git fetch + git merge
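A small sketch of that equivalence (origin and master are the conventional remote and branch names):

git fetch origin            # download new commits without touching local branches
git merge origin/master     # then merge them in explicitly

git pull origin master      # the one-step equivalent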

Pushing Changes:

In contrast to git pull, git push lets you share your modifications by publishing them to the remote repository.
Most frequently, local modifications are published and uploaded to a central repository using git push. A push is used to share changes made in a local repository with team members who are working remotely.

Before Push Changes

Before and after Push Changes

When you make changes locally and want to push them to a remote repository, run: git push
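For example, to publish the feature-1 branch created earlier (origin is the conventional name of the remote):

git push -u origin feature-1   # first push; -u sets the upstream so later pushes can be plain "git push"
git push                       # subsequent pushes of new commits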

Rebasing:

Git makes a copy of the existing code when we create a branch so we can continue to work on it. To keep up with general development, developers might occasionally need to incorporate fresh modifications from the master branch.

Rebasing incorporates fresh changes from the master branch into your feature branch. Git replays the commits of your feature branch on top of the latest tip of the master branch, creating new commits in the process.

To rebase the feature branch, we use the following steps:

  • Use the git checkout command to move to the feature branch: git checkout [branch-name]
  • Rebase the branch onto master: git rebase master
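Putting the steps together (feature-1 is a placeholder branch name; the conflict-handling options are standard Git commands):

git checkout feature-1
git rebase master

# if a commit cannot be replayed cleanly, fix the conflict, then:
git add <fixed-file>
git rebase --continue
# or give up and restore the branch to its original state:
git rebase --abort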

Both rebasing and merging are intended to incorporate changes from one branch into another, but they do so in different ways. In the pictorial representation below, if we have commits as shown, a merge results in a combined merge commit, whereas a rebase adds all the changes in the feature branch starting from the last commit of the master branch:

Commit Process

Merge vs. Rebase

Rebasing is not advised if the feature branch you are rebasing is shared with other developers, because rewriting its history will result in inconsistent repositories.
Merge is the best option if you want to observe history exactly as it occurred. Rebase rewrites history, while merge preserves it.

Conclusion:

There are definite benefits to using Git. It saves time, works offline, undoes mistakes, and tracks changes. With this, we get an idea of what Git is and how it works, the basic Git commands, and what branching, merging, rebasing, push, and pull are. I hope this helps you move forward in learning Git.

What is Ansible?

It is one of the most trusted DevOps tools for many users and can be used on platforms like Unix, Linux, and Windows Azure. It's an open-source tool used to automate cloud provisioning, configuration management, and application deployments. In addition, it helps provision virtual machines, networks, containers, and complete cloud infrastructures.

How is Ansible Used?

Ansible is used in the form of a playbook file written in YAML format (much like AKS manifests) to define a configuration or process model. A playbook combines all the various tasks into a single file. Once we have the playbook ready, we can create a DevOps pipeline and maintain static configuration for application deployment and other configurations. Ansible is safe to use on any cloud service because it does not need to install any agent to perform DevOps activities. It performs the deployment and exits the cloud without leaving any traces, so there is no added vulnerability, and it won't consume CPU like other deployment tools. With the help of Ansible, we can create pipelines in Azure DevOps for various activities like building VMs, application deployments, and configuration settings.

Deploy a New Application from an Azure DevOps Pipeline

Once our application is ready locally on your desktop, we can repeatedly deploy and re-deploy it without any changes in the cloud using the CI/CD pipeline. The advantage of using Ansible is that your application will run the same way it ran on your local desktop, so there won't be any environment-related issues. Re-deploying it to another infrastructure is as straightforward as defining your Azure environment and applying your application's playbook. It ensures your Azure deployments work as seamlessly in a hybrid cloud configuration as when you build a single system.

Ansible Capabilities in Azure

Ansible has modules for many different Azure capabilities, including:

  • Creating Virtual Machines
  • Creating Virtual Networks
  • Creating Storage and Storage Accounts
  • Creating Resource Groups
  • Creating Security Groups
  • Creating Resource Manager Templated Deployments

To deploy using Ansible, we need an Ansible VM in Azure and a service principal that Ansible uses to authenticate against the subscription. The service principal can be created with:

az ad sp create-for-rbac --name <ansible-vm-name>

Once the VM is created, connect to the Ansible VM with the login credentials under the SSH tab. Ansible is an agentless automation tool; it only needs SSH authentication:

  1. Navigate to your project. Select Repos.
  2. Select the webapp.yml file under the ansible-scripts folder.

Below are a few examples of YAML files.

    1. Create a resource group and an Azure VM (azure_windows_vm.yml):

       - name: Create Azure VM
         hosts: localhost
         connection: local
         tasks:
           - name: Create resource group
             azure_rm_resourcegroup:
               name: myResourceGroup
               location: eastus

    2. Create the virtual network and subnet (azure_windows_vm.yml):

           - name: Create virtual network
             azure_rm_virtualnetwork:
               resource_group: myResourceGroup
               name: vNet
               address_prefixes: "10.0.0.0/16"

           - name: Add subnet
             azure_rm_subnet:
               resource_group: myResourceGroup
               name: subnet
               address_prefix: "10.0.1.0/24"
               virtual_network: vNet
             register: output_ip_address
    3. We will save all these YAML files as playbooks and save them in Azure DevOps as Artifacts.
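As a rough sketch of how such a playbook is run from a shell, assuming the service principal created earlier and the environment variable names conventionally read by Ansible's Azure modules (treat the exact values as placeholders):

export AZURE_SUBSCRIPTION_ID=<subscription-id>
export AZURE_CLIENT_ID=<app-id>
export AZURE_SECRET=<client-secret>
export AZURE_TENANT=<tenant-id>

ansible-playbook azure_windows_vm.yml   # runs the play against localhost using the Azure credentials above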

Build Your Application Using Azure CI Pipeline

  • Navigate to Pipeline > Builds. Select Ansible-CI and click Edit.
  • Select the Azure Dev stage and click View stage tasks to view the pipeline tasks.
  • Select the Replace Tokens task.

Replace the values as per the environment like SQL password, Environment name, etc. Once complete, you can save the changes and create a release, and run the application deployment by creating a new Pipeline. I have created screenshots below for reference.

Creating a release


Stage one and creating a new release

In the Artifacts Tab, select the one that we created by using playbook and click on create.

Stage one in progress

Pipeline succeeded

Introduce Ansible to Your Organization

Ansible can provide you with a simple and powerful package for configuration management and automation, and takes care of the repetitive work to free up admins so they can focus on other efforts. It helps DevOps improve technological scalability, reliability, and consistency. AIS has the application development experience, IT operations experience, and cultural change management skills that DevOps adoption requires. We have deep DevOps and Agile development expertise at the enterprise level and have helped many clients’ development organizations successfully adopt these new practices.

Thank you to Prasantha Raju for his contributions to this blog.

We have multiple DevOps engagement options. If you’re eager to modernize your software development practices, contact us today.

In May, AIS held an internal hackathon for Microsoft Power Platform to expose our team to the platform, its concepts, and its approaches through hands-on experience, and to demonstrate the role Power Platform plays in modernizing legacy applications in the cloud.

The DevOps team automated the portal deployment process using Power DevOps Tools and deployed the solution across three environments (dev, test, prod). Since Microsoft Power Platform does not support source control and versioning, the team used Azure DevOps as the solution repository and version control.

Introduction

When building apps on Microsoft Power Platform, use Microsoft Power Platform Build Tools to automate common build and deployment tasks. These tasks include:

  • Synchronization of solution metadata (also known as solutions) that contains the various platform components like model-driven apps, connectors, and virtual agents.
  • Generating build artifacts
  • Deploying to downstream environments
  • Provisioning or de-provisioning environments
  • Checking solutions to identify problematic patterns

Our Purpose

We can easily create compelling apps and automation flows using Microsoft Power Platform. But, regardless of platform technology, how do you package and deploy the solution between various environments? Usually there are at least three: one for development, one for testing, and one for production. Power Platform is no exception: you should have separate development (dev), test, and production (prod) environments for your solution. So, let's look at how to automate the deployment of Power Platform solutions from one environment to another.

Architecture Flowchart Diagram

DevOps Architecture Flowchart

Technical Approach

Problem:

The following picture shows the manual and repetitive steps required to deploy the Power Platform solution from one environment to another. Manual tasks are prone to errors, and actions are missed or incorrectly executed. This can lead to defects and inconsistency across environments.

Power Platform Technical Approach

Approach

Azure DevOps and Power Platform Build Tools help automate manual tasks and minimize issues to make the team more efficient. Once there is automation, releases become more consistent and predictable, and requirements move more quickly between environments. This gives the team more confidence plus the ability to release more frequently.

Automate Build and Deploy Solutions
Microsoft Power Platform Build Tools tasks are used along with any other available Azure DevOps tasks to compose build and release pipelines. Pipelines that teams commonly put in place include Initiate, Export from Dev, Build, and Release.

  • In our application, we used three environments (dev, test, prod). We exported the solution from dev and source controlled to git.
  • Packed the solution from source control and deployed it to test.
  • Exported the managed solution from test and imported it into prod.

Automate Portal Deployment
Portal development involves several configurations and customizations to achieve the desired experience for portal end-users. After the development and configuration of a portal instance are complete, it is deployed in other environments like test and prod. Creating a manual backup of your portal configuration and importing it to different environments is a very time-consuming process. The portal deployment process can be easily automated by installing Power DevOps Tools, an additional toolset available in the Azure Marketplace.

Lessons Learned & Next Steps

  • With the help of Power Platform build tools, we can easily deploy the solution into various environments.
  • We can source control the Power Platform environment.
  • Increase the release frequency.

Thank you to the DevOps team for sharing their experience:

  • Vikram Reddy (team lead)

So you’ve written Infrastructure As Code, Automated All The Things, and drank deeply of the DevOps Kool-Aid on your current project – what’s next?

You’ve been reaping the rewards of DevOps at some scale, and your one-off DevOps effort has earned a lot of positive attention – now, management would like you to implement DevOps for all the projects!

How do you spread the DevOps wealth, and what do you need to be successful?

Delivering DevOps

What is DevOps?

For this article, we’ll use shorthand “DevOps” to mean the code bits – such as bash/posh/cmd for scripting, TF/CLI for IAC, and YAML for Azure DevOps CI/CD pipelines.

Start with identifying the valuable bits from your current DevOps efforts and use that as a basis for what you want to disseminate to all other projects.

Cross-Project DevOps

Not all projects will have the exact same DevOps requirements. Still, over time you will build up a collection of useful scripts and templates that are generic enough to provide value across all software projects in your organization.

But you can’t simply copy/paste these files into every repo, as that would be a massive headache to manage and keep updated. Instead, you’ll want to version and package these scripts so that every project that adopts the enterprise DevOps approach can track and plan for DevOps package updates.

Custom Package Repository

The easiest way to distribute file packages is through a custom package repository. Chances are your software project is already using at least Nuget, NPM, Maven, or PyPI. Azure DevOps can create all of these kinds of repositories, which is how you can seamlessly distribute your company-proprietary DevOps package without making it publicly available. These custom repositories are also handy as a local cache for public packages.

DevOps Integration

Usually, downloaded packages are not committed to the repo, only the reference to the package; the build/release agents then download them as needed. But DevOps packages should be committed to the repo for a variety of reasons. To do that, include an install script with your package, like this example for an NPM package:

#!/usr/bin/env node

const path = require('path');
const fse = require('fs-extra');
const rootFolder = path.resolve('../..'); // backwards from "./node_modules/<package>"
const installFolder = `${rootFolder}/.devops`;
const oldInstallFolder = `${rootFolder}/.devops.backup`;
const nodePkgFolder = path.resolve('.');
const srcFolder = `${nodePkgFolder}/src`;

let oldFolderRenamed = false;

// rename .devops to .devops.backup
if (fse.existsSync(installFolder)) {
    oldFolderRenamed = true;
    if (fse.existsSync(oldInstallFolder)) {
        console.log('removing last [.devops.backup] folder...');
        fse.removeSync(oldInstallFolder);
    }
    console.log('renaming [.devops] as [.devops.backup]...');
    fse.renameSync(installFolder, oldInstallFolder);
}

// copy package src folder to install folder
console.log('installing devops package...');
fse.copySync(srcFolder, installFolder);

// read the package version from package.json and save it to the install folder
const packageJson = fse.readFileSync(`${nodePkgFolder}/package.json`);
const pkg = JSON.parse(packageJson);
fse.writeFileSync(`${installFolder}/pkg_version.txt`, pkg.version);

if (oldFolderRenamed) {
    console.warn('Existing devops configuration has been backed up and replaced, please manually merge your configuration back into the new devops package!');
    console.log(`Please read ${nodePkgFolder}\\changelog.md`)
} else {
    console.log(`Devops package has been installed, please read ${nodePkgFolder}\\readme.md`)
}

This script copies the DevOps package from the node_modules (NPM package cache) directory into the project's root directory. If the DevOps package directory is already present, it renames the old directory and copies in the new one. It is then trivial to diff the old and new directories for changes and merge them.

For the NPM install, use the --no-save option; we are using NPM as a downloader/installer, so it doesn't need to save the reference in package.json.
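A hypothetical invocation (the package name and feed URL are placeholders for your own private feed):

# pull the DevOps package from the private feed without recording it in package.json
npm install @myorg/devops-package --no-save --registry https://pkgs.dev.azure.com/myorg/_packaging/devops/npm/registry/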

Package Versioning

DevOps package files should be tracked in a version control system like Git. Each feature should be developed in a separate branch and merged through a PR, and each PR should create a new version of the DevOps package. You can then DevOps your DevOps by setting up CI pipelines to automatically publish new package versions whenever the master branch changes.
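The publish step of such a CI pipeline can be as small as this sketch (the feed URL is a placeholder, and the patch-bump strategy is an assumption):

npm version patch -m "devops package %s"   # bump the version and tag the commit
npm publish --registry https://pkgs.dev.azure.com/myorg/_packaging/devops/npm/registry/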

Don’t forget to document! Include a readme for first-time consumers and a changelog for updates.

Adopt and Migrate Incrementally

Generally, DevOps development will follow this kind of flow:

  1. Implement a project-specific DevOps functionality
  2. Test, Fix and Refine
  3. Generalize and extract valuable bits of functionality
  4. Test, Fix and Refine
  5. Repeat

When integrating an enterprise DevOps package, a similar process can be followed:

  1. Migrate project-specific DevOps functionality to use the enterprise DevOps package
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Merge the generically useful bits into the enterprise DevOps package
  6. Repeat

Standardize and Prevent Breaking Changes

A critical benefit of the DevOps package approach is that it allows for the standardization of DevOps processes across the enterprise and provides a straightforward way to keep all projects in sync. A bugfix or new process can quickly be rolled out to all consumers of the package.

Standardization could also be accomplished using Azure DevOps Pipelines or Task Groups, but any change to those will affect all consumers immediately and can invisibly break things. But if the DevOps package is stored in the project repo, those projects are insulated from damaging changes and bugs.

Track Changes and Update Easily

Each project will likely have a separate set of custom configuration files to modify from the package baseline to customize the build and release pipelines. It is essential to separate these files and keep track of changes to them after a package update. Conversely, the other files in the DevOps package should not be modified to ensure a smooth update. Suppose one of the package files must be modified. In that case, it should be either A) temporary with the expectation of merging the changes up to the package or B) copied locally to the custom configuration directory so that it is evident that it must be inspected during the package update process.

Enterprise DevOps Principles

To sum everything up, there are several themes here that will ensure a successful enterprise DevOps approach:

  • Consistency – standardize DevOps processes across the enterprise
  • Modularity – design DevOps processes so that each component is focused and reusable – follow DRY and SRP guidelines
  • Resiliency/Stability – make DevOps processes resistant to unexpected changes
  • Traceability – easily understand changes to DevOps processes and merge quickly

Example

Consider this NPM package:

  • scripts/install.js – module install script
  • src/config – all project-specific customization and scripts
  • src/pipelines – enterprise-wide CI/CD pipelines
  • src/scripts – enterprise-wide bash/posh/cmd scripts
  • src/tf – enterprise-wide terraform templates
  • src/Update-DevopsPackage.ps1 – helper script that triggers the download of the latest package
  • changelog.md – description of differences between versions, upgrade notes
  • package.json – NPM publish metadata, contains version
  • readme.md – introduction to DevOps package, contains getting started instructions, directory, and file descriptions

And here’s what a consuming project may look like:

  • .devops/config – src/config directory from package
  • .devops/pipelines – src/pipelines directory from package
  • .devops/scripts – src/scripts directory from package
  • .devops/tf – src/tf directory from package
  • .devops/version.txt – a text file containing the package version (handy for tracking updates)
  • .devops/Update-DevopsPackage.ps1 – copied from src directory from package
  • src/ – project source code, etc.

From C# Developer to DevOps Engineer

Over the last couple of years, I've become a DevOps Engineer after having been primarily a C# developer. Instead of primarily C# and SQL, I was now working almost exclusively with JSON, YAML, and PowerShell. While I was very familiar with Visual Studio 2013/2015/2017 and its excellent support for the .NET work I did over the years, I found the experience for building DevOps solutions to be underwhelming. At the time, the IntelliSense for Azure Resource Manager (ARM) or Terraform templates, GitLab or Azure DevOps pipelines, and PowerShell was either non-existent or incomplete. In addition, Visual Studio was quite the resource hog when I didn't need all the extras it provides.

Enter Visual Studio (VS) Code

Now, I had downloaded VS Code soon after it was released with the intent to use it at some point, just to say I had. However, after seeing Visual Studio Code used in some ARM template videos where snippets were used, I decided to try it out. Like most Integrated Development Environments (IDEs), VS Code isn't truly ready to go right after installation. It's taken me some time to build up my configuration to where I am today, and I'm still learning about new features and extensions that can improve my productivity. I want to share some of my preferences.

I want to point out a couple of things. First, I’ve been working primarily with GitLab Enterprise, Azure DevOps Services, and the Azure US Government Cloud. Some of these extensions are purely focused on those platforms. Second, I use the Visual Studio Code – Insiders release rather than the regular Visual Studio Code version. I have both installed, but I like having the newest stuff as soon as I can. For this post, that shouldn’t be an issue.

Theming

As long as there’s a decent dark color theme, I’m content. The bright/light themes give me headaches over time. VS Code’s default dark theme, Dark+, fits the bill for me.

One of the things I didn't know I needed until I stumbled across it was icon themes. I used to have the standard, generic folder and file icons, the Minimal theme in VS Code. That made it difficult to differentiate between PowerShell scripts, ARM templates, and other file types at a glance. There are a few included themes, but I'm using the VSCode Icons Theme. It's one of the better options, but I'm contemplating making a custom one, as this one doesn't have an icon for Terraform variables files (.tfvars), and I'd like a different icon for YAML files. If the included themes aren't suitable for you, there are several options for both file icon themes and product icon themes in the marketplace.

Figure 1 – VS Code’s Minimal icon theme

Workspaces

Workspaces are, as the VS Code documentation puts it, a "collection of one or more folders [that] are opened in a VS Code window." A workspace file is created that contains the list of folders and any settings for VS Code and extensions. I've only recently started using workspaces because I wanted to have settings configured for different projects.

Extensions in Visual Studio Code provide enhancements to improve productivity. Extensions include code snippets, new language support, debuggers, formatters, and more. I have nearly 60 installed (this includes several Microsoft pre-installs). We will focus on a handful that I rely on regularly.

Workspace Code Configuration
Figure 2 – VS Code Workspace configuration. Also shows the choice of Azure Cloud referenced in the Azure Account extension section below.

Azure Account

The Azure Account extension provides login support for the other Azure extensions. By itself, it's not flashy, but a few dozen other Azure extensions can use the logged-on account from it to reference the Azure resources they target. This extension has a setting, Azure Cloud, that was the main reason I started adopting workspaces. The default is the commercial version, AzureCloud. I've changed it at the user level to AzureUSGovernment, but some of my recent projects use AzureCloud, so I've set the workspace setting for those.

Azure Resource Manager (ARM) Tools

This extension will make your ARM template tasks much more manageable! It provides an extensive collection of code snippets to scaffold out many different Azure resources. Schema support provides template autocompletion and linting-like validation. A template navigation pane makes finding resources in a larger template easy. There is also support for parameter files, linked templates, and more.

HashiCorp Terraform

Terraform is an offering of HashiCorp. They've provided an extension that supports Terraform files (.tf and .tfvars), including syntax highlighting. While only a few snippets are included, the autocompletion when defining new blocks (e.g., resource, data) is quite extensive.

Terraform
Figure 3 – Terraform autocompletion

GitLens – Git Supercharged

GitLens is full of features that make tracking changes in code easily accessible. I installed this extension for the "Current Line Blame" feature that shows who changed the current line last, when they changed it, and more. In addition, there are sidebar views for branches, remotes, commits, and file history that I use regularly. There are several other features that I either don't use or wasn't even aware of until writing this post; all in all, this is an excellent tool for Git repo users.
GitLens Line Blame

MSBuild Project Tools

I had a recent project that contained a relatively large MSBuild deployment package that needed to be updated to work with the changes made to migrate the application to Azure. I haven’t worked with MSBuild in several years. When I did, I didn’t have all the syntax and keywords committed to memory. This extension provides some essential support, including element completion and syntax highlighting. It did make the project a little easier to modify.

PowerShell Preview

I’ve become a bit of a PowerShell fan. I had been introduced to it when I was working with SharePoint, but since I’ve been doing DevOps work in conjunction with Azure, I’ve started enjoying writing scripts. The less-than-ideal support for PowerShell (at the time, at least) in Visual Studio 20xx was the main reason I gave VS Code a shot. This extension (or the stable PowerShell extension) provides the excellent IntelliSense, code snippets, and syntax highlighting you’d expect. However, it also has “Go to Definition” and “Find References” features that I relied on when writing C#. In addition, it incorporates linting/code analysis with PowerShell Script Analyzer, which helps you develop clean code that follows best practices.

PowerShell Preview

Powershell (stable)

Wrapping Up

I have far more than these extensions installed, but these are the ones I use the most when doing DevOps work. Some of the others either haven’t been used enough yet, aren’t helpful for a DevOps Engineer, or weren’t interesting enough to list for the sake of brevity.

However, I’ve created a Gist on my GitHub that contains the complete list of extensions I have installed if that’s of interest. Visual Studio Code is an amazing tool that, along with the proper configuration and extensions, has increased my productivity as a DevOps Engineer.

DevOps implements a Continuous Integration/Continuous Delivery (CI/CD) process. When multiple team members work in the same codebase, anyone's update could break the integrated code. So, Continuous Integration triggers a build pipeline whenever a code update is pushed. The build pipeline will fail if the newly updated code is incompatible with the existing codebase or if there are any conflicts. The codebase might work well in a single developer's environment but fail in a build pipeline, where all configurations and dependencies are expected to be in place. Continuous Delivery speeds up the deployment process. The release pipeline helps deploy the same codebase to multiple environments based on configuration. This allows code to be deployed in all environments without many manual changes.

Having an approval process enables peer code reviews and helps identify potential issues and security flaws ahead of time. Current production applications are very distributed and complex. Whether it is an on-premises or cloud-based solution, missing a dependency or proper configuration could introduce significant risk in deployments. DevOps helps maintain the same codebase for repeatable deployment in many environments with just configuration changes. DevOps avoids manually building deployment packages and handing them over to an operations team that has no insight into what is being deployed. If an error occurs during or after deployment, the development team has to jump in at that point, which is time-consuming. This costs production time and ends up with some unhappy customers as well!

Picture credit: DoD DevOps

Popular DevOps Tools

Follow here to learn more about DevOps practices from other AIS bloggers!

Why not just “DevOps”?

DevOps is fundamental for any organization's build and deployment process with seamless CI/CD integration. So what is 'DevSecOps', and why is 'Sec' added between Dev and Ops? The 'Sec' in DevSecOps is 'Security.' Though it sits in the middle, security implementation should start in development and continue through operations. Because development and deployment packages pull in many dependencies, both internal and external, they can introduce vulnerabilities that cause severe issues in production if not identified earlier in the build pipeline. Code scans help identify possible weaknesses in code implementations. But cybersecurity-related vulnerabilities require specific tools at different stages of the pipeline to be identified as early as possible. Adding security scanning earlier in the pipeline and automating it are essential for DevSecOps.

DevSecOps Software Lifecycle

Picture Credit: DoD DevSecOps

DevSecOps is not a tool or a pattern but a practice, and it can be enhanced by adding appropriate tools. It is a process of securing the build and deployment by using several security tools and shifting security to the left. These security tools help identify vulnerabilities that the code could have introduced, recommend possible solutions to fix those issues, and, in some instances, mitigate some of those issues automatically. The idea is to use the 'fail fast' method to identify vulnerabilities earlier in the build pipeline. As more applications move into the cloud, it is highly imperative to use Cloud Native Computing Foundation (CNCF) certified tools and implement security baselines such as the CIS benchmarks. DevSecOps avoids manual changes once the code is in the pipeline, built, and deployed. The codebase is a single source of truth and should not be manipulated at any point.

Adding scanning tools for security and vulnerabilities helps mitigate any flaws introduced in code and operations. Many open-source tools provide these functionalities. Enabling logging, continuous monitoring, alerting processes, and any self-fixing for faster remediation is key for ongoing business operations. Containerizing with hardened container images from DoD Iron Bank helps protect application container images, and hardened images can be kept up to date from reliable providers. Containers provide cloud-agnostic solutions with no vendor lock-in.

All the security tools in the DevSecOps pipeline must be deployed and running for pipeline scanning in the customer environment. A request is sent to those security tools from the pipeline code via an API call or by triggering command-line interface (CLI) commands. The tools then respond with their findings and statistics and provide pass/fail criteria. If a tool identifies any vulnerability in the scan, the pipeline fails.

Deploying the security tools as SaaS services will require permission from the security team; not all are approved to run in highly secured cloud environments. Those tools all need an Authority to Operate (ATO) before they can be deployed and configured. Getting hardened container images for those tools is a safer and more secure approach to deploying them in the cloud. Because the containers are already hardened, meaning scanned, secured, and ready to go with all dependencies, they provide continuous ATO. The hardened container images can be downloaded from DoD Iron Bank, and almost all tool providers provide container images. Many providers offer different downloads, either as a software package or as a container image. When downloading as a software package, additional work is needed to ensure all the dependencies are appropriately configured or already exist. Downloading hardened container images, in contrast, brings the dependencies along, and they come pre-scanned. The tools can be deployed into Kubernetes in your cloud environment to provide scalable functionality.

Below is a sample DevSecOps pipeline implementation with recommended security tools, as depicted in the picture below:

  • Source code pull request is approved by reviewers
  • The build pipeline kicks off and code scan is run after a successful initial build
    • If any code vulnerabilities are identified, then the pipeline fails
  • Build pipeline continues with DAST and PEN testing
    • If any vulnerabilities are identified, then the pipeline fails
  • Build artifacts are added to a private repository either as packages or container images
    • A repository scan is performed using repository scanning tools, and vulnerabilities are reported
  • Release pipeline picks up artifacts from private repositories and deploys to Azure (or cloud of your choice)
    • Kubernetes is a highly recommended deployment for orchestration, but deployment can be an application of your choice such as Function App, App Service, Azure Container Instances, etc.
  • Security is applied throughout the pipeline process and continues once the application is deployed. Both native security tools, such as Azure Monitor, Azure Security Center, and Azure Policies, and third-party tools, such as Twistlock and Qualys, can be used to monitor the health of your production environment.

DevSecOps Diagram

Let’s look at a few of the recommended tools to support the security validations in the DevSecOps process.

Build tools/CLI

A developer can write code in their favorite editor, such as Visual Studio or VS Code, and run it to test their applications. The code editor also generates debug/release packages, producing binaries using the build tool that comes with the editor. The application works seamlessly in the developer environment because the dependencies and correct configurations exist there. For the build to work in the pipeline, the build tool must be available to build the code. The build tool varies based on the code language, and it must be available in the pipeline.

Some of the build tools are:

  • DotNet Build
  • MSBuild
  • Maven
  • Gradle

Static Application Security Testing (SAST)

A code scan is one of the essential steps in securing the codebase. Automated testing helps identify failures, but these specific code scan tools help identify security flaws and vulnerabilities. The application does not need to be running for code scan tools, as they scan only the codebase and not any dependencies.

Some of the Code scanning tools are:

  • SonarQube
  • Fortify
  • Anchore
  • JFrog Xray
  • OpenSCAP
  • HBSS
  • OWASP dependency check
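To illustrate how one of these scanners is typically wired into the build stage, here is a rough SonarQube sketch; the project key, server URL, and token are placeholders, and the other tools each have their own CLIs:

# run static analysis and publish the results to the SonarQube server
sonar-scanner \
  -Dsonar.projectKey=my-app \
  -Dsonar.sources=. \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.login=$SONAR_TOKEN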

Dynamic Application Security Testing (DAST)

DAST scans the application while it is running, or a container image that is hosted in a private repository. Scanning containers before deploying them helps resolve many security vulnerabilities.

Some of the DAST scanning tools are:

Penetration (Pen) Testing

Pen testing provides a web application scanner to help find security vulnerabilities. Read here to learn about the "Top 10 Web Application Security Risks."

PEN testing tools:

  • OWASP ZAP
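As a sketch of how a ZAP baseline scan might be triggered from the pipeline (the container image tag and target URL are placeholders and may differ in your setup):

# run a passive baseline scan against the deployed test environment and write an HTML report
docker run -t owasp/zap2docker-stable zap-baseline.py \
  -t https://myapp-test.example.com \
  -r zap-baseline-report.html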

Deploy Code & IaC (Infrastructure as Code)

IaC is paramount in DevOps to avoid any manual work in customer environments and help with immutable infrastructure.

Popular IaC tools are:

  • Azure ARM Templates
  • Terraform
  • HELM

Private Repositories

In DevSecOps, a private repository is recommended to host the build dependencies, reference container images, container images for tools, and the built packages or application container images. This is to keep all the artifacts together in one centralized location, and the release pipeline can continue with deployments from there.

Some of the private repositories are:

  • JFrog
  • Docker Hub
  • Azure Container Registry (ACR)

Private Repository Scanning

As the pipeline requires security scanning, the repositories require scanning as well. These tools scan for vulnerabilities in all packages and container artifacts stored in the repository, and a scan report is sent or a notification raised for any issues.

Some artifact scanning tools are:

  • XRay
  • SonaType
  • Azure Monitor
  • Azure Security Center

Deploy

Just as we recommend deploying the security tools with container orchestration, the same recommendation applies to the deployed applications. Containers provide strong security with limited ways to be affected by attackers. Sidecar containers protect applications by continually monitoring them with a built-in container security stack. Applications are scalable on demand using Kubernetes, and tools such as kubectl and HELM packages are used to deploy and manage K8s clusters. ArgoCD is a declarative tool specifically for Kubernetes deployment in a CI/CD pipeline.

Deployments to Azure could be:

  • Azure function app
  • Azure App Service
  • Azure Container Instance
  • Azure Kubernetes Service (AKS)
  • Open Shift in Azure

Monitoring/Alerting

As applications are deployed and running in a cloud environment, they must be continuously monitored for attacks to identify any security vulnerabilities. For containers, these tools act as sidecar containers that constantly protect the main containers from attacks, and some can mitigate issues automatically. All of these tools have built-in alerting to notify the operations team for immediate action.

Monitoring/alerting tools:

  • Azure Monitor
  • Azure Security Center
  • Twistlock
  • Qualys
  • Aqua Security

So, you are all powered up with DevSecOps learning! Follow up back here for the next blog post on container-based deployments and container scanning in the DevSecOps pipeline!

References for continuing your DevSecOps Journey

Agility and Discipline are Both Required as Organizations Adopt the Cloud

At the beginning of cloud adoption, experimentation is common and exploration of applicable technologies is welcomed. Development teams are excited to try the newest service offerings, and proof of concept efforts are completed with great interest. However, the thrill of discovery often turns to frustration as configuration complexity increases and cost control challenges mount. The initial exploratory proofs of concept often do not consider where costs are coming from or what resources are assigned to different teams.

While cloud providers make it easy to provision new resources, considerable forethought is necessary to come up with disciplined procedures to measure costs and control access. Security and management raise valid concerns regarding proper security controls. Infrastructure as Code (IaC) concepts will allow for exploration and development agility while including the proper management review, thoughtful allocation of resources, and cost control initiatives. In this post, we will introduce IaC concepts and tools that can be used to bring agile interaction to your cloud adoption experience.

What is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) refers to the tools and processes used to track and manage infrastructure resources as code. This code is commonly stored in a version control system, and the tools are most effective when applied to virtual environments. One such example is the Microsoft Azure public cloud, but IaC can be equally effective in private cloud environments. The combination of version control with executable infrastructure definitions provides a complete change history and ensures future repeatability.
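As a concrete taste of the idea, here is a minimal Terraform-style workflow, assuming the infrastructure definitions live as .tf files in a Git repository (the branch name and resource contents are placeholders):

git checkout -b add-storage-account    # change infrastructure the same way you change code
terraform init                         # download providers and modules
terraform plan -out=tfplan             # produce a reviewable change delta
terraform apply tfplan                 # execute exactly the reviewed plan
git commit -am "Add storage account" && git push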

Historical Configuration Management

IT administration teams have historically consisted of a handful of individuals tasked with managing a large footprint of physical and virtual machines. With the popularization of virtualization technologies came the need to effectively maintain configuration and security baselines. Rudimentary automation tools such as parallel SSH gave way to more powerful configuration management tools such as Puppet (2005) and Chef (2009). These tools allowed a single IT administrator to establish and apply configuration templates across thousands of machines, and address divergence from configuration when needed.

With the maturation of the configuration management tooling, different paradigms began to emerge. Puppet and Chef both operated with a pull model, utilizing a machine agent responsible for retrieving configuration from the central control server. Newer entrants to the Infrastructure as Code space such as Ansible (2012) and Terraform (2014) operated with a push model over established protocols (such as SSH) for remote access. Configuration changes also saw two very different approaches to how they were defined between Infrastructure as Code tools. The imperative model relied on the developer to define specific change operations, while the declarative model favored the definition of the desired end-state. In the declarative model, the IaC tool itself was responsible for identifying and applying changes. Each model comes with unique benefits and challenges, but regardless of the model chosen, the application of IaC tools has become a huge force multiplier for the IT team.

DevOps

In the last few years, the term DevOps has emerged out of the convergence of development and operational teams adopting shared tooling. This new engineering specialization brings with it a mindset founded in the security and configuration management needs of the IT administrator while enhancing it with version control and module abstraction common in the software development space. Probably the biggest change brought forth by DevOps was the adoption of container technologies, largely popularized by Docker. Containers greatly hastened the adoption of immutable infrastructure. The underlying machines became less important themselves, instead, acting as a vehicle to deploy container workloads.

Immutability in Infrastructure

Infrastructure immutability refers to the static nature of the machines themselves and is most easily achieved in virtualized environments such as Microsoft Azure. The value of immutability is that infrastructure configuration can be reasoned about from a single source of truth: the code that resides in the version control system. Configuration drift in long-lived infrastructure is a monumental challenge that Puppet and Chef have worked to solve, but the problem can be eliminated entirely by treating infrastructure as replaceable. When changes are required, we simply replace old infrastructure with new.

Adopting immutability in infrastructure configuration is not an overnight process. There are considerable challenges, especially when dealing with stateful systems such as databases or file storage systems. While these challenges are real, the advantages are substantial:

  • Deterministic infrastructure
  • Version controlled source of truth
  • Infrastructure automation via CI/CD systems
  • Audit change deltas prior to execution (see the sketch after this list)
  • Peer Review based on code change and change deltas
  • Gated approvals from security authorities
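As a minimal sketch of how these advantages combine in practice (assuming a hypothetical repository of Terraform definitions and hypothetical branch and resource names), an infrastructure change travels through the same review workflow as application code:

  # Propose the change on a branch of the version-controlled source of truth.
  git checkout -b scale-app-plan
  # ...edit the Terraform definitions...
  git commit -am "Scale the app service plan to three instances"
  git push -u origin scale-app-plan

  # In CI, render the change delta so reviewers and approvers can audit it.
  terraform init
  terraform plan -out=tfplan    # the delta that is peer reviewed before anything changes
  terraform apply tfplan        # runs only after gated approval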

Adopting IaC Within Your Organization

Adopting Infrastructure as Code in your organization must be approached carefully, and you may be met with resistance from security and IT staff. Successful adoption requires your stakeholders to understand the benefits of the IaC approach.

Greenfield cloud projects are one of the best ways to introduce these concepts without having to tackle the challenging task of codifying existing systems. This approach has the advantage of a reduced scope while learning the tools, and it allows exploration without the danger of impacting operational systems. If a greenfield project is not possible, a short-lived proof-of-concept effort can serve the same purpose. In the early stages of a project, attention to the following guidelines will help set you up for future success:

  • Security first
  • Avoid premature optimization
  • Favor managed (PaaS/SaaS) vs unmanaged (IaaS)
  • Tag resources aggressively

Security First

Security first is one of the most challenging things to get right in any project, and cloud efforts are especially challenging because the exposed surface area is much larger. Object storage APIs, for example, can expose sensitive data to the world if they are not properly locked down, and every year brings known breaches in which misconfiguration caused huge leaks of intellectual property and personally identifiable information. Infrastructure as Code is uniquely suited to minimize the pain associated with good security practices. Least privilege is a key tenet of security, and Infrastructure as Code can be used to define access policies that grant access only to the resources that need it. This is made simple because resource identifiers can be injected dynamically into policies, since the Infrastructure as Code is responsible for managing both the resources and the policies.
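As a minimal sketch (with hypothetical resource names and a placeholder principal ID), a least-privilege assignment can be scoped to the single resource whose identifier the IaC tooling already knows:

  # Look up the identifier of the one resource the application needs.
  STORAGE_ID=$(az storage account show --name demostorage01 \
    --resource-group demo-rg --query id -o tsv)

  # Grant the application's identity read access to that resource only,
  # rather than a broad subscription-wide role.
  az role assignment create \
    --assignee "<app-principal-id>" \
    --role "Storage Blob Data Reader" \
    --scope "$STORAGE_ID"

In a full IaC definition, the lookup and the assignment live in the same code, so the scope is always the exact resource being managed.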

Avoid Premature Optimization

Premature optimization of systems or code is a trap that is easy to fall into. Infrastructure as Code offers a wealth of opportunities to create reusable modules for resources. These modules make it easy to share proven code between environments, but if they are introduced too early they can greatly slow down team velocity. It is generally best to iterate on your system until you reach a secure, stable MVP with a well-established architecture before beginning the abstraction into discrete modules.

Favor Managed PaaS/SaaS

Deciding between Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS) is straightforward early in a project. Managed offerings, such as Azure's database services, are almost always the right choice over a self-managed database on IaaS. PaaS and SaaS eliminate the busywork of security patching and backup management, allowing the team to focus on using the software to solve its unique business challenges. Time is our most valuable resource, and while IaaS has its place, it incurs the highest time expenditure to manage properly.
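For example, a managed database can be stood up in a couple of commands (hypothetical names and a placeholder password), with patching, backups, and failover handled by the platform rather than by your team:

  # Create a logical SQL server and a managed database; the platform
  # handles patching, backups, and high availability.
  az sql server create --name demo-sql-server --resource-group demo-rg \
    --location eastus --admin-user demoadmin --admin-password '<strong-password>'
  az sql db create --resource-group demo-rg --server demo-sql-server \
    --name demo-db --service-objective S0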

Tag Resources Aggressively

Resource tagging is fundamental to cost visibility in cloud environments. It is difficult to devise a complete resource tagging strategy, but at a bare minimum, all resources should include a project tag. Even if your project is the only one present in your cloud account, having your Infrastructure as Code set tags on every resource will make it easy to adopt uniform tagging in the future. Tags are also useful for differentiating development, test, staging, and production environments when they reside in a shared account.
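As a small sketch with hypothetical tag values, tags can be applied at creation time and queried later; cost reports can then be filtered on the same keys:

  # Apply project, environment, and owner tags when the resource group is created.
  az group create --name demo-rg --location eastus \
    --tags project=reporting environment=dev owner=data-team

  # Tags can be inspected later and used to slice cost analysis reports.
  az group show --name demo-rg --query tags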

Summary

This is just the first post in a blog series on DevOps and Infrastructure as Code. Understanding the concepts outlined here is the foundation of successful cloud adoption. In future posts, we will apply these principles with specific examples that can serve as templates in your own DevOps journey. In the meantime, the resources below expand on the topics discussed in this post:

PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it is not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the CNCF cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct – Rather than speaking in terms of VMs, storage, firewall rules, etc., cloud-native works in application-specific constructs, whether that is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability – Applications can run on any CNCF-certified cloud, as well as on-premises and on edge devices, against exactly the same API surface.
  3. Cost efficiency – By densely packing application components (or containers) onto the underlying cluster, running an application becomes significantly more cost-efficient.
  4. Extensibility model – A standards-based extensibility model lets you tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and the Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic – Cloud-native architecture can support a wide variety of languages and frameworks, including .NET, Java, Node.js, and more.
  6. Scale your ops teams – Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure. This allows your ops team to scale much more efficiently.
  7. Consistent and "decoupled" – In addition to greater consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments, and they also help decouple developers from the underlying cluster, kernel, and hardware layers shown in the decoupling diagram below.
  8. Declarative model – Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing: any deviation from the "desired state" is automatically "healed" by the underlying system (see the sketch after this list). Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum – As stated earlier, the community momentum behind CNCF is unprecedented. Kubernetes is the #1 open-source project in terms of contributions. In addition to Kubernetes and Prometheus, there are close to 500 projects that have collectively attracted over $5B of venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production had gone up by 200% since December 2017.
  10. Ticket to DevOps 2.0 – Cloud-native brings the well-recognized benefits of what is being termed "DevOps 2.0": hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.
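As a minimal sketch of the declarative, self-healing model (assuming a hypothetical deployment.yaml that declares a small Deployment), the operator states the desired state and the platform converges to it:

  # Declare the desired state and let the controllers converge to it.
  kubectl apply -f deployment.yaml
  kubectl get deployments

  # If a pod is deleted (or a node is lost), the declared state is restored automatically.
  kubectl delete pod <pod-name>
  kubectl get pods    # a replacement pod appears without any imperative script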

Now that we understand the key benefits of cloud-native technologies, let us compare them to a traditional PaaS offering:

Attribute | Traditional PaaS | Cloud-Native as a Service
Portability | Limited | Advanced
Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs, including Helm charts and network and security policies
Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention)
Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (cloud-native applications can, however, be integrated with PaaS database offerings through the service catalog)
Extensibility | Limited | Advanced (extensibility includes the Container Network Interface and Container Runtime Interface)

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of CNCF, having joined back in 2017 as a platinum member. Since then, Microsoft has made significant investments in a CNCF-conformant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering; think of AKS as a PaaS solution that is also CNCF conformant.

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, and Azure Files. Offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience when working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.
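As a minimal sketch (hypothetical names, and assuming a demo-rg resource group already exists), creating a managed cluster and connecting to it with standard cloud-native tooling takes only a few commands:

  # Create a managed, CNCF-conformant Kubernetes cluster.
  az aks create --resource-group demo-rg --name demo-aks \
    --node-count 2 --generate-ssh-keys

  # Merge credentials into the local kubeconfig and verify access with kubectl.
  az aks get-credentials --resource-group demo-rg --name demo-aks
  kubectl get nodes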

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also growing interest in managing IoT Edge deployments right from within the Kubernetes environment. Finally, "lift and shift to containers" is getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, "A 'Modernize-by-Shifting' App Modernization Approach," for more details!

Cloud-Native Scenarios


A big announcement from Microsoft this month: The introduction of Azure DevOps, the most complete offering of proven, modern DevOps tools and processes available in the public cloud. Used together, the Azure DevOps services span the entire breadth of the development lifecycle so enterprises can modernize apps in a faster and more streamlined way.

What Is DevOps, Anyway?

DevOps solutions bring together people, processes, and technology, automating and streamlining software delivery to provide continuous value to your users.

What is DevOps?

If you want your next development or app modernization project to be a success, DevOps is the way to go.

High-performance DevOps enterprises achieve increased revenue with a faster time to market and produce solutions that are more powerful, flexible, and open. (Yes, Microsoft has been partnering with the open-source community to ship products that work for everyone.) New features can be safely deployed to users as soon as they’re ready vs. bundling them together in one large update down the road.

New Services & Tools in Azure DevOps

  • Azure Pipelines – Continuously build, test, and deploy with any language, to any platform or cloud. Azure Pipelines offers unlimited build minutes and 10 free parallel jobs for all open-source projects!
  • Azure Boards – Plan, track and discuss your work and ideas across teams with proven agile tools.
  • Azure Artifacts – Create, host, and share Maven, npm, and NuGet packages, and add them to your CI/CD pipelines with a click.
  • Azure Repos – Unlimited private, cloud-hosted Git repos let team members build and collaborate better (see the example after this list).
  • Azure Test Plans – These manual and exploratory testing tools will allow you to test and ship with ease and confidence.
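For example (with a hypothetical organization, project, and repository name), an existing local Git repository can be pointed at an Azure Repos remote and pushed like any other remote:

  # Point the local repository at a (hypothetical) Azure Repos remote and push.
  git remote add origin https://dev.azure.com/contoso/demo-project/_git/demo-repo
  git push -u origin main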

Azure DevOps is the evolution of Visual Studio Team Services (VSTS); existing VSTS users will be upgraded automatically without losing any functionality.

With the services provided in Azure DevOps, you can choose the tools and cloud services you want to use and build end-to-end solutions for an enterprise-level toolchain. As long-time believers in both Azure and DevOps, we are really excited about this offering and what it can mean for our clients.
