Our team enters the fiscal year 2022 with excitement, intention, and confidence. In 2022, AIS turns 40. We’ve thrived through four decades of IT consulting and continue to achieve record growth numbers while keeping pace with technology advancements.

As we enter another year alongside our partners and teammates at Microsoft, we reflect on and take great pride in our accomplishments and the innovative, dedicated people that help drive them forward.

It is also the time of year we look ahead to what’s next and sharpen our strategy for the next fiscal year. But, even as new technologies and tactics emerge, for AIS, a few things remain constant:

  • Our Focus on Client Success: First and foremost is the success of the organizations we serve. The impact we have on our clients, their employees, and their customers is what keeps us motivated. This dedication to client success is core to the solutions we build and the problems we solve. Speed and scalability are consistent outcomes of our digital transformation strategies.
  • Our Investments in Microsoft: Microsoft technology is in our DNA. As a partner since 1994, we invest in Microsoft’s vision by training our people, working with product engineering, and developing intellectual property (IP) to meet the needs of customers.
  • Our People’s Passion: Our employees are passionate about cloud transformation. Passion is a key contributor to the company’s success and plays a crucial role in gathering input and analysis from a diverse set of disciplines. Just check out our blog to see the wide range of thought leadership from AIS employees.
  • Our Focus on Regulated Industry Expertise: AIS has led many Microsoft firsts in Government, Financial Services, and other regulated industries. We are focused on accelerating our clients’ journey to the cloud to meet mission needs and business goals by addressing roadblocks, such as security and compliance challenges, and leveraging automation and IP for speed and scale.

Looking Ahead

We are just as excited about what is on the horizon. Microsoft closed out fiscal year 2021 with several announcements, such as:

  • Increasing cross-platform synergy for apps and data (like Power BI and Azure Synapse)
  • Significant investments in business applications
  • Emphasis on security
  • Continued focus on industry and customer success

It’s an exciting time to be a Microsoft partner! The potential of the collective Microsoft cloud ecosystem, and of the technology and services partners that extend these capabilities, is enormous.


Vishwas Lele AIS

“We strive to be the most trusted Microsoft partner for technically challenging, enterprise-level cloud adoption, app modernization, and data work,” said Vishwas Lele, EVP and CTO at AIS, Microsoft MVP, and Regional Director. “Our team continues to lead first-of-their-kind design and build solutions on Azure, Power Platform, and M365 in regulated industries. We earned the 2020 Worldwide Partner of the Year Award. We’re seeing amazing convergence across the various solution areas and look forward to driving these new capabilities into the mission space for each of our clients.”


Our Methodology

Outcomes are at the center of our approach. Whether it’s delivering 100% uptime and better performance for a mission-critical defense application or using automation to reduce manual processes and advance a business outcome for a Fortune 500 client, we always work from the outcome first.

Delivering results-based solutions means having an arsenal of skillsets that allow technology to meet the needs of users. It takes all kinds of talent, and AIS provides a range of disciplines (e.g., .NET, SharePoint, Power Platform, Data, UX, Knowledge Management, Change Management, Web Dev, and more).

We believe in multiple “deployment” models for modern, scalable, and intelligent app and data projects. This includes composite applications leveraging Microsoft 365, Dynamics, and Power Platform, with Azure at the tip of the spear.

Fostering a community that cuts across diverse backgrounds, cultures, experiences, and skills is a significant core value at AIS. The challenges of remote work won’t stop us from getting together, sharing best practices, and staying connected as a community! The growing demand for low code from both new and longtime SharePoint, Dynamics, and Azure customers is exciting to see. Early indicators of the swift transformation these capabilities bring to organizations are gaining attention from business leaders and promoting valuable diversity of thought and solution design.

The AIS core values are rooted in education and knowledge sharing. We invest in our employees’ education with aisUniversity, boot camps, morning learning series, lunch and learns, self-directed educational materials, certification bonuses, and more. These learning opportunities give individuals across projects and communities of interest access to new skills and approaches.

We put humans at the focal point of technology. Engagement from all corners of the business world is a sure indicator of what lies ahead with advancements in low code solutions. We’re helping business users, most of whom have no previous technical background, quickly embrace new technology and scale their solutions, so we can focus on solving their most challenging technology problems.

Our Investment in People and IP

AIS CDF

An investment in people, process, and tooling is an investment in the success of our employees and the organizations they support. Last year, we built the AIS Cloud Delivery Framework (CDF). The CDF provides opinionated guidance and resources which help our clients modernize in the cloud. The CDF combines our experience executing enterprise cloud projects with the techniques presented in Microsoft’s Cloud Adoption Framework (CAF).

The CDF includes backlogs, deliverable samples, starter files, project experience write-ups, and much more. On top of this foundation, our teams continue to develop resources for business applications, cloud security/compliance, governance, data & AI, and many more.

These innovations would not be possible without the support of the AIS Cloud Acceleration Hub (HUB). The HUB is a dedicated team of AIS consultants organized to help our project teams deliver successful cloud solutions. The HUB team consolidates knowledge and experience to provide rapid research and guidance services for AIS delivery teams at no additional cost to our customers.

Critical to the AIS growth plan is our culture. We’ve added an essential role to our leadership team, VP of Culture and Employee Experience. We are proactively hiring and developing skills around emerging areas of our business (e.g., data intelligence, business applications, RPA, user experience, and knowledge management). We’re heading into the next fiscal year with a vibrant (and growing) team, and we couldn’t be more excited about what comes next.

WE'RE GROWING! WON'T YOU JOIN US?
Check out our career openings at ais.com/careers and apply today.

Partnering with AIS

Microsoft Gold Partner

AIS prides itself on being one of the few Microsoft partners that consistently delivers challenging, enterprise-level work across all Microsoft cloud platforms (Azure, Microsoft 365, Dynamics, and Power Platform). We support the entire Microsoft cloud ecosystem.  We deliver foundational landing zones, security & governance, complex app development and modernization, and large-scale data and AI projects.

With AIS, you get consultants who offer the best possible solution and advance your highest-priority goals and objectives.

We bring 13+ years of enterprise experience on Azure, global leadership in Power Platform, and unmatched cloud adoption, security, and governance experience in highly regulated industries.

Along with this experience, our customers also get the full backing of Microsoft. Through our 26+ year partnership, AIS and Microsoft have established close relationships across product engineering, account teams, and executive leadership. This partnership offers our joint customers access to all the resources they need for digital transformation.

DRIVE YOUR MISSION FORWARD! PARTNER WITH AIS.
Work with a team of Microsoft technology leaders to enable your teams and partner with AIS today.

So you’ve written Infrastructure As Code, Automated All The Things, and drank deeply of the DevOps kool-aid on your current project – what’s next?

You’ve been reaping the rewards of DevOps on a small scale, and your one-off DevOps effort has earned a lot of positive attention – now management would like you to implement DevOps for all the projects! So how do you spread the DevOps wealth, and what do you need to be aware of?

Delivering DevOps

What is DevOps

For this article, we’ll use “DevOps” as shorthand for the code bits – such as bash/posh/cmd for scripting, TF/CLI for IaC, and YAML for Azure DevOps CI/CD pipelines. Start by identifying the valuable bits from your current DevOps efforts and use them as a basis for what you want to disseminate to all other projects.

Cross-Project DevOps

Not all projects will have the exact same DevOps requirements. Still, over time you will build up a collection of useful scripts and templates that are generic enough to provide value across all software projects in your organization.

But you can’t simply copy/paste these files into every repo, as that would be a massive headache to manage and keep updated. Instead, you’ll want to version and package these scripts so that every project that adopts the enterprise DevOps approach can track and plan for DevOps package updates.

Custom Package Repository

The easiest way to distribute file packages is through a custom package repository. Chances are your software project is already using at least one of NuGet, NPM, Maven, or PyPI. Azure DevOps can host all of these kinds of repositories, which is how you can seamlessly distribute your company-proprietary DevOps package without making it publicly available. These custom repositories are also handy as a local cache for public packages.
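
If your feed lives in Azure Artifacts, pointing a project at it can be as simple as a project-level .npmrc. Here is a minimal sketch; the organization and feed names are hypothetical:

# Minimal sketch: point npm at a private Azure Artifacts feed.
# 'contoso' and 'devops-packages' are placeholder org/feed names.
Set-Content -Path .npmrc -Value @(
    'registry=https://pkgs.dev.azure.com/contoso/_packaging/devops-packages/npm/registry/',
    'always-auth=true'
)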

DevOps Integration

Usually, downloaded packages are not committed to the repo, only the reference to the package; the build/release agents download them as needed. But DevOps packages should be committed to the repo for a variety of reasons. To do that, include an install script with your package, like this example for an NPM package:

#!/usr/bin/env node

const path = require('path');
const fse = require('fs-extra');
const rootFolder = path.resolve('../..'); // backwards from "./node_modules/<package>"
const installFolder = `${rootFolder}/.devops`;
const oldInstallFolder = `${rootFolder}/.devops.backup`;
const nodePkgFolder = path.resolve('.');
const srcFolder = `${nodePkgFolder}/src`;

let oldFolderRenamed = false;

// rename .devops to .devops.backup
if (fse.existsSync(installFolder)) {
    oldFolderRenamed = true;
    if (fse.existsSync(oldInstallFolder)) {
        console.log('removing last [.devops.backup] folder...');
        fse.removeSync(oldInstallFolder);
    }
    console.log('renaming [.devops] as [.devops.backup]...');
    fse.renameSync(installFolder, oldInstallFolder);
}

// copy package src folder to install folder
console.log('installing devops package...');
fse.copySync(srcFolder, installFolder);

// read version from package.json and save to installFolder
const packageJson = fse.readFileSync(`${nodePkgFolder}/package.json`, 'utf8');
const pkg = JSON.parse(packageJson); // avoid 'package', a reserved word in strict mode
fse.writeFileSync(`${installFolder}/pkg_version.txt`, pkg.version);

if (oldFolderRenamed) {
    console.warn('Existing devops configuration has been backed up and replaced, please manually merge your configuration back into the new devops package!');
    console.log(`Please read ${nodePkgFolder}\\changelog.md`)
} else {
    console.log(`Devops package has been installed, please read ${nodePkgFolder}\\readme.md`)
}

This script copies the DevOps package from the node_modules (NPM package cache) directory into the project’s root directory. If the DevOps package directory is already present, it renames the old directory and copies in the new one. It is then trivial to diff the old and new directories for changes and merge them.

For the NPM install, use the --no-save option; we are using NPM as a downloader/installer, so it doesn’t need to save the reference in package.json.
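
For instance, pulling the package into a project might look like the following (the scoped package name is illustrative); assuming the package declares the script above as its NPM install/postinstall hook, the copy into the project happens automatically:

# Illustrative package name; --no-save keeps package.json untouched.
# If the package wires the script above as its install/postinstall hook,
# NPM runs it automatically and the files land in the project's .devops folder.
npm install @contoso/devops-package --no-save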

Package Versioning

DevOps package files should be tracked in a version control system like Git. Each feature should be developed in a separate branch and then PR’d, and each merged PR should create a new version of the DevOps package. You can then DevOps your DevOps by setting up CI pipelines to automatically publish a new package version whenever the master branch changes.
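
As a sketch, the publish stage of such a pipeline might boil down to a couple of NPM commands run by the CI agent; the versioning strategy and feed authentication are assumptions here, not prescriptions:

# Minimal publish sketch: bump the patch version and push to the private feed.
# Assumes the agent is already authenticated against the feed configured in .npmrc.
npm version patch --no-git-tag-version
npm publish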

Don’t forget to document! Include a readme for first-time consumers and a changelog for updates.

Adopt and Migrate Incrementally

Generally, DevOps development will follow this kind of flow:

  1. Implement a project-specific DevOps functionality
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Repeat

When integrating an enterprise DevOps package, a similar process can be followed:

  1. Migrate project-specific DevOps functionality to use the enterprise DevOps package
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Merge the generically useful bits into the enterprise DevOps package
  6. Repeat

Standardize and Prevent Making Changes

A critical benefit of the DevOps package approach is that it allows for the standardization of DevOps processes across the enterprise and provides a straightforward way to keep all projects in sync. A bug fix or new process can be quickly rolled out to all consumers of the package.

Standardization could also be accomplished using Azure DevOps Pipelines or Task Groups, but any change to those will immediately affect all consumers and invisibly break things. But if the DevOps package is stored in the project repo, those projects are insulated from breaking changes and bugs.

Track Changes and Update Easily

Each project will likely have a separate set of custom configuration files that must be modified from the package baseline to customize the build and release pipelines. It is essential to separate these files and keep track of changes to them after a package update. Conversely, all the other files in the DevOps package should not be modified to ensure a smooth update. Suppose one of the package files must be modified. In that case, it should be either A) temporary with the expectation of merging the changes up to the package or B) copied locally to the custom configuration directory so that it is evident that it must be inspected during the package update process.

Enterprise DevOps Principles

To sum everything up, there are several themes here that will ensure a successful enterprise DevOps approach:

  • Consistency – standardize DevOps processes across the enterprise
  • Modularity – design DevOps processes so that each component is focused and reusable – follow DRY and SRP guidelines
  • Resiliency/Stability – make DevOps processes resistant to unexpected changes
  • Traceability – easily understand changes to DevOps processes and merge quickly

Example

Consider this NPM package:

  • scripts/install.js – module install script
  • src/config – all project-specific customization and scripts
  • src/pipelines – enterprise-wide CI/CD pipelines
  • src/scripts – enterprise-wide bash/posh/cmd scripts
  • src/tf – enterprise-wide terraform templates
  • src/Update-DevopsPackage.ps1 – helper script that triggers the download of the latest package (a sketch of what such a helper might look like follows these lists)
  • changelog.md – description of differences between versions, upgrade notes
  • package.json – NPM publish metadata, contains the version
  • readme.md – introduction to DevOps package, contains getting started instructions, directory, and file descriptions

And here’s what a consuming project may look like:

  • .devops/config – src/config directory from package
  • .devops/pipelines – src/pipelines directory from package
  • .devops/scripts – src/scripts directory from package
  • .devops/tf – src/tf directory from package
  • .devops/version.txt – text file containing the package version (handy for tracking updates)
  • .devops/Update-DevopsPackage.ps1 – copied from src directory from package
  • src/ – project source code, etc.
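
The contents of Update-DevopsPackage.ps1 aren’t shown in this post; a hypothetical sketch of such a helper, with an illustrative package name, could be as small as this:

# Hypothetical Update-DevopsPackage.ps1: pull the latest DevOps package and let
# its install script refresh .devops (backing up the previous copy).
param(
    [string]$PackageName = '@contoso/devops-package'  # illustrative name
)

npm install "$($PackageName)@latest" --no-save
Write-Host 'Compare .devops with .devops.backup and re-apply any local config changes.'
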
Microsoft affirms its commitment to the Java ecosystem by offering open-source distribution of Java with the Microsoft Build of OpenJDK™. It’s a no-cost, Long-Term Support (LTS) distribution of OpenJDK currently available for preview.

The preview build includes the binaries for Java 11, available for x64 server and desktop environments on Windows, macOS, and Linux. An early-access release for Java 16 is available for Windows on ARM. Microsoft is currently collecting feedback from preview users on the packaging and installers across platforms and aims to release the GA build by the end of 2021.

Why Is Microsoft Offering a JDK Distribution?

Microsoft has been deeply involved in the Java ecosystem by offering Java tools for Visual Studio Code, supporting Java on Azure, running several internal platforms, and some Azure infrastructure services on Java Virtual Machine (JVM). Microsoft runs more than 500,000 JVMs internally, excluding all Azure services and customer workloads. Azure has experienced significant growth in Java workloads through Azure Spring Cloud, Azure App Service, Azure Functions, and Azure Kubernetes Service. LinkedIn, Minecraft, and Yammer are examples of major internal platforms run on Java.

Below are a few critical triggers, beyond the rise of internal JVM workloads, for Microsoft to jump into OpenJDK distribution:

  • Cost reduction: Java support costs on Azure cloud can be reduced by avoiding commercial licenses for Java. It helps to reduce the overall Azure subscription costs by removing the cost passed to other vendors for JDK licenses.
  • Security & performance improvement: The build may contain backported fixes and enhancements for security and performance, which may not have formally backported upstream.
  • Support Java devs: Part of Microsoft’s dedication to better support Java developers on Azure.
  • Become a leader in providing toolkits for the open-source community: Microsoft has been a sponsor and contributor to the Java open-source community through the AdoptOpenJDK project since 2018 and has contributed more than 50 patches to OpenJDK in the last 18 months, including resolutions for issues in macOS packaging, build, infrastructure, and garbage collection.

Benefits to Java Developers on Azure

  • Multi-OS Support: Windows, macOS, and Linux
  • Multi-Environment Support: cloud, local data centers, and user development environments
  • Free to Use: the Microsoft Build of OpenJDK is at no cost to use.
  • Backed by Microsoft: This distribution is backed by Microsoft and includes backported fixes and enhancements recommended by Microsoft that may not yet be available in upstream OpenJDK implementations.
  • Ease of Migration: Smooth and transparent transition for existing workloads.

Limitations

  • Docker image is not available at this point
  • Medium-term support releases of OpenJDK are not supported
  • No API available to access the binary distribution
  • No ARM-based macOS binary at this point

How to Install on Windows

The packages and installer for Microsoft’s OpenJDK preview build are available at https://www.microsoft.com/openjdk#11. Azure customers can try out the preview using Azure Cloud Shell too.

Download the package on the platform of your choice and confirm the Java version. For Windows, the installer takes care of the default location and setting PATH and JAVA_HOME environment variables.
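
For example, a quick sanity check from a new PowerShell prompt might look like the following; exact version strings will vary by build:

java -version     # should report an OpenJDK 11 build from Microsoft
$env:JAVA_HOME    # should point at the Microsoft Build of OpenJDK install directory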

Microsoft Build of OpenJDK on Windows 10

Picture 1: Microsoft Build of OpenJDK on Windows 10
A build of OpenJDK 11 is already available in the Azure Cloud Shell; developers can use it directly from the shell.

Microsoft Build of OpenJDK on Azure Cloud Shell

Picture 2: Microsoft Build of OpenJDK on Azure Cloud Shell

We expect more enterprise organizations to modernize Java on Azure with increased support and options from Microsoft. This is welcome news to organizations that have a large investment in Java and have experienced a large increase in the cost of legacy Java workloads. Need a partner to accelerate your modernization journey? Reach out to AIS today.


This past Thanksgiving marked the first anniversary of going live with a SharePoint environment that AIS migrated from on-prem to Microsoft Azure IL5. Since then, our client has experienced 100% uptime during business hours and reduced deployment timelines, from weeks to minutes.

Challenge: Improve Performance, Speed Up Deployments

AIS set out to help a DoD agency that had experienced ongoing service issues with their existing provider while operating their on-prem SharePoint farm. During the year before the migration, the DoD customer experienced three service outages during business hours, which halted the ability to perform mission-critical activities. Additionally, their existing enterprise service provider required a lead-in time of 1-2 weeks to deploy any code changes or new capabilities into the environment. AIS was tasked with building a cloud solution to maximize uptime and accommodate rapid deployments to better serve the fast tempo required by our DoD customer.

Solution: Hybrid IaaS/PaaS in Azure IL5

To provide a solution tailored to maximize uptime and accommodate rapid deployments, AIS architected a DoD first: a hybrid IaaS/PaaS environment in Azure Government IL5 that utilized the DISA Cloud Access Point to integrate with NIPRNet. We leveraged a suite of technologies to employ DevSecOps methodologies, allowing the solution to remain scalable while adhering to industry best practices. By implementing an automated code scanning solution, we reduced deployment lead-in time from weeks to minutes. Our infrastructure as code (IaC) development also drastically reduced the time required to build a new environment from several days to under one hour.

Looking Ahead: Cost-Sharing, Scale Across the DoD

AIS has worked with our DoD customers to offer these cloud services to neighboring agencies to benefit from cost-sharing. In doing so, we have passed on lessons learned and processes that we have developed to share our success across the DoD enterprise. As we grow, we continue to integrate evolving best practices to remain at the DoD DevSecOps initiative’s forefront.

AIS FIRST TO DEPLOY AZURE IL6 ENVIRONMENT AND ACHIEVE ATO!

Have you spent a lot of time getting comfortable using Team Foundation Server (TFS) or Azure Pipelines only to be switched to GitLab CI/CD for a project? That’s what happened to me about four months ago. Since then, I’ve been learning the differences between the tools. There are several differences between Azure Pipelines and GitLab CI/CD that I’ve experienced. This post aims to be a primer for you to understand these differences and save you a lot of research time.

Classic Pipelines vs YAML

I worked with TFS’/Azure Pipelines’ Classic Editor the most before this project. While it’s a great visual designer for automating builds, classic pipelines aren’t very portable from project to project or environment to environment because the exported JSON files reference instances of other resources, like Service Connections, by ID rather than by name. When importing these JSON files into another TFS or Azure Pipelines instance, changes need to be made to update the affected references. This is where YAML, which stands for “YAML Ain’t Markup Language,” comes in. YAML pipelines are the future for both Azure Pipelines and GitLab CI/CD. YAML references the Service Connection by name, making it easier to use the pipeline in other environments as long as the connection name remains the same.

What I didn’t realize before diving into GitLab is that YAML isn’t really a language but more of a file format, so I had expected the little Azure Pipelines YAML I had written to translate almost perfectly to GitLab. However, GitLab doesn’t use much of the syntax that Azure Pipelines does. I’ve listed some of the differences I’ve become aware of, but this is likely just a partial list.

[Table: Azure Pipelines syntax and a description of how GitLab CI/CD differs]

For more details, here is the YAML reference for GitLab and Azure.

Integration with Azure

Azure Pipelines offers Service Connections to connect to remote services like Azure, Docker, or GitHub.
GitLab doesn’t have an equivalent offering; those connections must be made within jobs. One concern is that you have to keep the connection credentials in environment variables. When doing this, it’s important to remember to mask sensitive variables like passwords so they don’t appear in job logs. It also means you can’t use Azure Key Vault secrets as pipeline variables, which is something my previous projects used heavily.
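
For instance, a job step that needs Azure access might run something like the following, with the service principal credentials supplied as masked CI/CD variables; the variable names here are just a convention I’m assuming, not anything GitLab defines:

# Log in to Azure from a CI job using a service principal stored in masked variables.
az login --service-principal `
    --username $env:AZURE_CLIENT_ID `
    --password $env:AZURE_CLIENT_SECRET `
    --tenant $env:AZURE_TENANT_ID
az account set --subscription $env:AZURE_SUBSCRIPTION_ID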

Runners

While project-specific, the GitLab runners made available to us don’t have all the tools my project needs. The team developing the runners has provided some basic images that don’t include tools like the Azure or JFrog CLIs, which our environment requires. This will likely be remedied, but it has forced us to learn how to install those tools ourselves. That wouldn’t be an issue in a more open environment, but my project is isolated from the web in the staging and production environments, making it challenging to use tools like Chocolatey, NuGet, or NPM to install packages from the web. The other team is still working to resolve this; the only resolution I see is maintaining their own repository of packages in something like Artifactory and having images or scripts pull from there.

I’m far more comfortable with YAML because of this project. While GitLab is powerful and has received a few excellent features since I began the project, the creature comforts I’m used to with Azure Pipelines (notably the Service Connections) require additional work that seems unnecessary.

When building and managing an Azure environment, maintaining control of network traffic is a core operations responsibility. The primary Azure platform resource for implementing network traffic control is the Network Security Group (NSG). A Network Security Group allows you to define security rules, like firewall rules, that control traffic by specifying allowed and denied sources, destinations, ports, and protocols. Like all Azure resources, there are multiple options to manage NSGs, including the standard Azure management tools: the Azure Portal, scripts (PowerShell and CLI), APIs, and Azure Resource Manager (ARM) templates.

Managing NSG security rules using an ARM template can be challenging. Each security rule is defined using a large chunk of JSON, and many security rules may be required. The verbose JSON structure makes it difficult to see many rules at once or to visualize changes from version to version, and it encourages team members to revert to the Azure Portal to view and edit rules. Why use the Azure Portal? It turns out the portal’s grid format for NSG security rules is comfortable for quickly viewing multiple rules and for making minor edits to individual rules.

Since the portal’s grid view was comfortable, the CSV file format seemed like the right idea based on its similarity to a grid. CSV files have a few pros:

  • Good viewers and editors including Excel and VS Code.
  • One vertically compact line for each security rule.
  • A vertically compact view that makes it easier to visually scan rules and to see the changes that are made from version to version when viewing differences.
  • Anyone who can edit a CSV can edit the NSG Security Rules allowing a larger group of security rule editors.

NSG in JSON format

This is a simple example of the NSG Security Rule JSON. A rule like this can get much larger vertically when numerous ports and address prefixes are defined:

{
          "name": "I-L-All_HttpHttps-UI_Layer-T",
          "description": "Allow HTTP + HTTPS traffic inbound.",
          "priority": 1110,
          "access": "Allow",
          "direction": "Inbound",
          "protocol": "Tcp",
          "sourceAddressPrefix": "",
          "sourceAddressPrefixes": [
            "AzureLoadBalancer"
          ],
          "sourceApplicationSecurityGroups": null,
          "sourcePortRange": "*",
          "sourcePortRanges": null,
          "destinationAddressPrefix": "*",
          "destinationAddressPrefixes": null,
          "destinationApplicationSecurityGroups": null,
          "destinationPortRange": "",
          "destinationPortRanges": [
            "80",
            "443"
          ]
        }

NSG in CSV Format

Excel

Example CSV Excel

Example CSV

Converting Between CSV and JSON

The transition from CSV to JSON and from JSON back to CSV must be repeatable and simple. In this scenario, PowerShell scripts manage this process: Convert-NsgCsvToJson.ps1 and Convert-NsgJsonToCsv.ps1.

The Convert-NsgCsvToJson.ps1 script is straightforward and does the following (a condensed sketch follows the list):

  1. Read the source CSV file.
  2. Read the destination JSON file.
  3. Split multi-value fields into an array based on the parameter CsvArraySeparator. The default is the pipe character ‘|’. For fields like source and destination port ranges, this allows multiple values to live in a single CSV field and be expanded back into an array during conversion.
  4. Structure the CSV data into objects that match the ARM template NSG security rule JSON structure.
  5. Use a JsonFileType parameter to determine where in the destination JSON structure to place the security rules array. This allows placement of the security rules array into a parameter file, template file, or into an empty JSON file.
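
Here is a condensed sketch of that conversion, not the actual script; the file names and CSV column headers are assumed to mirror the security rule properties shown earlier:

# Condensed sketch: read the CSV, split multi-value fields on '|',
# and emit objects shaped like the security rule JSON shown above.
# File names and CSV column names are assumptions, not taken from the real scripts.
$separator = '|'
$rules = Import-Csv -Path '.\subnet-01-nsg-rules.csv' | ForEach-Object {
    [ordered]@{
        name                     = $_.name
        description              = $_.description
        priority                 = [int]($_.priority)
        access                   = $_.access
        direction                = $_.direction
        protocol                 = $_.protocol
        sourceAddressPrefixes    = @($_.sourceAddressPrefixes -split [regex]::Escape($separator))
        sourcePortRange          = $_.sourcePortRange
        destinationAddressPrefix = $_.destinationAddressPrefix
        destinationPortRanges    = @($_.destinationPortRanges -split [regex]::Escape($separator))
    }
}
ConvertTo-Json -InputObject @($rules) -Depth 5 | Set-Content '.\subnet-01-nsg-rules.json'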

A New Workflow

With PowerShell scripts, the new workflow for NSGs is:

  1. Create and edit NSG Security Rules in a CSV file – usually using Excel.
  2. Visually scan the CSV looking for obvious anomalies (Excel makes it easy to see when one rule stands out from the others and as an example, a value is in the wrong column).
  3. Execute the script Convert-NsgCsvToJson.ps1 to convert the rules to the JSON structure and update the destination JSON file.
  4. Deploy the ARM template and updated parameters file to a dev/test environment using standard deployment approaches such as the Azure CLI. This fully validates the NSG JSON prior to production deployment.
  5. Deploy to Production during a planned change window.

From JSON Back to CSV

At times, a team member may make a change directly in the portal, for example, during troubleshooting. Once an update is made in the portal, those changes must be transferred back to the code that defines this infrastructure. The CSV files are the canonical source, so there needs to be a process to return from JSON to CSV.

  1. To retrieve the NSG Security Rules from the portal execute a CLI command to retrieve NSG security rules and export them to a JSON File.
    az network nsg rule list --nsg-name subnet-01-nsg --resource-group net-rgp-01 | set-content subnet-01-export.json
  2. Execute the Convert-NsgJsonToCsv.ps1 script using the generated file as the input and the corresponding CSV file as the output.
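
A typical round trip might look like the following; the parameter names are hypothetical, since the actual script signatures aren’t shown here:

# Hypothetical invocations; the real scripts' parameter names may differ.
.\Convert-NsgJsonToCsv.ps1 -JsonPath .\subnet-01-export.json -CsvPath .\subnet-01-nsg-rules.csv
.\Convert-NsgCsvToJson.ps1 -CsvPath .\subnet-01-nsg-rules.csv -JsonPath .\subnet-01-params.json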

Constraints

The environment these scripts were built for may not match your own. This environment includes several constraints:

  • Azure Resource Manager Templates are the language for Azure Infrastructure as Code.
  • Manual steps are required: automated build and release pipelines are not yet available.
  • There is no guarantee that NSG security rules will not be modified in the Azure Portal, so a mechanism is required to synchronize the code with the environment.

Future Improvements

This solution represented a significant improvement for this team over managing NSG security rules directly in the JSON format. As with every solution, there are ideas on how to improve. Here are a few that have come to mind:

  • Use CI/CD tools such as GitHub Actions to automatically execute the Convert-NsgJsonToCsv.ps1 script when an NSG CSV file is committed.
  • Implement a release pipeline so that modified NSG Csv files trigger the conversion script, wait for approval to deploy, and deploy the ARM Template to the dev/test environment.
  • Add Pester tests to the PowerShell scripts.
  • Try this approach with other IaC languages such as Terraform.

Additional Notes

  • The example template has been dramatically simplified.
    • The production template also configures NSG Diagnostic Settings and NSG Flow Logs.
    • The production template builds all resource names based on several segments defined in a naming convention.
  • There are NSG Security Rules that are considered baseline rules that should be applied to every NSG. These rules are managed in a CSV file and placed in an array in the base template and not repeated in each parameter file. An example of this is a rule that allows all servers to contact the organization’s DNS servers.
  • Application Security Groups (ASGs) are used to group servers in the local VNET so that NSG security rules do not need to include IP addresses for servers contained in the VNET. The only IP address prefixes specified directly in our rules are from outside the current VNET. As with the NSGs, ASGs are defined in both the template (baseline) and the parameters file (local), combined and created during template deployment. Only the unique portion of the name is used to define the group and to specify rules; the remainder of the name is built during deployment. ASGs in Azure are currently only valid in the VNET where they are created, and only one ASG may be specified per security rule. The script creates all the ASGs defined in the template and parameters file.

Code

The code for these scripts including the conversion scripts and a sample ARM Template, ARM Template Parameters files, and matching NSG Security Rule CSV files is available on GitHub: https://github.com/matthew-dupre/azure-arm-nsg-via-csv


Databricks provides a robust notebook environment that is excellent for ad-hoc and interactive access to data. However, it lacks robust software development tooling. Databricks Connect and Visual Studio (VS) Code can help bridge the gap. Once configured, you use the VS Code tooling like source control, linting, and your other favorite extensions and, at the same time, harness the power of your Databricks Spark Clusters.

Configure Databricks Cluster

Your Databricks cluster must be configured to allow connections.

  1. In the Databricks UI, edit your cluster and add the following lines to the Spark config (spark.conf):
    spark.databricks.service.server.enabled true
    spark.databricks.service.port 8787
  2. Restart Cluster

Configure Local Development Environment

The following instructions are for Windows, but the tooling is cross-platform and will work wherever Java, Python, and VSCode will run.

  • Install Java JDK 8 (enable option to set JAVA_HOME) – https://adoptopenjdk.net/
  • Install Miniconda (3.7, default options) – https://docs.conda.io/en/latest/miniconda.html
  • From the Miniconda prompt, run the following and follow the prompts. Note: the Python and databricks-connect library versions must match the cluster version; replace {version-number} accordingly (e.g., Python 3.7 for Databricks Runtime 7.3).

    conda create --name dbconnect python={version-number}
    conda activate dbconnect
    pip install -U databricks-connect=={version-number}
    databricks-connect configure
  • Download http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe to C:\Hadoop
    From command prompt run:

    setx HADOOP_HOME "C:\Hadoop\" /M
  • Test Databricks connect. In the Miniconda prompt run:
    databricks-connect test

You should see an “* All tests passed.” if everything is configured correctly.

  • Install VSCode and Python Extension
    •  https://code.visualstudio.com/docs/python/python-tutorial
    • Open Python file and select “dbconnect” interpreter in lower toolbar of VSCode
  • Activate Conda environment in VSCode cmd terminal
    From VSCode Command Prompt:
    This only needs to be run once (replace username with your username):
    C:\Users\{username}\Miniconda3\Scripts\conda init cmd.exe

    Open a new cmd terminal and run:

    conda activate dbconnect

    Optional: You can run `databricks-connect test` from the earlier step to ensure the Databricks Connect library is configured and working within VSCode.

Run Spark commands on Databricks cluster

You now have VS Code configured with Databricks Connect running in a Python conda environment. You can use the below code to get a Spark session and any dependent code will be submitted to the Spark cluster for execution.

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
Once a context is established, you can interactively send commands to the cluster by selecting them and right-click “Run Select/Line in Python Interactive Window” or by pressing Shift+Enter.

Context established to send commands

The results of the command executed on the cluster will display in the Visual Studio Code Terminal. Commands can also be executed from the command line window.

Executed Command Cluster

Summary

To recap, we set up a Python virtual environment with Miniconda and installed the dependencies required to run Databricks Connect. We configured Databricks Connect to talk to our hosted Azure Databricks cluster and set up Visual Studio Code to use the conda command prompt to execute code remotely. Now that you can develop locally in VS Code, all of its developer tooling can be used to build a more robust and developer-centric solution.

DevOps implements a Continuous Integration/Continuous Delivery (CI/CD) process. When multiple team members work in the same codebase, anyone’s update could break the integrated code, so Continuous Integration triggers a build pipeline whenever a code update is pushed. The build pipeline fails if the newly updated code is incompatible with the existing codebase or if there are any conflicts. A codebase might work well in a single developer’s environment yet fail in a build pipeline, where all configurations and dependencies are expected to be in place. Continuous Delivery speeds up the deployment process. The release pipeline helps deploy the same codebase to multiple environments based on configuration, which allows code to be deployed to every environment without many manual changes.

Having an approval process supports peer code reviews and helps identify potential issues and security flaws ahead of time. Today’s production applications are very distributed and complex. Whether it is an on-premises or cloud-based solution, missing a dependency or proper configuration can introduce significant risk in deployments. DevOps helps maintain a single codebase for repeatable deployment to many environments with just configuration changes, and it avoids manually building deployment packages and handing them over to an operations team that has no insight into what is being deployed. If an error occurs during or after deployment, the development team has to jump in at that point, which is time-consuming. That costs production timeline and ends up with some unhappy customers too!
Picture credit: DoD DevOps

Popular DevOps Tools

Follow here to learn more about DevOps practices from other AIS bloggers!

Why not just “DevOps”?

DevOps is fundamental to any organization’s build and deployment process with seamless CI/CD integration. Then what is ‘DevSecOps’, and why is ‘Sec’ added between Dev and Ops? The ‘Sec’ in DevSecOps is ‘Security.’ Though it sits in the middle, security implementation should start in development and continue through operations. Development and deployment packages pull in many dependencies, both internal and external, and these can introduce vulnerabilities that cause severe issues in production if not identified early in the build pipeline. Code scans help identify possible weaknesses in code implementations, but cybersecurity-related vulnerabilities require specific tools at different stages of the pipeline to be identified as early as possible. Adding security scanning earlier in the pipeline and automating it are essential to DevSecOps.

DevSecOps Software Lifecycle

Picture Credit: DoD DevSecOps

DevSecOps is not a tool or a pattern but a practice, and it can be enhanced by adding appropriate tools. It secures the build and deployment by using several security tools and shifting security to the left. These security tools help identify vulnerabilities the code could have introduced, recommend possible fixes, and, in some instances, mitigate issues automatically. The goal is to ‘fail fast’ and identify vulnerabilities early in the build pipeline. As more applications move to the cloud, it is imperative to use Cloud Native Computing Foundation (CNCF) certified tools and implement security benchmarks such as the CIS benchmarks. DevSecOps avoids manual changes once the code is in the pipeline, built, and deployed. The codebase is a single source of truth and should not be manipulated at any point.

Adding security and vulnerability scanning tools helps mitigate flaws introduced in code and operations, and many open-source tools provide these functions. Enabling logging, continuous monitoring, alerting, and self-healing for faster remediation is key to ongoing business operations. Containerizing with hardened container images from DoD Iron Bank helps protect application container images, and hardened images can be kept up to date from reliable providers. Containers also provide cloud-agnostic solutions with no vendor lock-in.

All the security tools in the DevSecOps pipeline must be deployed and running for pipeline scanning in the customer environment. The pipeline calls those security tools via API requests or command-line interface (CLI) commands; the tools respond with their findings and statistics and provide pass/fail criteria. If a tool identifies any vulnerability findings in the scan, the pipeline fails.
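
That gate can be expressed in a few lines; here is a sketch where the scanner CLI and its flags are placeholders rather than a specific product:

# Placeholder scanner CLI: run the scan, then fail the stage on any findings.
security-scanner scan --source . --report findings.json   # placeholder command
if ($LASTEXITCODE -ne 0) {
    Write-Error 'Security scan reported findings; failing the pipeline stage.'
    exit 1
}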

Deploying the security tools as SaaS services requires permission from the security team, and not all are approved to run in highly secured cloud environments; each tool needs an Authority to Operate (ATO) to be deployed and configured. Using hardened container images for those tools is a safer, more secure approach to deploying them in the cloud. Because the containers are already hardened (scanned, secured, and ready to go with all dependencies), they support continuous ATO. Hardened container images can be downloaded from DoD Iron Bank, and almost all tool providers offer container images. Many providers offer both a software download and a container image; with a software download, additional work is needed to ensure all dependencies are appropriately configured or already exist, whereas a hardened container image comes with its dependencies and is pre-scanned. The tools can be deployed into Kubernetes in your cloud environment to provide scalable functionality.

Below is a sample DevSecOps pipeline implementation with recommended security tools, depicted in the diagram that follows the list:

  • Source code pull request is approved by reviewers
  • The build pipeline kicks off and code scan is run after a successful initial build
    • If any code vulnerabilities are identified, then the pipeline fails
  • Build pipeline continues with DAST and PEN testing
    • If any vulnerabilities are identified, then the pipeline fails
  • Build artifacts are added to a private repository, either as packages or container images
    • Repository scan is performed using repository scanning tools and vulnerabilities are reported
  • Release pipeline picks up artifacts from private repositories and deploys to Azure (or cloud of your choice)
    • Kubernetes is a highly recommended deployment for orchestration, but deployment can be an application of your choice such as Function App, App Service, Azure Container Instances, etc.
  • Security is applied throughout the pipeline process and continues once the application is deployed. Both native security tools (such as Azure Monitor, Azure Security Center, and Azure Policies) and third-party tools (such as Twistlock and Qualys) can be used to monitor the health of your production environment.

DevSecOps Diagram

Let’s look at a few of the recommended tools to support the security validations in the DevSecOps process.

Build tools/CLI

A developer can write code in their favorite editor, such as Visual Studio or VS Code, and run it to test their applications. The editor also generates debug/release packages, producing binaries with the build tool that comes with the editor. The application works seamlessly in the developer environment because the dependencies and correct configurations exist there. For the build to work in the pipeline, the appropriate build tool must also be available in the pipeline, and which tool that is varies based on the code language.

Some of the build tools are:

  • DotNet Build
  • MSBuild
  • Maven
  • Gradle

Static Application Security Testing (SAST)

A code scan is one of the essential steps in securing the codebase. Automated testing helps identify failures, but these code scan tools specifically help identify security flaws and vulnerabilities. The application does not need to be running for code scan tools, as they scan only the codebase and not any dependencies.

Some of the Code scanning tools are:

  • SonarQube
  • Fortify
  • Anchore
  • JFrog Xray
  • OpenSCAP
  • HBSS
  • OWASP dependency check

Dynamic Application Security Testing (DAST)

DAST scans the application while it’s running, or scans a container image hosted in a private repository. Container scanning before deploying helps resolve many security vulnerabilities.

Some of the DAST scanning tools are:

Penetration (Pen) Testing

Penetration testing provides a web application scanner to help find security vulnerabilities. Read here to learn about the “Top 10 Web Application Security Risks.”

PEN testing tools:

  • OWASP ZAP

Deploy Code & IaC (Infrastructure as Code)

IaC is paramount in DevOps to avoid any manual work in customer environments and help with immutable infrastructure.

Popular IaC tools are:

  • Azure ARM Templates
  • Terraform
  • HELM

Private Repositories

In DevSecOps, a private repository is recommended to host the build dependencies, reference container images, container images for tools, and the built packages or application container images. This is to keep all the artifacts together in one centralized location, and the release pipeline can continue with deployments from there.
Some of the private repositories are:

  • JFrog
  • Docker Hub
  • Azure Container Registry (ACR)

Private Repository Scanning

As the pipeline requires security scanning, the repositories require scanning as well. These tools scan for vulnerabilities in all packages and container artifacts stored in the repository, and a scan report is sent out to notify of any issues.

Some artifact scanning tools are:

  • XRay
  • SonaType
  • Azure Monitor
  • Azure Security Center

Deploy

Just as the security tools are recommended to be deployed with container orchestration, the same recommendation applies to the deployed applications. Containers provide high security with limited ways to be affected by attackers, and sidecar containers protect applications by continually monitoring them with a built-in container security stack. Applications are scalable on demand using Kubernetes; tools such as kubectl and HELM packages are used to deploy and manage K8s clusters, and ArgoCD is a declarative tool specifically for Kubernetes deployment in a CI/CD pipeline.

Deployments to Azure could be:

  • Azure function app
  • Azure App Service
  • Azure Container Instance
  • Azure Kubernetes Service (AKS)
  • Open Shift in Azure

Monitoring/Alerting

As applications are deployed and running in a cloud environment, they must be continuously monitored for attacks and any security vulnerabilities. For containers, these tools act as sidecar containers that regularly protect the main containers from attacks, and some can mitigate issues automatically. All these tools have built-in alerts to notify the operations team for immediate action.

Monitoring/alerting tools:

  • Azure Monitor
  • Azure Security Center
  • Twistlock
  • Qualys
  • Aqua Security

So, all powered up with learning DevSecOps! Follow up back here for the next blog post in container-based deployments and containers scanning in the DevSecOps pipeline!

References for continuing your DevSecOps Journey

Azure Kubernetes Service (AKS) is a Microsoft Azure-hosted offering that allows for the ease of deploying and managing your Kubernetes clusters. There is much to be said about AKS and its abilities, but here I will discuss another crucial aspect of AKS and containers: security. Having a secure Kubernetes infrastructure is a must, and it can be challenging to figure out where to start. I’ll break down best practices, including baseline security for clusters and pods and network hardening practices that you can apply to your own AKS environment, to lay the foundation for a more secure container environment, including how to stay current on updates.

Cluster and Pod Security

Let’s first look at some best practices for securing your cluster and pods using policies and initiatives. To get started, Azure has pre-defined, AKS-specific policies that help improve the posture of your cluster and pods and allow additional control over things such as root privileges. A best practice Microsoft recommends is limiting the actions containers can perform and avoiding root/privilege escalation. When the Azure Policy Add-on for AKS is enabled, it installs a managed instance of Gatekeeper, which handles enforcement and validation through a controller that inspects each request to create or update a resource and validates it against your policies. Features such as these are ever-growing and make establishing a baseline easier. Azure Policy also includes a feature called initiatives: collections of policies that align with organizational compliance goals. Currently, there are two built-in AKS initiatives, baseline and restricted, and both come with many policies that lock down items such as the host filesystem, networking, and ports. By combining initiatives and policies, you can tighten security and meet compliance goals in a more managed fashion.
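
For example, the add-on (and its managed Gatekeeper instance) can be enabled on an existing cluster with the Azure CLI; the cluster and resource group names below are placeholders:

# Enable the Azure Policy add-on for AKS on an existing cluster.
az aks enable-addons `
    --addons azure-policy `
    --name myAksCluster `
    --resource-group myResourceGroup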

Another way to secure your cluster is to protect access to the Kubernetes API server. This is accomplished by integrating RBAC with Azure AD or other identity providers. This feature allows for granular access, similar to how you control access to your Azure resources. The Kubernetes API is the single connection point for performing actions on a cluster; for this reason, it’s imperative to deploy logging/auditing and to enforce least-privileged access. The below diagram depicts this process:

Cluster and Pod Security

Reference: https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-cluster-security#secure-access-to-the-api-server-and-cluster-nodes

Network Security

Next, let’s look at network security and how it pertains to securing your environment. A first step is to apply network policies. Much like above, Azure has many built-in policies that assist with network hardening, such as a policy that only allows specific traffic from authorized networks based on IP addresses or namespaces. It’s also important to note this can only be enabled when the cluster is first created. You also have the option to give ingress controllers internal IP addresses, ensuring they can only be accessed from the internal network. These small steps narrow the attack surface of your cluster and tighten traffic flows. The diagram below demonstrates using a Web Application Firewall (WAF) and an egress firewall to manage defined routing in and out of your AKS environment. Even more granular control is possible using network security groups, which allow only specific ports and protocols based on source/destination. By default, AKS creates subnet-level NSGs for your cluster and automatically modifies them as you add services such as load balancers, port mappings, and ingress routes. This ensures the correct traffic flow and makes change easier to manage. Overall, these features and policies allow for a secure network posture.
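
As an example, network policy enforcement is selected at cluster creation time; a minimal sketch with placeholder names:

# Network policy must be chosen when the cluster is created.
az aks create `
    --name myAksCluster `
    --resource-group myResourceGroup `
    --network-plugin azure `
    --network-policy azure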

Network Security Graphic

Reference: Microsoft Documentation

The Final Piece

The final piece of securing your AKS environment is staying current on new AKS features and bug fixes, specifically by upgrading the Kubernetes version in your cluster. These upgrades can include security fixes, which are paramount for staying ahead of vulnerabilities that could leave you exposed. I won’t go too deep on best practices for Linux node updates or managing reboots; this link dives deeper into what Kured is and how it can be leveraged to process updates safely. There are many ways to foundationally secure your AKS clusters, and I hope this article helps future implementations and the maintainability of your deployment.

Introduction

As enterprises start to utilize Azure resources, even a reasonably small footprint can begin to accumulate thousands of individual resources. This means that the resource count for much larger enterprises could quickly grow to hundreds of thousands of resources.

Establishing a naming convention during the early stages of establishing Azure architecture for your enterprise is vital for automation, maintenance, and operational efficiency. For most enterprises, these aspects involve both humans and machines, and hence the naming should cater to both of them.

It would be too naive and arrogant to propose a one-size-fits-all naming convention. Each enterprise has its own unique culture, tools, and processes. So, here are seven rules for scalable and flexible Azure resource naming conventions. To emphasize, these are rules for establishing naming conventions and not the actual naming convention itself.

Rule #1: Break them Up

  • Break up resource names into segments. Each segment may contain one or more characters to indicate a specific attribute of the resource.
  • For example: the resource name cte2-sales-prod-rgp has four segments. The first segment identifies Contoso [ct] and the East US 2 region [e2], followed by the Sales application [sales], the production environment [prod], and the Resource Group resource type [rgp] (see the sketch after this rule).
    Why This Rule: Logically partitioning resource names into segments allows for the comprehension of resource information by both machines and humans.
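
A convention like this is easy to encode in tooling. Below is a minimal, illustrative sketch that composes a name from the Contoso segments; the function and parameter names are assumptions, not part of any standard:

# Illustrative helper: compose a resource name from convention segments.
function New-ResourceName {
    param(
        [string]$Org,          # e.g. 'ct'  (Contoso)
        [string]$Region,       # e.g. 'e2'  (East US 2)
        [string]$App,          # e.g. 'sales'
        [string]$Environment,  # e.g. 'prod'
        [string]$ResourceType  # e.g. 'rgp' (Resource Group)
    )
    return "$Org$Region-$App-$Environment-$ResourceType"
}

New-ResourceName -Org 'ct' -Region 'e2' -App 'sales' -Environment 'prod' -ResourceType 'rgp'
# => cte2-sales-prod-rgp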

Rule #2: Make them Uniquely Identifiable

  • Every resource should have a unique name. Meaning, a name should only belong to a singular resource. Do not hesitate to add additional segments to make the name unique.
  • Ideally, the name should be unique globally across Azure, but if that is too hard to achieve, then at a minimum, it must be unique across all Azure Subscriptions under your Azure AD Tenant.
  • For our Contoso example from Rule # 1, using a couple of characters that identify the enterprise increases the chances of Azure-wide uniqueness. For cte2-sales-prod-rgp, [ct] represents the Contoso enterprise. The other segments, as explained in Rule # 1, also increase uniqueness.
    Why This Rule: Following this rule will eliminate misidentification and resource name conflicts.

Rule #3: Make them Easily Recognizable

  • Names must convey ordinary but critical pieces of information about the resource. This rule also serves as a backstop to Rule # 1, whereby taking Rule # 1 to an extreme, one might be tempted to use something like a GUID to name the resource.
  • The information may include Azure Region, Environment or Environment Category, Resource Type, etc. to name a few.
  • For our Contoso example, each segment helps with the identification of Azure Region, Application, Environment, and Resource type. All good things for recognizing the resource.
    Why This Rule: Following this rule will eliminate needing a lookup to get information, as the information is embedded in the name itself. Do not use random name generation such as GUIDs, as it might generate a unique name, but it would serve no other purpose.

Rule #4: Make Exceptions Obedient

  • Some resources may not want to fit into the convention. For those resources, establish a convention for exceptions. Don’t let exceptions dictate the overall convention.
  • For example: Storage account names cannot have non-alphanumeric characters. So, if your convention uses a dash to separate segments, drop the dash for Storage account names, but don’t drop it for all other resource types.
    Why This Rule: Following this rule prevents the convention from becoming too rigid and draconian, which leads to convoluted and confusing names.

Rule # 5: Know When To Stop

  • Initially, establish a naming convention for high-level resources and maybe one level deeper. Do not try to establish a naming convention for resources that are three, four, or five levels deep within a resource.
  • If there is a need, let the convention for those lower levels be established by folks who have the expertise and happen to work with them daily.
  • For example, establish a convention for Storage accounts; do not go too deep into naming Containers, Blobs, and Tables.
    Why This Rule: It is impossible to know everything about every resource type used by an enterprise. Leaving room for future extensions is essential for resilient and scalable naming conventions. Your future self and your colleagues will thank you for it.

Rule # 6. Keep Them Handsome, Pretty & Melodic

  • Names created by convention should be pleasing to the eye and melodic to the ears. This means that you should pay special attention to the following:
    • Acronyms
    • Segment sizes
    • Juxtaposition of segments
    • Sequencing of segments
    • Separators
  • Go back to our Contoso example and see how you can improve it so that it lives up to Rule # 6.
    Why This Rule: You will live with the names for a long time and spend a lot of time with them. So you might as well make them a pleasure to work with.

Rule # 7: Toot Your Horn, But With Open Ears

  • Document your convention where it is easily accessible and searchable, such as a central wiki. Present your convention at every opportunity. Demo real-life good and bad examples. Write blogs and make videos to explain.
  • But, always keep an open mind. Listen to feedback and be open to refining when needed.
    Why This Rule: Your established naming pattern is only as good as its last use. So practice, preach, persuade, push, peddle, promulgate, and pander, but never be pedantic.

These rules have been battle-tested in several large enterprises over the past decade, so follow them for a flawless Azure naming convention.