Creating Self Documenting Azure Functions with C# and OpenAPI: Part Three

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is essential: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You know software development fundamentals
  • You are comfortable with command-line interfaces
  • You have completed Part Two of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and enables API clients to automatically generate client code to interact with your APIs. It also helps with shelf life: if, six months or a year down the road, we decide a better approach is needed, clients can simply regenerate their code against the updated spec.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

This is the final post in this series.

We Need to Lock it Down

In the previous article, we increased the quality of our OpenAPI spec by adding various C# attributes and XML comments to the mix. This resulted in a very useful and informative OpenAPI spec being generated.

Now, it turns out that our Security Team alerted us that some folks in the Warehouse were using their knowledge and access to the “Create Order” API to generate fake orders for themselves. This is a problem, and they have now updated the Security Procedures to require restricting people to the API calls they are supposed to use.

Currently, we have the following functions and departments that need to access them:

Function             Description                                        Used By
Create Order         Creates an order to send to the Warehouse          Shopping
Order Shipped        Shipment update from the Warehouse to the System   Warehouse
Get Shipping Status  Gets the current shipping status of an order       Shopping, Warehouse

We have two clients (the Shopping Department and the Warehouse) that each need access to two functions, so we need to separate the functions into groups.

Put Functions In Groups

Swashbuckle supports putting APIs into groups by using the ApiExplorerSettings attribute from the Microsoft.AspNetCore.Mvc namespace. Unfortunately, this attribute can only be applied once per function, so we will need three groups: “Warehouse,” “Shopping,” and “Shared,” the last one to handle the method both departments use. We’ll then include the “Shared” methods in every Swagger document generated. We decorate the functions like this:

[ApiExplorerSettings(GroupName = "Warehouse")]
[FunctionName("OrderShipped")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shopping")]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shared")]
[FunctionName("OrderShippingStatus")]
public async Task<OrderShippingInfo> Run(
//...

By itself, putting the functions into these groups will not separate them into separate documents; it only attaches a group name to each API method. In the UI, this renders like so:

Swagger UI showing groups for APIs

Create Separate API Specs

To create separate specs, you need to configure Swashbuckle to generate multiple documents and show it how to divide up the methods.

Configure the documents

Back in Startup.cs, we update the configuration with this:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts =>
{
  // incorporate the XML documentation
  opts.XmlPath = "Bmazon.xml";

  // set up an "Everything" document and 2 documents with the 
  // same names as the group names used in the code
  opts.Documents = new SwaggerDocument[] {
    new SwaggerDocument()
    {
      Name = "Everything",
      Title = "Bmazon Shopping API",
      Description = "All APIs",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Shopping",
      Title = "Bmazon Shopping API",
      Description = "API for the Shopping Department",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Warehouse",
      Title = "Bmazon Warehouse API",
      Description = "API for the Bmazon Warehouse",
      Version = "1.0"
    }
  };
  //...

We now have one “Everything” document that we’ll use as a default and two others that will be used by their respective clients.

Now let’s configure Swashbuckle so it knows which APIs to put in which documents.

Update the OpenAPI Functions to support individual API Specs

In that same configuration method in Startup.cs, we also need to add the following:

opts.ConfigureSwaggerGen = genOpts =>
{
  // configure the separate document inclusion logic
  genOpts.DocInclusionPredicate((docName, apiDesc) =>
  {
    // generating the "everything" doc? then include this method
    if (docName == "Everything")
      return true;

    if (!apiDesc.TryGetMethodInfo(out MethodInfo methodInfo))
      return false;

    // get the value of the [ApiExplorerSettings(GroupName= "foo")]
    var attr = methodInfo.GetCustomAttributes(true)
      .OfType<ApiExplorerSettingsAttribute>().FirstOrDefault();

    var groupName = attr?.GroupName;

    // always return it if it's shared. Otherwise compare doc names
    return groupName == "Shared" || groupName == docName;
  });
};
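The predicate above boils down to a pure decision on the document name and the method's group name. Here is a minimal, standalone sketch of that logic (the ShouldInclude helper is hypothetical, written only to illustrate the inclusion rules):

```csharp
using System;

// Mirrors the DocInclusionPredicate above: the "Everything" document takes
// every method; otherwise a method is included when its group name matches
// the document name, or when it belongs to the shared group.
static bool ShouldInclude(string docName, string groupName) =>
    docName == "Everything"
    || groupName == "Shared"
    || groupName == docName;

Console.WriteLine(ShouldInclude("Shopping", "Warehouse")); // False
Console.WriteLine(ShouldInclude("Shopping", "Shared"));    // True
```

Walking through the cases this way makes it easy to verify that the “Shared” method lands in all three documents while the others appear only in “Everything” and their own group's document.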

Add Function Support For Selecting A Group

To allow clients to select a specific group, we need to modify the JSON and UI OpenAPI functions to accept a new route parameter called “group” (defaulting to “Everything”).

/// <summary>
/// function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the document to get (default: "Everything")</param>
/// <returns>the JSON data as an http response</returns>
[SwaggerIgnore]
[FunctionName(nameof(OpenApiJson))]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/json/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  return Task.FromResult(swashbuckleClient
    .CreateSwaggerJsonDocumentResponse(req, group ?? "Everything"));
}

/// <summary>
/// the function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the optional document from the URL (default: "Everything")</param>
/// <returns>the HTML page as an http response</returns>
[SwaggerIgnore]
[FunctionName(nameof(OpenApiUi))]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/ui/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  // the CreateSwaggerUIResponse method generates the HTML page from the JSON Function results
  return Task.FromResult(swashbuckleClient.CreateSwaggerUIResponse(
    req, $"openapi/json/{group ?? "Everything"}"));
}

Now, when you run the functions, you can retrieve a separate API spec for each client by appending the document name to the URL (e.g. “http://localhost:7071/api/openapi/ui/Shopping”), so each client will only know about the APIs they can call. To lock this down further, you can add authorization to the specific endpoints at a later time, possibly with Azure API Management.

Swagger UI showing the shopping APIs

Swagger UI showing the Warehouse APIs

In the future, rather than using security through obscurity, you can import these separate OpenAPI JSON files into Azure API Management to lock down the individual APIs by client, but we’ll leave that as an exercise for the reader.

Conclusion

Now that you have gone through these three articles, you have self-documenting APIs separated into different groupings that you can expose to individual clients. All you need to do is properly comment your code and decorate the Functions with the proper attributes, and you and your clients will be very satisfied.

Get the completed code from GitHub

Creating Self Documenting Azure Functions with C# and OpenAPI: Part Two

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.
Azure Functions let you focus on what counts — your requirements, your time and your code — and less about boilerplate code, infrastructure and processes.

When creating new APIs in any technology, one thing is very important: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You have knowledge of software development fundamentals
  • You have completed Part One of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and even enables API clients to automatically generate client code to interact with your APIs. It also helps with shelf life: if, six months or a year down the road, we decide a better approach is needed, clients can simply regenerate their code.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

We Need Better Documentation

In the previous post, we got the Azure Functions application to start generating OpenAPI docs for our functions, but the results were somewhat underwhelming:

Swagger UI page for the CreateOrder operation showing very little detail

Here you can see that, even though the CreateOrder call takes an Order object in the body of the HTTP POST, there is no documentation describing this. This is because, unlike in traditional .NET Core APIs, the Order is not a parameter to the function. Swashbuckle only has access to the function signature and anything that can be discovered through reflection.

This output is not very helpful to our clients. They need to know our inputs, the potential HTTP codes to expect (it currently just assumes a 200), and other pertinent information, like what the method does and what the return data will look like.

For instance, if we add data validation to this method, we may wind up returning a 400 (Bad Request). We could also possibly return a 409 (Conflict) if the order already exists.
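The mapping from those outcomes to status codes can be sketched as a tiny pure function. This is purely illustrative (the StatusFor helper is hypothetical and not part of the article's code); the actual validation logic would live inside the function body:

```csharp
using System;

// Hypothetical mapping from validation outcomes to the HTTP status codes
// described above. The real checks would inspect the incoming Order.
static int StatusFor(bool isValid, bool alreadyExists)
{
    if (!isValid) return 400;      // Bad Request: data validation failed
    if (alreadyExists) return 409; // Conflict: the order already exists
    return 200;                    // Success
}

Console.WriteLine(StatusFor(true, false)); // 200
```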

Since you’re reading this, you know there is a way to do this. Let’s get to it.

Give Swashbuckle More Info

To make the OpenAPI documentation more useful, we need to add a few things that Swashbuckle can use when generating the docs.

As I stated previously, Swashbuckle only has access to things that can be discovered through reflection: the definition of your function, its parameters, and any attributes decorating it. So the following translates to very little information.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    HttpRequestMessage req,
    ILogger log)

This doesn’t even have any information about the Order type that is expected in the body of the method, never mind return codes.

Expected Body Type

To document the type expected in the body of the POST, we need to tell Swashbuckle what to expect. We do this by using the RequestBodyType attribute from the AzureFunctions.Extensions.Swashbuckle.Attribute namespace.

Note that this is an additional attribute on the req parameter on top of the existing HttpTrigger attribute.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    [RequestBodyType(typeof(Order), "The Order To Create")] // Describes the Body
    HttpRequestMessage req,
    ILogger log)

With this in place, Swashbuckle knows what type the body contains and we now see that the body type is present in the UI:

The Create Order Swagger UI with body type specified

If you click on the “Schema” link, you will even see the data type names being used:

Create Order Swagger UI showing Schema of the Order type

Note that the Items array is marked nullable: true which is not desirable. We will address that below in the Data Annotations section.

The bottom of the page also shows all the currently known objects in the Schemas section:

Swagger UI showing all the Schemas

This information documents all the details about the DTOs being used in this API. But we need to fix the nullability and other validation-related information.

Add Data Annotations

Above, the Order‘s Items collection was marked as nullable. We want to fix that and other validation information that Swashbuckle can read. To do that, we need to add Data Annotations to the object definitions.

Currently, the Order looks like this:

public class Order
{
  public int OrderId { get; set; }
  public IEnumerable<OrderLineItem> Items { get; set; }
}

In order to tell Swashbuckle (and our clients) that the Items collection is required, we mark it with [Required] (from System.ComponentModel.DataAnnotations) and [NotNull] (from System.Diagnostics.CodeAnalysis).

The NotNull attribute is also needed because OpenAPI, not being language-specific, distinguishes between a field holding a null value and the field being absent altogether. JavaScript developers will recognize this distinction as null versus undefined.

So, in order to tell clients that the field MUST have a value, you need to add both attributes to the Items property.

  public class Order
  {
    public int OrderId { get; set; }

    [Required, NotNull]
    public IEnumerable<OrderLineItem> Items { get; set; }
  }

The results:

Create Order Swagger UI with corrected Required fields

Note the red "*" (meaning required) next to the Items collection, and the absence of nullable: true.
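With both attributes applied, the generated schema for Order changes shape. Roughly, it looks like the following (a hand-written sketch of the schema, not the exact Swashbuckle output):

```yaml
Order:
  type: object
  required:
    - items            # added by [Required]
  properties:
    orderId:
      type: integer
      format: int32
    items:             # no "nullable: true" anymore, thanks to [NotNull]
      type: array
      items:
        $ref: '#/components/schemas/OrderLineItem'
```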

To properly annotate the objects, we’ll mark the ID, Quantity, and SKU as required as well. Additionally, we’ll add reasonable [Range] and other appropriate restrictions:

public class Order
{
  // if we get to 2 billion orders, we'll all be retired anyway
  [Required, NotNull, Range(1, int.MaxValue)]
  public int OrderId { get; set; }

  [Required, NotNull, MinLength(1)]
  public IEnumerable<OrderLineItem> Items { get; set; }
}

public class OrderLineItem
{
  [Required, NotNull, Range(1, 1000)]
  public int Quantity { get; set; }

  [Required, NotNull, MinLength(1)]
  public string SKU { get; set; }
}

So, the final schema looks like this:
Create Order Swagger UI with Full data annotations

Now your clients know the simple data validations for these objects, but they still don’t know what the return payloads and HTTP codes are.

Potential Return Types and Codes

By default, Swashbuckle will tell the clients to expect a 200 (Success) HTTP result with no payload.

Swagger UI showing only a 200 response

This doesn’t include any information about any payload sent back to the user and is most likely incorrect or at least not the whole story.

If we know our Function is going to return multiple HTTP codes with different payloads, we need to tell Swashbuckle by using the [ProducesResponseType] attribute on the Function itself.

Assuming we return the following:

  • 200/Success with a string message payload
  • 400/BadRequest with a collection of error messages

We decorate our function like this:

[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(

This results in the following:

Swagger UI - multiple response types with http codes

So, we’ve now exposed the input and output types, but we haven’t added any information describing the objects or fields themselves. To do that, we need to add XML comments to the output as well.

Of course, you were already commenting your code properly, right? RIGHT?

Better Comments in the OpenAPI Spec

One thing that you may notice is that, at the top of the function, there is very little information about the method except the name (e.g. “CreateOrder”). We should add a summary about the method.

Now, I need to apologize because I lied to you. Previously, when I said “Swashbuckle only has access to things that can be discovered through reflection”, I was lying (Forgive me!). To give client devs more information about the methods being exposed by an API, we can add C# XML Documentation information to the code and, if configured for it, Swashbuckle will incorporate that too, which can be invaluable.

Add XML Comments

We now add comments like this to our C# code (Functions and DTOs)

/// <summary>
/// Creates an Order that will be shipped to the Warehouse for fulfillment.
/// </summary>
/// <param name="req">the HTTP request</param>
/// <param name="log">the logger</param>
/// <returns>a success message or a collection of error messages</returns>
/// <response code="200">
///   Indicates success and returns a user-friendly message
/// </response>
/// <response code="400">
///   Indicates a data validation issue and will return a list of data validation errors
/// </response>
[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run() {
// ...
}

/// <summary>
/// An Order sent from the Shipping Division to be sent to the Warehouse
/// </summary>
public class Order
{
//...
}

The Swagger UI won’t have changed yet. In order for the Swashbuckle library to read this information, you need to tell the C# compiler to generate the documentation in an XML file and tell Swashbuckle about it.

Generate XML Doc file

At the top of the csproj file, add the following line to the first <PropertyGroup> you see.

<DocumentationFile>Bmazon.xml</DocumentationFile>

If you are using Visual Studio, you can access this setting from the Build tab in the project settings.
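For reference, the top of Bmazon.csproj would then look something like this (the TargetFramework and AzureFunctionsVersion values are assumptions based on the tooling used in part one; yours may differ):

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp3.1</TargetFramework>
  <AzureFunctionsVersion>v3</AzureFunctionsVersion>
  <!-- tells the compiler to emit the XML doc file Swashbuckle will read -->
  <DocumentationFile>Bmazon.xml</DocumentationFile>
</PropertyGroup>
```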

Now tell Swashbuckle about the XML file

Currently, we’re configuring Swashbuckle in the Startup.cs file with:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly());

Replace this with:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts => {
  opts.XmlPath = "Bmazon.xml";
});

Now, when you rerun the app, the final result will be a page with the new title and the order schema will have much more detail.

Swagger UI showing Create Order Scheme with XML comment information

The users of your service will thank you for documenting your APIs this thoroughly. Additionally, they won’t have to ask you questions about how to use the APIs, and they can even generate client code with various tools.

Get the completed code from GitHub

Next Steps

Now that you have really descriptive documentation for your APIs being automatically generated, your security folks may not like you sending all your API documentation to every single client, regardless of need.

In part three of the series, I will show you how to separate the APIs out into separate groupings and keep your clients all in their own lane.


Creating Self Documenting Azure Functions with C# and OpenAPI: Part One

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is very important: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog series assumes the following:

  • You are familiar with C#.
  • You have knowledge of software development fundamentals.
  • You are comfortable with command-line interfaces.

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and even enables API clients to automatically generate client code to interact with your APIs. It also helps with shelf life: if, six months or a year down the road, we decide a better approach is needed, clients can simply regenerate their code.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

Creating the App

To create the app, we will start with the Azure Functions Core Tools. At the time of this writing, the current version of these tools is 3.0.3477.

NOTE: This version uses version 3.1 of the dotnet CLI internally, so if the dotnet executable on your path is a different version, you could run into errors. If you do, this can often be fixed by adding a global.json file in the current directory with the following content, which tells the dotnet CLI to use whatever 3.1.x SDK you have installed.

{
  "sdk": {
    "version": "3.1.0",
    "rollForward": "latestMinor"
  }
}

At the PowerShell prompt, we’ll run the following to create our project:

C:\dev> func --version
3.0.3477
C:\dev> func init Bmazon --worker-runtime dotnet

Writing C:\dev\Bmazon\.vscode\extensions.json

This will create the shell of a project inside the C:\dev\Bmazon folder.

While creating the app, I’ve copied in an OrderService and the related DTOs from the existing application we’re modernizing, to be used by the new functions we are creating. You can see the completed code on GitHub; you’ll see a bit more of them in the next article.

Learn more about Azure Functions From Microsoft Docs

Add Functions

We’re going to add three different functions to our app.

Shopping API

The Shopping division needs to call HTTP APIs to make an order to the warehouse, so we will add a CreateOrder function that performs this action.

(This can be done interactively by running func new and following prompts, but using the command line parameters is more concise.)

C:\dev\Bmazon> func new --template HttpTrigger --name CreateOrder `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: CreateOrder

The function "CreateOrder" was created successfully from the 
"HTTPTrigger" template.

Strangely, it outputs a prompt to select the template even when you have passed in the selection as a parameter. You can ignore this.

Warehouse API

Later in our process, the Warehouse team needs to call an HTTP endpoint to send tracking information back to the Shopping division.

We will follow the pattern above and create an API for them to call.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShipped `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShipped

The function "OrderShipped" was created successfully from the 
"HTTPTrigger" template.

Shared APIs

Since both the Shopping and Warehouse divisions will need to check on the status of an order at various times, there will be a shared function to check status.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShippingStatus `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShippingStatus

The function "OrderShippingStatus" was created successfully from the 
"HTTPTrigger" template.

Code Cleanup

We’ll do a bit of code cleanup before moving on.

Choose GET or POST

If you look at the code, you’ll notice that, by default, the Functions were created supporting both GET and POST.

public async Task<IActionResult> Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
   HttpRequest req
...

We can fix that by editing each function and removing either "get" or "post" as appropriate: CreateOrder and OrderShipped should accept only POST, while OrderShippingStatus should accept only GET.
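After the cleanup, the HttpTrigger attributes on the three functions would look something like this (the routes shown are assumptions matching the URLs that appear when the app runs later in this article; the generated templates start with Route = null):

```csharp
// CreateOrder: POST only
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]

// OrderShipped: POST only
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order/shipment")]

// OrderShippingStatus: GET only
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "order/shipment/{id}")]
```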

Organizing the Code

The func calls above will create all the Function files in the top folder. We’ll move ours into a Functions folder to keep things cleaner. They all just happened to start with “O”, so we can be terse.

C:\dev\Bmazon> mkdir Functions; mv O*.cs Functions\

Add OpenAPI Document Generation

In order to add OpenAPI support to Azure Functions, I chose the Swashbuckle library. There are a few other libraries out there for working with .NET and OpenAPI, but I chose Swashbuckle because I’m familiar with it.

Installing the Package

The core Swashbuckle project doesn’t support Azure Functions directly, so I used the AzureExtensions.Swashbuckle package, a nice extension written by Vitaly Bibikov.

To install it:

C:\dev\Bmazon> dotnet add package AzureExtensions.Swashbuckle

  Determining projects to restore...
  Writing C:\Users\XXX\AppData\Local\Temp\tmp69AA.tmp
info : Adding PackageReference for package 'AzureExtensions.Swashbuckle' into project 'C:\dev\Bmazon\Bmazon.csproj'.
info : Restoring packages for C:\dev\Bmazon\Bmazon.csproj...
...
...
info : Committing restore...
info : Generating MSBuild file C:\dev\Bmazon\obj\Bmazon.csproj.nuget.g.props.
info : Writing assets file to disk. Path: C:\dev\Bmazon\obj\project.assets.json
log  : Restored C:\dev\Bmazon\Bmazon.csproj (in 525 ms).

Setting up Swashbuckle

In order to configure Swashbuckle, your Functions App needs a Functions Startup class like the following, which we’ll put in Startup.cs in the Bmazon folder.

using System.Reflection;
using AzureFunctions.Extensions.Swashbuckle;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(Bmazon.Startup))]
namespace Bmazon
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      builder.AddSwashBuckle(Assembly.GetExecutingAssembly());
    }
  }
}

Exposing OpenAPI Endpoints

Your code will also need to expose the OpenAPI JSON and UI endpoints as HTTP-triggered Azure Functions so that client code can load them on demand.

(Adding them in a single OpenApi\OpenApiFunctions.cs file for now)

using System.Net.Http;
using System.Threading.Tasks;
using AzureFunctions.Extensions.Swashbuckle;
using AzureFunctions.Extensions.Swashbuckle.Attribute;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

namespace Bmazon.OpenApi
{
  public static class OpenApiFunctions
  {
    [SwaggerIgnore]
    [FunctionName("OpenApiJson")]
    public static Task<HttpResponseMessage> RunJson(
        [HttpTrigger(
            AuthorizationLevel.Anonymous,
            "get", Route = "openapi/json")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerJsonDocumentResponse(req));
    }

    [SwaggerIgnore]
    [FunctionName("OpenApiUI")]
    public static Task<HttpResponseMessage> RunUi(
        [HttpTrigger(
            AuthorizationLevel.Anonymous,
            "get",
            Route = "openapi/ui")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      // CreateSwaggerUIResponse generates the HTML page from the JSON results
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerUIResponse(req, "openapi/json"));
    }
  }
}

This sets up two new Functions on the openapi/json and openapi/ui URLs to serve the JSON document and the Swagger UI, respectively. The [SwaggerIgnore] attribute causes Swashbuckle to ignore these API methods for document generation purposes.

Generate and View the API Documentation

NOTE: You must have the Azure Storage Emulator or Azurite RUNNING locally in order for this to work properly.

C:\dev\Bmazon> func start
Microsoft (R) Build Engine version 16.8.3+39993bd9d for .NET
Copyright (C) Microsoft Corporation. All rights reserved.

  Determining projects to restore...
  Restored C:\dev\Bmazon\Bmazon.csproj (in 840 ms).
  Bmazon -> C:\dev\Bmazon\bin\output\bin\Bmazon.dll

Build succeeded.

Time Elapsed 00:00:05.60

Azure Functions Core Tools
Core Tools Version:       3.0.3284 Commit hash: 98bc25e668274edd175a1647fe5a9bc4ffb6887d
Function Runtime Version: 3.0.15371.0

[2021-02-27T15:05:33.871Z] Found C:\dev\Bmazon\Bmazon.csproj. Using for user secrets file configuration.

Functions:

  CreateOrder: [POST] http://localhost:7071/api/order
  OpenApiJson: [GET] http://localhost:7071/api/openapi/json
  OpenApiUi: [GET] http://localhost:7071/api/openapi/ui
  OrderShipped: [POST] http://localhost:7071/api/order/shipment
  OrderShippingStatus: [GET] http://localhost:7071/api/order/shipment/{id}

For detailed output, run func with --verbose flag.
[2021-02-27T15:05:41.693Z] Host lock lease acquired by instance ID '000000000000000000000000016514FF'.

If you don’t see that last line after a few seconds, you probably don’t have the storage emulator running.

Take note of the list of functions shown with the URLs next to them, especially the ones starting with “OpenApi”.

If you visit the OpenApiUI URL listed above, you will see the following in your browser:

Rendered Swagger UI displaying the 3 created Operations

That’s it! You now have a modernized serverless architecture with APIs that are documenting themselves!

If you add any new Functions, they will automatically show up here as well. Your clients can download the spec from the JSON endpoint and import the definitions into Postman or client code generators.
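For example, a client could download the spec and generate client code from it. The commands below are illustrative only: they assume the Function App is running locally and that the (real, but not covered in this article) openapi-generator-cli tool is installed.

```shell
# download the generated OpenAPI document
curl http://localhost:7071/api/openapi/json -o bmazon-openapi.json

# generate a C# client from it (any OpenAPI-aware generator works)
openapi-generator-cli generate -i bmazon-openapi.json -g csharp -o ./bmazon-client
```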

Get the completed code from GitHub

Next Steps

Now that you have self-documenting APIs, you may notice that the information in the Swagger UI is rather underwhelming. Due to the nature of Azure Functions, there is very little information that Swagger can glean from the runtime type information it can gather.

In part two of the series, I will show you how to make the documentation MUCH better.


Gain control with Azure Management Groups

As enterprises start to move to Azure, managing subscriptions becomes tedious as the number of subscriptions grows. In an organization, there are usually many employees and, in some cases, many applications. If all these employees are provided Azure subscriptions and start creating Azure resources at will, it may soon become difficult to control, manage, and track who creates what. Eventually, the costs may go out of control. However, organizing your subscriptions using Azure Management Groups can make the job much easier.

What is an Azure Management Group?

Management groups are logical groups for Azure subscriptions, allowing you to organize subscriptions and apply governance controls, such as Azure Policy and Role-Based Access Controls (RBAC), to the management groups. All subscriptions within a management group automatically inherit the controls applied to the management group.
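The inheritance behavior can be sketched as follows. This is an illustrative Python model, not an Azure SDK call; the group and policy names are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Scope:
    """A management group or subscription in the hierarchy."""
    name: str
    assignments: list = field(default_factory=list)  # policies/roles applied directly here
    children: list = field(default_factory=list)     # child groups or subscriptions
    parent: "Scope" = None                           # each scope has at most one parent

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def effective_assignments(self):
        """Direct assignments plus everything inherited from ancestor scopes."""
        inherited = self.parent.effective_assignments() if self.parent else []
        return inherited + self.assignments

root = Scope("Tenant Root Group", assignments=["Require resource tags"])
prod = root.add_child(Scope("Production MG", assignments=["Deny public IPs"]))
sub = prod.add_child(Scope("App Subscription"))

# The subscription picks up both controls without any direct assignment.
print(sub.effective_assignments())
```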

When do you need to organize your subscriptions?

Rules to consider when organizing

  • The following can be accomplished through Management Groups:
    • Group subscriptions according to your organizational model and assign controls once to apply them to all subscriptions in the group.
    • Create a flexible hierarchy that can be updated quickly and can easily scale up or down depending on the organization’s needs.
    • Use Azure Resource Manager to integrate with other Azure services like Policy, Cost Management, Blueprints, and Security Center.
  • A few questions to answer before you create a management group hierarchy:
    • What kinds of workloads are in each subscription?
    • What environment is each subscription for?
    • Which departments and teams own them?
    • What security rules are applied to these workloads?

What is a Root Management Group?

Benefits of Azure Management Groups

  • Provides a scope higher than a subscription to manage access, policies, and compliance efficiently.
  • Aggregated views above the subscription level.
  • Inheritance allows for a single assignment of controls that apply to a grouping of subscriptions.
  • Create a hierarchy of Management Groups that fit your organization.
  • Management Groups can be scaled up/down as needed.
  • Management tree can support up to six levels.
  • Each management group and subscription can only support one parent.
  • Each management group can have multiple children.
  • All subscriptions and management groups fold up to one root management group within the directory.
  • A newly onboarded subscription is added to the Root Management Group by default.

Limitations of Azure Management Groups

  • A single directory can support up to 10,000 management groups.
  • Management Group tree can support up to six levels of depth.
  • Each management group and subscription can only support one parent.
  • Each management group can have multiple children.
  • Root management groups can’t be moved or deleted, unlike other management groups.

Role-Based Access (RBAC) in Management Groups

  • Apply RBAC roles at the Management Group scope to manage access from one central place instead of configuring each subscription individually.
  • Azure roles can be applied to Management Groups, then inherited to all management groups and their subscriptions.
  • Classic admin roles can’t be applied at the Management Group level and need to be applied at the subscription level.
  • It is not possible to define a custom RBAC role at a Management Group.
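For example, a built-in role can be assigned once at the management group scope with the Azure CLI; the assignee and the group name below are placeholders:

```shell
# One assignment at the management group scope is inherited by
# every subscription and resource underneath it.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/providers/Microsoft.Management/managementGroups/my-mg"
```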

Azure Account Roles

Azure Blueprint to Management Group

Azure Blueprint is a feature that allows defining a package of artifacts (resource groups, Azure policies, role assignments, Resource Manager templates, and more) targeted at Management Groups and Azure subscriptions to create consistent and repeatable environments.

What is an Azure Blueprint?

Azure Policy Management Group

  • Management groups are a convenient place for defining Azure ARM policies.
  • You can apply Azure Policy, the service that allows you to create, assign and manage Azure policies at Management Group.
  • All these policies are then applied to all resources under the management group.
  • Allows your organization to set up Azure environments with consistent security policies.
  • The ARM policy violations can be seen either in the Azure Policy blade or in Azure Security Center.
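As an illustration, a simplified custom policy definition like the one below could be assigned at a management group so that every subscription beneath it inherits the rule. This is a hypothetical example, not taken from a production environment:

```json
{
  "properties": {
    "displayName": "Require a costCenter tag on resources",
    "policyType": "Custom",
    "mode": "All",
    "policyRule": {
      "if": {
        "field": "tags['costCenter']",
        "exists": "false"
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```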

Easing Subscription Management

This blog has explained the Azure Management Group concept and how you can use this service to ease subscription management. Other services can be used at the Management Group level but will be applied or inherited at the subscription level. Consider using Management Groups when the number of subscriptions, departments, applications, and users grows.

Whether you have an Enterprise Agreement, Cloud Solution Partner, Pay-As-You-Go, or any other type of subscription, this service gives all Azure customers enterprise-grade management at a large scale for no additional cost.


Microsoft Azure Government DC is a meetup group created for IT professionals modernizing Government, bringing real-world lessons to innovators across the public sector. AIS has supported and presented during these events since there were just five members. Now, the group is nearing 4,000. The July meetup focused on getting your agency to next-level cloud adoption with Microsoft Azure. Check out the recording and overview below.

Here’s What You Missed

Cloud Adoption has come a long way over the years. We have gone from a basic “lift and shift” model to migrating priority workloads to the cloud and optimizing both for high-security workloads and to tap into cloud-native services. If one thing is clear, hybrid capabilities are critical, and it is important to start thinking about the challenges as we move legacy IT infrastructure to the cloud. Two Microsoft Federal CTOs, Susie Adams (Sales) and Kent Cunningham (Civilian), talked about the changes Microsoft has been making to simplify migration processes. AIS Chief Technology Officer (CTO) Vishwas Lele moderated a panel discussion with our customer, Richard Inzunza from Immigration and Customs Enforcement (ICE), who provided excellent insights into cloud adoption from his experience. The panel also included Jorge Gallardo, Program Manager from Peraton, who discussed his experience with regulated environments in the cloud.

Watch the Full Session:

Session Recap

Challenges when Migrating Workloads

Organizations can take their cloud adoption to the next level with Microsoft Azure when moving priority workloads to the cloud.
Microsoft breaks out the following focus areas based on maturity:

  • Identity and Security
  • DevOps and DevSecOps
  • Data
  • Cloud-Native
  • The Edge

When organizations migrate their workloads to the cloud but have a diverse IT estate, it poses challenges in managing security, controlling access to their data, and understanding where that data lives. As a result, Microsoft has brought together the tools and resources that customers need to easily manage their workloads and simplify the migration process in a multi-cloud world.

Many customers come with questions on how to implement and manage Infrastructure as a Service (IaaS) while meeting regulatory requirements like FedRAMP and HIPAA. Microsoft has a portal full of Azure Blueprints that allow organizations to select and deploy the chosen environment inside of their subscription. The goal? To simplify the deployment of these methods with peace of mind that they align with regulatory and compliance standards.

What Tools Can We Use to Simplify Migration?

Mission Landing Zones are highly opinionated templates that IT oversight organizations can configure or customize to quickly create a cloud management system. These are then deployed on Azure environments for their teams. Based on the Terraform platform, this will provide a secure, scalable landing zone for customer workloads. Mission Landing Zones enable expedited Cloud Adoption across Commercial, IL4, IL5, & IL6.

Customers’ environments are becoming increasingly complex, and Azure is helping organizations securely and efficiently move to the cloud by creating a single management experience for your entire environment. We recognize that companies are struggling with multiple different environments, and we are focusing on providing companies with granular access.

Azure Hybrid is a cost savings benefit that lets you bring your existing on-premises Windows Server and SQL Server licenses with active Software Assurance or subscriptions to Azure.

Azure Hybrid consists of the following:

  • Azure Stack: A portfolio of products that extend Azure services and capabilities to your environment of choice – from datacenter to edge locations and remote offices.
  • Azure Arc: Enables you to manage your entire environment, with a single pane of glass, by projecting your existing non-Azure, on-premises, or other cloud resources into Azure Resource Manager.
  • Azure Internet of Things (IoT): A collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets hosted in the cloud.
  • Azure Lighthouse: Facilitates a secure relationship between companies and their managed service providers while providing on-demand transparency into vendor access and actions.

Implementing Capabilities

With the release of the Cybersecurity Executive Order in May 2021, Microsoft is developing new ways to support and deploy these capabilities while meeting security and compliance standards.

  • Enable security modernization: Help empower security teams to combat the increase in sophisticated attacks.
  • Increase collaboration: Improve information sharing and security practices across public and private sector organizations.
  • Deliver trusted and secure services: Build trust in government by securing information, improving software supply chain, and facilitating compliance.

Zero Trust architecture is based on the principle: never trust, always verify. This security approach protects customers by managing and granting access based on the continual verification of identities, devices, and services. Zero Trust architecture addresses challenges modern enterprises face. Microsoft Threat Protection powered by Azure is a comprehensive and seamless integration solution that provides end-to-end security for your organization using tools like Azure Sentinel and M365 functionalities. Learn more at https://aka.ms/cybereo.

Panel Discussion

A panel discussion was led by AIS CTO, Vishwas Lele. We were honored to have an AIS customer, Richard Inzunza, IT Specialist from the Department of Homeland Security speak on the panel. He was joined by Jorge Gallardo, Program Manager from Peraton.

Richard has been with the Federal Government for 36 years and with ICE since its inception in 2003. He has been a key player in the implementation of their hybrid cloud environment. ICE is in the process of building, extending, and expanding their ability to use cloud services from more than one service provider. AWS (Amazon Web Services) and Microsoft Azure are their biggest providers today, but their focus is to be able to take any valuable cloud service and integrate it into the ICE cloud to pass these capabilities onto their users and employees.

Common Challenges

There are several challenges Richard and ICE face in their line of work. Physical servers are no longer the main source for storing data, and helping customers understand the virtual aspect and how data is managed has been a challenge. Getting development teams, IT PMs, and other support teams to understand how to apply concepts of virtualization is extremely important for future development.

Many developers want to provision a capability without a true understanding of how this can open ICE to vulnerabilities. To address this ongoing challenge, they are helping their teams understand the responsibility level around cost and actions taken when provisioning new capabilities. Creating a vehicle that is compliant and future-proof is imperative for federal organizations to adapt and free time up for other key focuses. ICE’s goal is to get their teams to automate the delivery of their releases for their custom and third-party applications using pipelines.

Adjusting to a new virtual culture and applying security to a specific type of environment is a challenge that the assurance side of government IT is facing. ICE partnered with Peraton early on to align the implementation phase to begin their journey to the cloud. With this joint effort, three years later, ICE security teams are becoming more familiar with virtual environments at the beginning phase.

Ensuring Compliance

Policy Compliance and Security Compliance are two types of compliance that ICE operates within. For policy compliance, tagging is one method ICE uses, along with serverless Lambda scripts, to enforce compliance. They also maintain databases that store the valid tag values for metadata that correlates with an infrastructure or application. Ensuring that type of policy compliance helps management and administration trust that the information they pull is accurate.

Security Compliance is now managed with advanced scanning tools and checks that detect when a policy has been adjusted. With accurate scanning, Richard is notified when policies have been adjusted and can reach out to the appropriate network to validate.

AIS: Your Trusted Azure Partner

We help government organizations create cohesive multi-cloud strategies, bringing the expertise you require for successful cloud adoption, modernization, data intelligence, and beyond. At AIS, we help you take a step back and look at the requirements needed and what services can be used with Azure or other tools to meet needs, offering templates and support for documentation. Our scalable cloud solutions address our clients’ most complex concerns, with deep experience across national security and military operations, as well as Federal, State, and Local Governments and their supporting agencies. We have been working with Azure for 12+ years and will have you well on your way to realizing the best that the cloud can offer.

Join us for future virtual meetings at the Microsoft Azure Government User Community: https://www.meetup.com/dcazuregov/.

Our team enters the fiscal year 2022 with excitement, intention, and confidence. In 2022, AIS turns 40. We’ve thrived through four decades of IT consulting and continue to achieve record growth numbers while keeping pace with technology advancements.

As we enter another year alongside our partners and teammates at Microsoft, we reflect on and take great pride in our accomplishments and the innovative, dedicated people that help drive them forward.

It is also the time of year we look ahead to what’s next and sharpen our strategy for the next fiscal year. But, even as new technologies and tactics emerge, for AIS, a few things remain constant:

  • Our Focus on Client Success: First and foremost is the success of the organizations we serve. The impact we have on our clients, their employees, and their customers is what keeps us motivated. This dedication to client success is core to the solutions we build and the problems we solve. Speed and scalability are consistent outcomes of our digital transformation strategies.
  • Our Investments in Microsoft: Microsoft technology is in our DNA. As a partner since 1994, we invest in Microsoft’s vision by training our people, working with product engineering, and developing intellectual property (IP) to meet the needs of customers.
  • Our People’s Passion: Our employees are passionate about cloud transformation. Passion is a key contributor to the company’s success and plays a crucial role in gathering input and analysis from a diverse set of disciplines. Just check out our blog to see the wide range of thought leadership from AIS employees.
  • Our Focus on Regulated Industry Expertise: AIS has led many Microsoft firsts in Government, Financial Services, and other regulated industries. We are focused on accelerating our clients’ journey to the cloud to meet mission needs and business goals by addressing roadblocks, such as security and compliance challenges, and leveraging automation and IP for speed and scale.

Looking Ahead

We are just as excited about what is on the horizon. Microsoft closed out the end of the fiscal year 2021 with several announcements, such as:

  • Increasing cross-platform synergy for apps and data (like Power BI and Azure Synapse)
  • Significant investments in business applications
  • Emphasis on security
  • Continued focus on industry and customer success

It’s an exciting time to be a Microsoft partner! The potential results of the collective Microsoft cloud ecosystem and technology and services partners that extend these capabilities are overwhelming.


Vishwas Lele AIS

“We strive to be the most trusted Microsoft partner for technically challenging, enterprise-level cloud adoption, app modernization, and data work,” said Vishwas Lele, EVP and CTO at AIS, Microsoft MVP, and Regional Director. “Our team continues to lead first-of-their-kind design and build solutions on Azure, Power Platform, and M365 in regulated industries. We earned the 2020 Worldwide Partner of the Year Award. We’re seeing amazing convergence across the various solution areas and look forward to driving these new capabilities into the mission space for each of our clients.”


Our Methodology

Outcomes are at the center of our approach. Whether it’s delivering 100% uptime and better performance for a mission-critical defense application or using automation to reduce manual processes and advance a business outcome for a Fortune 500 client, we always work from the outcome first.

Delivering results-based solutions means having an arsenal of skillsets that allow technology to meet the needs of users. It takes all kinds of talent, and AIS provides a range of disciplines (e.g., .NET, SharePoint, Power Platform, Data, UX, Knowledge Management, Change Management, Web Dev, and more).

We believe in multiple “deployment” models for modern, scalable, and intelligent app and data projects. This includes composite applications leveraging Microsoft 365, Dynamics, and Power Platform, with Azure at the tip of the spear.

Fostering a community that cuts across diverse backgrounds, cultures, experiences, and skills is a significant core value at AIS.  The challenges of remote work won’t stop us from getting together, sharing best practices, and staying connected as a community! The growing demand for low code with both new and longtime SharePoint, Dynamics, and Azure customers is exciting to see. The early indicators of swift transformation these capabilities have on organizations are gaining attention from business leaders and promoting valuable thought and solution design diversity.

The AIS core values are rooted in education and knowledge share. We invest in our employee’s education with aisUniversity, boot camps, morning learning series, lunch and learns, self-directed educational materials, certification bonuses, and more. These learning opportunities allow individuals across projects and communities of interest access to new skills and approaches.

We put humans at the focal point of technology. Engagement from all corners of the business world is a sure indicator of what lies ahead with advancements in low code solutions. We’re helping business users, most of whom have no previous technical background, quickly embrace new technology and enable them to scale solutions so we can focus on solving their most challenging technology problems.

Our Investment in People and IP

AIS CDF

An investment in people, process, and tooling is an investment in the success of our employees and the organizations they support. Last year, we built the AIS Cloud Delivery Framework (CDF). The CDF provides opinionated guidance and resources which help our clients modernize in the cloud. The CDF combines our experience executing enterprise cloud projects with the techniques presented in Microsoft’s Cloud Adoption Framework (CAF).

The CDF includes backlogs, deliverable samples, starter files, project experience write-ups, and much more. On top of this foundation, our teams continue to develop resources for business applications, cloud security/compliance, governance, data & AI, and many more.

These innovations would not be possible without the support of the AIS Cloud Acceleration Hub (HUB). The HUB is a dedicated team of AIS consultants organized to help our project teams deliver successful cloud solutions. The HUB team consolidates knowledge and experience to provide rapid research and guidance services for AIS delivery teams at no additional cost to our customers.

Critical to the AIS growth plan is our culture. We’ve added an essential role to our leadership team, VP of Culture and Employee Experience. We are proactively hiring and developing skills around emerging areas of our business (e.g., data intelligence, business applications, RPA, user experience, and knowledge management). We’re heading into the next fiscal year with a vibrant (and growing) team, and we couldn’t be more excited about what comes next.


Partnering with AIS

Microsoft Gold Partner

AIS prides itself on being one of the few Microsoft partners that consistently delivers challenging, enterprise-level work across all Microsoft cloud platforms (Azure, Microsoft 365, Dynamics, and Power Platform). We support the entire Microsoft cloud ecosystem.  We deliver foundational landing zones, security & governance, complex app development and modernization, and large-scale data and AI projects.

With AIS, you get consultants who offer the best possible solution. These consultants will advance your highest priority goals and objectives.

We bring 13+ years of enterprise experience on Azure, global leadership in Power Platform, and unmatched cloud adoption, security, and governance experience in highly regulated industries.

Along with this experience, our customers also get the full backing of Microsoft. Through our 26+ year partnership, AIS and Microsoft have established close relationships across product engineering, account teams, and executive leadership. This partnership offers our joint customers access to all the resources they need for digital transformation.


So you’ve written Infrastructure As Code, Automated All The Things, and drank deeply of the DevOps kool-aid on your current project – what’s next?

You’ve been reaping the rewards of DevOps on a small scale, and your one-off DevOps effort has earned a lot of positive attention – now management would like you to implement DevOps for all the projects! So how do you spread the DevOps wealth, and what do you need to be aware of?

Delivering DevOps

What is DevOps

For this article, we’ll use “DevOps” as shorthand for the code bits: bash/PowerShell/cmd for scripting, Terraform and CLI scripts for Infrastructure as Code (IaC), and YAML for Azure DevOps CI/CD pipelines. Start by identifying the valuable bits from your current DevOps efforts and use them as the basis for what you want to disseminate to all other projects.

Cross-Project DevOps

Not all projects will have the exact same DevOps requirements. Still, over time you will build up a collection of useful scripts and templates that are generic enough to provide value across all software projects in your organization.

But you can’t simply copy/paste these files into every repo, as that would be a massive headache to manage and keep updated. Instead, you’ll want to version and package these scripts so that every project that adopts the enterprise DevOps approach can track and plan for DevOps package updates.

Custom Package Repository

The easiest way to distribute file packages is through a custom package repository. Chances are your software project is already using at least one of NuGet, NPM, Maven, or PyPI. Azure DevOps can create all of these kinds of repositories, which is how you can seamlessly distribute your company-proprietary DevOps package without making it publicly available. These custom repositories are also handy as a local cache for public packages.

DevOps Integration

Usually, downloaded packages are not committed to the repo; only the reference to the package is, and the build/release agents download the packages as needed. DevOps package files, however, should be committed to the repo so that each project is insulated from upstream changes and can track its local modifications. To do that, include an install script with your package, like this example for an NPM package:

#!/usr/bin/env node

const path = require('path');
const fse = require('fs-extra');
const rootFolder = path.resolve('../..'); // backwards from "./node_modules/<package>"
const installFolder = `${rootFolder}/.devops`;
const oldInstallFolder = `${rootFolder}/.devops.backup`;
const nodePkgFolder = path.resolve('.');
const srcFolder = `${nodePkgFolder}/src`;

let oldFolderRenamed = false;

// rename .devops to .devops.backup
if (fse.existsSync(installFolder)) {
    oldFolderRenamed = true;
    if (fse.existsSync(oldInstallFolder)) {
        console.log('removing last [.devops.backup] folder...');
        fse.removeSync(oldInstallFolder);
    }
    console.log('renaming [.devops] as [.devops.backup]...');
    fse.renameSync(installFolder, oldInstallFolder);
}

// copy package src folder to install folder
console.log('installing devops package...');
fse.copySync(srcFolder, installFolder);

// read the version from package.json and save it to the install folder
const packageJson = fse.readFileSync(`${nodePkgFolder}/package.json`);
const pkg = JSON.parse(packageJson);
fse.writeFileSync(`${installFolder}/pkg_version.txt`, pkg.version);

if (oldFolderRenamed) {
    console.warn('Existing devops configuration has been backed up and replaced, please manually merge your configuration back into the new devops package!');
    console.log(`Please read ${nodePkgFolder}\\changelog.md`)
} else {
    console.log(`Devops package has been installed, please read ${nodePkgFolder}\\readme.md`)
}

This script copies the DevOps package from the node_modules (NPM package cache) directory into the project’s root directory. If the DevOps package directory is already present, it renames the old directory and copies in the new one. It is then trivial to diff the old and new directories for changes and merge them.

For the NPM install, use the --no-save option; since we are using NPM only as a downloader/installer, it doesn’t need to save the reference in package.json.
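A consuming project would then pull the package with a command along these lines (the scoped package name is hypothetical, and this assumes the package wires scripts/install.js into its install lifecycle hook):

```shell
# Fetch and run the package's install script without
# adding an entry to package.json.
npm install --no-save @bmazon/devops-package
```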

Package Versioning

DevOps package files should be tracked in a version control system like Git. Each feature should be developed in a separate branch and merged via pull request, and each PR should create a new version of the DevOps package. You can then DevOps your DevOps by setting up CI pipelines to automatically publish new package versions whenever the master branch changes.
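As a sketch, an Azure DevOps YAML pipeline along these lines could publish the package on every change to master; the feed name is a placeholder:

```yaml
# Publish a new DevOps package version whenever master changes.
trigger:
  branches:
    include:
      - master

pool:
  vmImage: ubuntu-latest

steps:
  - task: Npm@1
    displayName: Publish DevOps package
    inputs:
      command: publish
      publishRegistry: useFeed
      publishFeed: my-devops-feed
```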

Don’t forget to document! Include a readme for first-time consumers and a changelog for updates.

Adopt and Migrate Incrementally

Generally, DevOps development will follow this kind of flow:

  1. Implement a project-specific DevOps functionality
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Repeat

When integrating an enterprise DevOps package, a similar process can be followed:

  1. Migrate project-specific DevOps functionality to use the enterprise DevOps package
  2. Test, Fix and Refine
  3. Generalize and extract useful bits of functionality
  4. Test, Fix and Refine
  5. Merge the generically useful bits into the enterprise DevOps package
  6. Repeat

Standardize and Prevent Making Changes

A critical benefit of the DevOps package approach is that it allows for the standardization of DevOps processes across the enterprise and provides a straightforward way to keep all projects in sync. A bug fix or new process can be quickly rolled out to all consumers of the package.

Standardization could also be accomplished using Azure DevOps Pipelines or Task Groups, but any change to those immediately affects all consumers and can invisibly break things. If the DevOps package is instead stored in the project repo, those projects are insulated from breaking changes and bugs.

Track Changes and Update Easily

Each project will likely have a separate set of custom configuration files that must be modified from the package baseline to customize the build and release pipelines. It is essential to separate these files and keep track of changes to them after a package update. Conversely, all the other files in the DevOps package should not be modified to ensure a smooth update. Suppose one of the package files must be modified. In that case, it should be either A) temporary with the expectation of merging the changes up to the package or B) copied locally to the custom configuration directory so that it is evident that it must be inspected during the package update process.

Enterprise DevOps Principles

To sum everything up, there are several themes here that will ensure a successful enterprise DevOps approach:

  • Consistency – standardize DevOps processes across the enterprise
  • Modularity – design DevOps processes so that each component is focused and reusable – follow DRY and SRP guidelines
  • Resiliency/Stability – make DevOps processes resistant to unexpected changes
  • Traceability – easily understand changes to DevOps processes and merge quickly

Example

Consider this NPM package:

  • scripts/install.js – module install script
  • src/config – all project-specific customization and scripts
  • src/pipelines – enterprise-wide CI/CD pipelines
  • src/scripts – enterprise-wide bash/posh/cmd scripts
  • src/tf – enterprise-wide terraform templates
  • src/Update-DevopsPackage.ps1 – helper script that triggers the download of the latest package
  • changelog.md – description of differences between versions, upgrade notes
  • package.json – NPM publish metadata; contains the package version
  • readme.md – introduction to DevOps package, contains getting started instructions, directory, and file descriptions

And here’s what a consuming project may look like:

  • .devops/config – src/config directory from package
  • .devops/pipelines – src/pipelines directory from package
  • .devops/scripts – src/scripts directory from package
  • .devops/tf – src/tf directory from package
  • .devops/version.txt – text file containing the package version (handy for tracking updates)
  • .devops/Update-DevopsPackage.ps1 – copied from src directory from package
  • src/ – project source code, etc.

Microsoft affirms its commitment to the Java ecosystem by offering an open-source distribution of Java with the Microsoft Build of OpenJDK™. It’s a no-cost, Long-Term Support (LTS) distribution of OpenJDK currently available for preview.

The preview build includes the binaries for Java 11 available on x64 server and desktop environments on Windows, macOS, and Linux. An early-access release for Java 16 is available for Windows on ARM. Microsoft is currently collecting feedback on the packing and installer of the build on various platforms from the users of the preview build and aims to release the GA build by the end of 2021.

Why Is Microsoft Offering a JDK Distribution?

Microsoft has been deeply involved in the Java ecosystem by offering Java tools for Visual Studio Code, supporting Java on Azure, running several internal platforms, and some Azure infrastructure services on Java Virtual Machine (JVM). Microsoft runs more than 500,000 JVMs internally, excluding all Azure services and customer workloads. Azure has experienced significant growth in Java workloads through Azure Spring Cloud, Azure App Service, Azure Functions, and Azure Kubernetes Service. LinkedIn, Minecraft, and Yammer are examples of major internal platforms run on Java.

Below are a few key triggers, beyond the rise of internal JVM workloads, that led Microsoft to enter OpenJDK distribution:

  • Cost reduction: Java support costs on Azure cloud can be reduced by avoiding commercial licenses for Java. It helps to reduce the overall Azure subscription costs by removing the cost passed to other vendors for JDK licenses.
  • Security & performance improvement: The build may contain backported fixes and enhancements for security and performance that may not have been formally backported upstream.
  • Support Java devs: Part of Microsoft’s dedication to better support Java developers on Azure.
  • Become a leader in providing toolkits for the open-source community: Microsoft has been a sponsor and contributor to the Java open-source community through AdoptOpenJDK projects since 2018, and has contributed to OpenJDK by providing more than 50 patches for OpenJDK in the last 18 months, which includes resolutions for issues in macOS packaging, build, infrastructure, and garbage collection.

Benefits to Java Developers on Azure

  • Multi-OS Support: Windows, macOS, and Linux
  • Multi-Environment Support: cloud, local data centers, and user development environments
  • Free to Use: The Microsoft Build of OpenJDK is free to use.
  • Backed by Microsoft: The build is backed by Microsoft and may include backported fixes and enhancements that are not yet available in upstream OpenJDK implementations.
  • Ease of Migration: Smooth and transparent transition for existing workloads.

Limitations

  • Docker image is not available at this point
  • Medium-term support releases of OpenJDK are not supported
  • No API available to access the binary distribution
  • No ARM-based macOS binary at this point

How to Install on Windows

The packages and installer for Microsoft’s OpenJDK preview build are available at https://www.microsoft.com/openjdk#11. Azure customers can try out the preview using Azure Cloud Shell too.

Download the package on the platform of your choice and confirm the Java version. For Windows, the installer takes care of the default location and setting PATH and JAVA_HOME environment variables.
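On Linux or macOS, where there is no installer to do this for you, you would typically extract the tarball and set the environment variables yourself. A minimal sketch, assuming a hypothetical extraction path of /opt/microsoft-jdk-11:

```shell
# Hypothetical install location; point JAVA_HOME at wherever you extracted
# the Microsoft Build of OpenJDK archive.
export JAVA_HOME=/opt/microsoft-jdk-11
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"   # -> /opt/microsoft-jdk-11
# Then run `java -version` to confirm the Microsoft build resolves first on PATH.
```

Add the two export lines to your shell profile to make the change permanent.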


Picture 1: Microsoft Build of OpenJDK on Windows 10
A build of OpenJDK 11 is already available in Azure Cloud Shell, and developers can use it directly from the shell.


Picture 2: Microsoft Build of OpenJDK on Azure Cloud Shell

We expect more enterprise organizations to modernize Java on Azure with the increased support and options from Microsoft. This is welcome news for organizations that have a large investment in Java and have experienced a large increase in the cost of legacy Java workloads. Need a partner to accelerate your modernization journey? Reach out to AIS today.


This past Thanksgiving marked the first anniversary of going live with a SharePoint environment that AIS migrated from on-prem to Microsoft Azure IL5. Since then, our client has experienced 100% uptime during business hours and reduced deployment timelines, from weeks to minutes.

Challenge: Improve Performance, Speed Up Deployments

AIS set out to help a DoD agency that had experienced ongoing service issues with their existing provider while operating their on-prem SharePoint farm. During the year before the migration, the DoD customer experienced three service outages during business hours, which halted the ability to perform mission-critical activities. Additionally, their existing enterprise service provider required a lead-in time of 1-2 weeks to deploy any code changes or new capabilities into the environment. AIS was tasked with building a cloud solution to maximize uptime and accommodate rapid deployments to better serve the fast tempo required by our DoD customer.

Solution: Hybrid IaaS/PaaS in Azure IL5

To provide a solution tailored to maximize uptime and accommodate rapid deployments, AIS architected a DoD first: a hybrid IaaS/PaaS environment in Azure Government IL5 that utilized the DISA Cloud Access Point to integrate with NIPRNet. We leveraged a suite of technologies to employ DevSecOps methodologies, allowing the solution to remain scalable while adhering to industry best practices. By implementing an automated code scanning solution, we reduced deployment lead-in time from weeks to minutes. Our infrastructure as code (IaC) development also drastically reduced the time required to build a new environment from several days to under one hour.

Looking Ahead: Cost-Sharing, Scale Across the DoD

AIS has worked with our DoD customers to offer these cloud services to neighboring agencies to benefit from cost-sharing. In doing so, we have passed on lessons learned and processes that we have developed to share our success across the DoD enterprise. As we grow, we continue to integrate evolving best practices to remain at the DoD DevSecOps initiative’s forefront.

AIS FIRST TO DEPLOY AZURE IL6 ENVIRONMENT AND ACHIEVE ATO!

Have you spent a lot of time getting comfortable using Team Foundation Server (TFS) or Azure Pipelines only to be switched to GitLab CI/CD for a project? That’s what happened to me about four months ago. Since then, I’ve been learning the differences between the tools. There are several differences between Azure Pipelines and GitLab CI/CD that I’ve experienced. This post aims to be a primer for you to understand these differences and save you a lot of research time.

Classic Pipelines vs YAML

I worked with the TFS/Azure Pipelines Classic Editor the most before this project. While it’s a great visual designer for automating builds, Classic pipelines aren’t very portable from project to project or environment to environment, because the exported JSON files reference instances of other resources, like Service Connections, by ID rather than by name. When importing these JSON files into another TFS or Azure Pipelines instance, changes need to be made to update the affected references. This is where YAML, which stands for “YAML Ain’t Markup Language,” comes in. YAML pipelines are the future for both Azure Pipelines and GitLab CI/CD, and YAML references a Service Connection by name, making it easier to reuse the pipeline in other environments as long as the connection name stays the same.

Before diving into YAML for GitLab, I hadn’t realized that YAML isn’t really a language but more of a file format, so I had expected the little Azure Pipelines YAML I had written to translate almost perfectly to GitLab. However, GitLab doesn’t use much of the syntax that Azure Pipelines does. I’ve listed some of the differences I’ve become aware of, but this is likely only a partial list.
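To give a feel for how different the two dialects are, here is the same trivial one-step pipeline sketched in both. Job names and commands are hypothetical, and neither snippet is a complete, production-ready pipeline:

```yaml
# azure-pipelines.yml (Azure Pipelines)
trigger: [main]
pool:
  vmImage: ubuntu-latest
steps:
  - script: echo "building"
    displayName: Build

# .gitlab-ci.yml (GitLab CI/CD) -- note that jobs are top-level keys,
# and there is no `steps` list; each job has its own `script` array.
stages: [build]
build-job:
  stage: build
  script:
    - echo "building"
```

Even in this tiny example, the structural vocabulary (pool/steps vs. stages/jobs) doesn’t map one-to-one.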

(Table: Azure Pipelines syntax and a description of the GitLab CI/CD equivalent.)

For more details, here is the YAML reference for GitLab and Azure.

Integration with Azure

Azure Pipelines offers Service Connections to connect to remote services like Azure, Docker, or GitHub.
GitLab doesn’t have an equivalent offering, so those connections must be made within jobs. One concern is that this forces you to keep the connection credentials in environment variables; when doing so, it’s important to mask sensitive variables like passwords so they don’t appear in job logs. It also means you can’t use Azure Key Vault secrets as pipeline variables, which is something my previous projects used heavily.
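As a small illustration of why masking matters, here is a shell sketch (the variable name and value are hypothetical) of what the runner’s log masking effectively does. In GitLab you would define the variable in the project’s CI/CD settings and check “Masked” so the runner redacts it from job logs:

```shell
# Hypothetical credential; in GitLab this would be a masked CI/CD variable.
AZURE_CLIENT_SECRET="s3cr3t-value"
# Simulate the runner's log masking: the raw secret never reaches the log.
echo "az login --password $AZURE_CLIENT_SECRET" \
  | sed "s/$AZURE_CLIENT_SECRET/[MASKED]/g"   # -> az login --password [MASKED]
```

Without masking, the first `echo` line would land in the job log verbatim, exposing the secret to anyone who can read the logs.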

Runners

While this is project-specific, the GitLab runners made available to us don’t have all the tools my project needs. The team developing the runners has provided some basic images, but they don’t include tools like the Azure CLI or the JFrog CLI, which our environment requires. This will likely be remedied, but in the meantime it has forced us to learn how to install those tools ourselves. That wouldn’t be an issue in a more open environment; however, my project is isolated from the web in the staging and production environments, making it challenging to use tools like Chocolatey, NuGet, or npm to install packages. The other team is still working to resolve this, and the only resolution I see is maintaining their own repository of packages in something like Artifactory and having images or scripts pull from there.

I’m far more comfortable with YAML because of this project. While GitLab is powerful and has received a few excellent features since I began the project, the creature comforts I’m used to with Azure Pipelines (notably the Service Connections) require additional work that seems unnecessary.