Most software applications, whether desktop, mobile, or web, require a database at the backend to store data. Modern applications are highly complex and handle a high volume of data transactions, so we need to test the data stored in and retrieved from the database to make sure it maintains proper data integrity. Any database operation performed by the application is one of four: Create, Retrieve, Update, or Delete.

We were required to build database unit tests in one of our recent client application implementations, as the application is mainly data-centric. The decisive reason to push for database unit testing was to ensure the application maintains data integrity. We needed to develop database test cases that check data integrity and confirm business rules.

For database unit testing, we have the following frameworks that can be used:

  • Microsoft SQL Server Data Tools (Using Visual Studio)
  • tSQLt Unit Testing Framework

This blog post gives a high-level walkthrough of implementing and demonstrating test cases that use tSQLt features against the AdventureWorks sample database. We chose the tSQLt database unit testing framework for Azure SQL Database to implement database unit testing. tSQLt allows us to create isolated test cases defined with the data we need; each test case runs in its own transaction.

A tSQLt unit test case is based on the AAA pattern, which consists of Arrange (set up the test data and objects), Act (run the code under test), and Assert (verify the results):
Rules for Database Testing

Step One: tSQLt Environment Setup

The first step is to install tSQLt in your database.

  • Setup includes a set of objects (tables, stored procedures, functions, and more) that you add to the database you want to test. Download “tSQLt_V1.0.5873.27393.zip” from the tSQLt site, and unzip the file.
  • Run the “tSQLt.class.sql” script against the database. You can find the script in the zip file that you downloaded from the tSQLt site.
    Once the run is successful, you will find the tSQLt schema in the database. Assigned to that schema are the tables, views, stored procedures, and user-defined functions that do all the processing when creating and running test cases against your database. As a best practice, we do not move the objects created by “tSQLt.class.sql” or the unit test cases outside the development database.

Object Explorer

Before you create a test case, you need to create a test class where the test case will be located. A test class is a schema configured with an extended property that tells tSQLt it is a test class. To create the test class, you use the NewTestClass stored procedure, which is part of the tSQLt schema.

NewTestClass Stored Procedure
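For reference, the call is a single statement:

EXEC tSQLt.NewTestClass 'TestPerson';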

The EXEC statement creates a test class named TestPerson. Once we’ve completed this step, we can add one or more test cases to the test class, so let’s get started doing that.

Step Two: Testing the [Person].[GetAddressByCity] Stored Procedure

In tSQLt, a test case is a stored procedure that’s part of a test class and uses tSQLt elements to perform the testing. We can develop and test stored procedures and functions in a database.

As part of developing the tSQLt test case, we’ll create a test case for the stored procedure [Person].[GetAddressByCity], which returns address details for a given city.

Creating a Test Procedure by City

Use the CREATE PROCEDURE statement to create a test case. The procedure name must start with the word “test” and be created in an existing test class; otherwise, creating the test case is much like creating any other procedure. The following T-SQL script creates a test case named “TestGetAddressByCitySuccess” in the “TestPerson” test class:

Follow tSQL Script to Create Test Case
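Since the script itself appears only as a screenshot, here is a minimal sketch of what such a test case can look like. It assumes [Person].[GetAddressByCity] accepts a @City parameter and returns AddressLine1 and City; adjust the parameter name and column list to match the real procedure.

CREATE OR ALTER PROCEDURE TestPerson.[TestGetAddressByCitySuccess]
AS
BEGIN
    -- Arrange: replace the real table with an empty fake so no production data is touched
    EXEC tSQLt.FakeTable 'Person.Address';

    INSERT INTO Person.Address (AddressLine1, City)
    VALUES ('100 Main St', 'Bothell'),
           ('200 Oak Ave', 'Seattle');

    -- The rows we expect back for the city under test
    CREATE TABLE #Expected (AddressLine1 NVARCHAR(60), City NVARCHAR(30));
    INSERT INTO #Expected (AddressLine1, City)
    VALUES ('100 Main St', 'Bothell');

    -- Act: capture the procedure output in #Actual (same schema as #Expected)
    CREATE TABLE #Actual (AddressLine1 NVARCHAR(60), City NVARCHAR(30));
    INSERT INTO #Actual (AddressLine1, City)
    EXEC [Person].[GetAddressByCity] @City = 'Bothell';

    -- Assert: the two result sets must match
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
GO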

As you can see, we are using the CREATE OR ALTER PROCEDURE statement to create the test case. The critical part of the test case is the main body of the procedure definition, between BEGIN and END. First, we use tSQLt’s FakeTable stored procedure, which replaces the table referenced by the stored procedure under test with an empty fake table of the same name. This ensures the data in the actual table is never updated or deleted; any subsequent references we make in our test case to that table point to the fake table, not the real table in the database. Then we populate the fake table with test data.

In the test case, we use two temporary tables: #Expected, which stores the expected data, and #Actual, which stores the data returned when the stored procedure runs. Both temp tables share the same schema.

Finally, we use the tSQLt AssertEqualsTable stored procedure to compare the data in the #Actual table to the data in the #Expected table.

Run the test case using the tSQLt.Run stored procedure.
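For example, you can run a single test case or the whole class:

-- Run one test case...
EXEC tSQLt.Run 'TestPerson.TestGetAddressByCitySuccess';

-- ...or every test in the class
EXEC tSQLt.Run 'TestPerson';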

When we run the test case, it should pass and return the following results:

Final result for unit testing

Now you have the basics, which will allow you to go a long way with unit testing using tSQLt for SQL Server. You can refer to documentation by visiting the tSQLt User Guide.

While recently working on moving an FTP server (using passive FTP) from on-premises to Azure, I needed to expose the FTP server to the internet via an Azure External Load Balancer. A few things were not well documented, so I thought it would be good to put together a step-by-step guide on setting up the load balancer, configuring the FTP server on the backend, and setting the Network Security Group rules associated with the FTP server. In this example, the FTP server runs on IIS, and it is assumed that there is no Azure Firewall/NVA associated with the VNET hosting the FTP server. If your environment does have an Azure Firewall/NVA, you will need one additional Azure resource: a route table with a 0.0.0.0/0 route out to the internet, associated with the subnet the FTP server is hosted in. This is required to route the outbound traffic back out to the internet; if it is not done, the outbound FTP traffic will be dropped at the Azure Firewall/NVA.

Windows IIS FTP Server Configuration

I won’t be going into the details of the FTP server configuration here; I will just cover the FTP firewall settings and how to set the Data Channel Port Range (the passive FTP ports). Below is how the FTP server is configured to support the external load balancer. On the IIS server settings, FTP Firewall Support is where you define the Data Channel Port Range.

FTP Firewall Support

I have defined ports 5000-5002.

Define Ports

At the site level, you configure the public IP address of the external load balancer.

FTP Test

As you can see in the screenshot below, the public IP of the external load balancer is configured.

FTP Firewall Support

Azure External Load Balancer

Configuring an Azure External Load Balancer for FTP is straightforward. One thing to note with passive FTP is that all the data channel ports must be defined in the load balancing rules, and Azure External Load Balancers do not support port ranges: if you define ports 5000-5100, you will need a separate load balancing rule for each of those data channel ports. I recommend keeping the number of passive ports to a minimum.

Requirements:

  • Azure Public IP – this will be configured as the load balancer’s front-end IP
  • Azure Load Balancer
  • FTP Port requirements for inbound traffic and public IP address(es) of the client(s) that will be accessing the FTP server

Deploy the Load Balancer

Search for Azure Load Balancer in the search bar in the Azure Portal

Search for Azure Load Balancer
Select Create

Creating a Load Balancer in Azure

Define the following parameters then select Next: Frontend IP configuration

  • Resource Group
  • Name
  • Region
  • Leave SKU: Standard
  • Leave Type: Public
  • Tier: Region

Project Details Load Balancing

Select Add a frontend IP Configuration, define the following parameters

  • Frontend IP Name
  • IP version: leave IPv4
  • IP type: leave IP address
  • Public IP address
  • Gateway Load balancer: leave None

Select Add

Name Load Balancer and Public IP Address

Select Review + create – we will go over the configuration of the remaining items of the Load Balancer after it has been deployed.

Add Backend Pool

Configuration of the Azure External Load Balancer

I will now go over the configuration of the Azure Load Balancer. This will detail how the Backend pool, Health probes, Load balancing rules, and Outbound rules are configured.

Configuration in the Azure External Load Balancer

Frontend IP Configuration

As you can see in the screenshot below, the frontend IP is defined along with the rules associated with it. After additional load balancing rules are added, I will review the frontend configuration again.

Frontend IP configuration

Backend Pools

The backend pool configuration is how you associate a virtual machine with the load balancer. In the screenshot below, I use the following configuration:

  • Virtual network – the VNET that the target virtual machine is associated with
  • Backend Pool Configuration: NIC
  • IP Version: IPv4
  • Add the virtual machine by selecting the +Add button
  • Select Save

Add virtual machines to backend pools

Add Virtual Network

Health Probes

I have created an FTP Health probe for port 21. I will also be using this health probe for my FTP data channel ports. You can make a health probe for each data channel port.

  • Name: FTP
  • Protocol: TCP
  • Port: 21
  • Interval: 5
  • Unhealthy threshold:2
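If you prefer to script the probe instead of clicking through the portal, a minimal Az PowerShell sketch (reusing the hypothetical rg-ftp and lb-ftp names from the rule script shown later in this post) looks like this:

$lb = Get-AzLoadBalancer -Name "lb-ftp" -ResourceGroupName "rg-ftp"
$lb | Add-AzLoadBalancerProbeConfig -Name "FTP" -Protocol Tcp -Port 21 `
    -IntervalInSeconds 5 -ProbeCount 2 | Set-AzLoadBalancer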

Use Health Probe for FTP Data

Load Balancing Rules

I have the port 21 load balancing rule already configured but need to add the FTP data channel ports that I have defined in FTP Firewall (5000-5002).

Define the following parameters:

  • Name: I like to give it the name of the port number, in this example, I will use 5000
  • IP Version: IPv4
  • Frontend IP address: Select your Frontend IP Config – FEIP
  • Backend Pool: Select your backend pool – BEP-01
  • Protocol: TCP
  • Port: 5000
  • Backend Port: 5000
  • Health Probe: ftp (TCP:21)
  • Session persistence: None
  • Idle timeout (minutes): 4
  • TCP reset: Disabled
  • Floating IP: Disabled
  • Outbound source network address translation (SNAT): (Recommended) Use outbound rules to provide backend pool members access to the internet
  • Select Add
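Adding these rules one at a time in the portal gets tedious, so you can also script them. Below is a minimal Az PowerShell sketch; the resource group (rg-ftp) and load balancer (lb-ftp) names are hypothetical, the FEIP, BEP-01, and FTP names match the configuration above, and the existing port 21 rule is assumed to already be in place:

# Look up the existing frontend, backend pool, and health probe by name
$lb    = Get-AzLoadBalancer -Name "lb-ftp" -ResourceGroupName "rg-ftp"
$feip  = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "FEIP"
$pool  = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "BEP-01"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "FTP"

# One load balancing rule per passive data channel port
foreach ($port in 5000..5002) {
    $lb = $lb | Add-AzLoadBalancerRuleConfig -Name "$port" `
        -FrontendIpConfiguration $feip -BackendAddressPool $pool -Probe $probe `
        -Protocol Tcp -FrontendPort $port -BackendPort $port `
        -IdleTimeoutInMinutes 4 -DisableOutboundSNAT
}

# Persist the changes to Azure
$lb | Set-AzLoadBalancer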

Here is the full set of Load Balancing rules:

Load Balancing Rules

How to Add a Load Balancing Rule

Outbound Rules

An outbound rule is required so that the backend resource can reach out to the internet.

Create an outbound rule for back-end resource

Network Security Group Rules

The final step in configuring this solution is configuring the inbound rules for the NSG. The rule should allow TCP port 21 and the data channel ports defined in your FTP firewall (in my case, 5000-5002).
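If you want to script the NSG rule, here is a minimal Az PowerShell sketch; the NSG name, priority, and the client address 203.0.113.10 are placeholders for your own values:

$nsg = Get-AzNetworkSecurityGroup -Name "nsg-ftp" -ResourceGroupName "rg-ftp"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-FTP-Inbound" -Priority 200 `
    -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix "203.0.113.10" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange @("21", "5000-5002") |
    Set-AzNetworkSecurityGroup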

External IP client accessing the FTP server

Conclusion

Passive FTP via an Azure Load Balancer combined with a Network Security Group is a great way to expose your FTP server to the internet securely. Understanding the maximum number of simultaneous connections will help you determine the minimum number of passive data channel ports to configure on the Azure External Load Balancer.

This blog will explain how a short-staffed team overcame security issues in a critical legacy application by combining modern authentication with legacy MVC frameworks. This allowed the team to upgrade the application to meet enterprise security standards without disrupting the users.

Background

I managed an in-house application, ERMS, developed a decade ago with MVC and forms authentication. The forms authentication used custom user management backed by a SQL Server database. A skilled developer created a productive application that served the needs of HR, managers, and employees. ERMS was developed for AIS in-house use and had a lower priority than customer engagements. Over the years, it has been stable, requiring few changes; when changes were required, we had to juggle resources to accomplish them. Figure 1 shows the application architecture before the change.

Upgrading Legacy MVC Forms Authentication to Azure AD

Challenge

The application was developed before today’s standard security practices, and over time it became non-compliant. ERMS needed to be upgraded from legacy forms authentication to Azure AD authentication. This required sweeping changes to the way users logged in to the application, which would be a significant undertaking. The solution was not technically challenging, but it had to be done with minimal downtime and minimal resources. In addition, ERMS uses custom roles that do not map to Active Directory roles.

Solution

We considered several ways to solve this problem, as outlined below.

Upgrade Authentication and Authorization

The first option was to remove forms authentication and custom role management and use Active Directory for both, as shown in Figure 2. Equivalent AD roles would have to be created, and the code at various layers would need to be updated to refer to the corresponding AD roles. This was not a viable option, as it was risky and required many changes.

Removing forms authentication

Upgrade Authentication and Use Legacy Authorization

Figure 3 shows another approach we explored: retain the existing role management and use Azure AD for authentication. This was a sensible fallback in the given context, but it did not work because the User Principal in the HTTP request context was not always set, which would cause authentication to break.

Retain the existing role management

We learned two things from the failed trials. First, we should attempt to upgrade only the authentication and not touch the custom role management. Second, integrating Active Directory with ERMS’s custom role management would be a resource-heavy effort.

Using a Connector for Authentication

The solution that worked was to have a lightweight authenticator app that the ERMS application consumes to validate the users, as shown in the high-level flow in Figure 5. This authenticator service app would validate against Azure AD, and role management would stay the same. Figure 4 shows the solution overview.

Complete Solution Overview
High Level Flow

The Right Approach

An independent authentication connector service and keeping the existing role management are the keys to the solution. This is a dependable approach if you are looking for a quick turnaround with minimal coding and resources.

Challenges with Public Cloud

One of the oldest problems facing cloud infrastructure services has been access control. Ensuring that resources can be accessed by users and services that need to access them and not by anything else has been problematic when those services are delivered through the public internet. Even as late as last year, cybersecurity data firm BinaryEdge.io listed over 35,000 publicly accessible and unsecured databases, most of which were cloud-hosted. That’s over 35,000 data breaches that have likely already happened because of misconfigured cloud resources. In short, the problem with the public cloud is that most resources shouldn’t be public.

Service Endpoints to the Rescue

Azure’s first step in service access control was the service endpoint. A service endpoint allows virtual networks to use private IP addresses to route traffic to an Azure service’s public IP address. But more importantly, it also allows you to deny traffic to the service unless it comes from designated subnets. This effectively denies all traffic coming in from the public internet, making your Azure service accessible only to other Azure services and to selectively whitelisted IP addresses.
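To make this concrete, here is a minimal Az PowerShell sketch that enables a Microsoft.Storage service endpoint on a subnet; the resource names and address prefix are hypothetical, and you would still configure the storage account’s own firewall to allow that subnet and deny public traffic:

# Add the Microsoft.Storage service endpoint to an existing subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-app" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" |
    Set-AzVirtualNetwork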

This approach has some limitations, potentially the biggest being that your service still has a public endpoint. It has a firewall in front of it, but if you only want other Azure services to connect to your private database, why should that database server accept traffic on its public IP address at all? Availability is another issue: some Azure services, such as Azure Kubernetes Service, don’t support service endpoints. The network traffic pattern is also inefficient, since Azure services using service endpoints still interact through public IP addresses. Finally, you have the issue of scope. Service endpoints are scoped at the subnet level, meaning any resource in that subnet has access to the service. This leaves you with either a high degree of subnet granularity, an administrative burden, or simply ignoring the access issue and hoping nothing happens. And while storage accounts have a third option, service endpoint policies, those only apply to storage accounts.

Private Endpoints: The Next Step

It’s not that Service Endpoints were a bad idea or were poorly implemented. Rather, the needs and usage of the public cloud have evolved rapidly. In response to evolving needs, Microsoft introduced another way to control connections to a service. Private Endpoints allow services to be reached on their private IP addresses, with Private Link making connections between private IP addresses possible. And that doesn’t just mean private IP addresses in Azure: with the correct Domain Name System (DNS) configuration, you can connect to your Azure instance from your on-prem network without leaving your firewall or using the public internet.

To that end, there is no public IP address for a private endpoint. An Azure service can have both a private endpoint and a public endpoint. Creating a private endpoint does not automatically deny traffic over the public endpoint. However, if you take the extra step of denying all publicly routed traffic, you have isolated your Azure service far more effectively than if you had used a service endpoint. We can take this private connectivity a step further. It’s possible to integrate a private endpoint with a private DNS zone in Azure, resulting in private endpoints that can accept traffic from on-premises networks without routing through the internet. It’s as if the Azure service was part of the on-premises network. Finally, private endpoints solve our scoping and availability problems. Private endpoints are available on more services than service endpoints. For a full list, reference Microsoft documentation on Private Endpoint Availability and Service Endpoint Availability. Private endpoints can be scoped to a specific resource, such as a storage account, or even a sub-resource, such as specific blobs or tables in the storage account. Because Azure Active Directory can govern access to the private endpoint, this offers a very granular Role-Based Access Control (RBAC) for Azure resources.
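As an illustration, here is a minimal Az PowerShell sketch that creates a private endpoint scoped to the blob sub-resource of a storage account; every resource name and the region are hypothetical:

# Resolve the storage account and the subnet that will host the private endpoint
$storage = Get-AzStorageAccount -ResourceGroupName "rg-demo" -Name "stdemodata"
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
$subnet  = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-private"

# Scope the connection to just the blob sub-resource of the storage account
$connection = New-AzPrivateLinkServiceConnection -Name "pe-conn-blob" `
    -PrivateLinkServiceId $storage.Id -GroupId "blob"

New-AzPrivateEndpoint -ResourceGroupName "rg-demo" -Name "pe-stdemodata-blob" `
    -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $connection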

Considerations

Private endpoints aren’t all sunshine and roses, however. They do come with some significant downsides to consider. First, they aren’t free: private endpoints are charged by usage, while service endpoints are free. Second, private endpoints require more setup than service endpoints because you need to set up and configure DNS. Azure services have Fully Qualified Domain Names (FQDNs) that resolve to public IP addresses, so you must configure DNS so that the FQDN resolves to the private IP of the private endpoint instead. In the end, it takes more to use private endpoints, but you get more when using them.

Putting It All Together

Private Endpoints involve a fair number of steps and a good bit of supporting infrastructure. Because of that, a practical implementation might help visualize how everything fits together. Fortunately, Microsoft has provided a thorough set of tutorials and quickstarts to help you get your hands dirty. If you try these, and I encourage you to do so, please remember to keep an eye on how services are charged and tear down your work so that you don’t receive a surprise bill.

Conclusion

Private Endpoints are an evolution of Azure infrastructure. The upfront configuration effort and ongoing service billing for use means that you should carefully consider whether your organization needs them. For example, if you need to block all internet traffic to a service while making services available to on-premises traffic, or if you need to secure specific sub-resources in your virtual network, Azure now offers that capability through private endpoints.

Point-to-Site Virtual Private Network (VPN) connections are helpful when you want to connect to your VNet from a remote location. They let us securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. This blog will outline the steps to create and test a Point-to-Site VPN using the Azure certificate authentication method.

Create a VNet

Sign in to the Azure portal.
In Search, type Virtual Network.

Creating virtual network

Select Virtual Network from the Marketplace results.

Virtual Network

Once you select Create, the Create virtual network page will open.
On the Basics tab, configure Project details and Instance details VNet settings.

Virtual Network Creation

SLVNET

Create the VPN Gateway

A VPN gateway is a specific type of virtual network gateway used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. Each virtual network can have only one VPN gateway. The virtual network gateway uses a specific subnet called the gateway subnet. The gateway subnet is part of the virtual network IP address range you specify when configuring your virtual network. It contains the IP addresses that the virtual network gateway resources and services use.

Virtual network gateway

On the Basics tab, fill in the values for Project details and Instance details.

Completing VPN basics

Public IP Address input

Note: Deployment of the virtual network gateway may take up to 45 minutes.

VirtualNGateway

Generating Certificates

Azure uses certificates to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then considered ‘trusted’ by Azure to connect P2S to the virtual network. You also generate client certificates from the trusted root certificate and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.

Generate a Root Certificate

Use either a root certificate generated with an enterprise solution (recommended) or generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the private key) as a Base64 encoded X.509 .cer file. Then, upload the public certificate data to the Azure server.

Open PowerShell as an Administrator and run the following script.

$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=SLP2SRootCert" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

Admin Windows PowerShell

This will create a root cert and install it under the current user cert store.

Generating Client Certificates from Root Certificate

Open PowerShell as an Administrator and run the following command:

Get-ChildItem -Path "Cert:\CurrentUser\My"

This should provide a thumbprint:

PowerShell thumbprint

Next, run the following command. The thumbprint should match your certificate.

$cert = Get-ChildItem -Path "Cert:\CurrentUser\My\B1C79D177D465E76FF74243F7553EA4837FD137B"

Thumbprint to match certificate

Finally, you’ll need to run this to generate your client certificate.

New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=SLP2SClientCert" -KeyExportPolicy Exportable -NotAfter (Get-Date).AddYears(1) `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" `
-Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

Run and generate certificate

We now have the certificates in place, but we need to export the root certificate so it can be uploaded to Azure.
First, export the root certificate’s public key (.cer).

Hit the Windows Key + “R” to bring up the Run dialog box and type in “certmgr.msc”. When the management console opens, you should see your newly created certificates in “Current User\Personal\Certificates”. Right-click the root certificate and go to All Tasks > Export.

Export certificates

In the Wizard, click Next.

Export Wizard

Select No, do not export the private key, and then click Next.

Do not export private key

On the Export File Format page, select Base-64 encoded X.509 (.CER), and then click Next.

Pick file format

For File to Export, Browse to the location to which you want to export the certificate. Specify your file name.  Then, click Next.

Name File to export

Click Finish to export the certificate. Your certificate is successfully exported!
The exported certificate looks similar to this:

Exported Certificate

If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to Azure. If you open your certificate with Notepad and it does not look similar to this, typically, this means you did not export it using the Base-64 encoded X.509(.CER) format. Additionally, if you want to use a different text editor, some editors can introduce unintended formatting in the background. This can create problems when uploading the text from this certificate to Azure.

Open Certificate in notepad

Configure Point to Site Connection

  • The next step of this configuration is to configure the point-to-site connection. Here we will also define the client IP address pool for VPN clients.
  • Click on the newly created VPN gateway connection.
  • Then in a new window, click on Point-to-site configuration
  • Click on Configure Now
  • In the new window, type the IP address range for the VPN address pool. We will be using 20.20.20.0/24. For tunnel type, use both SSTP & IKEv2. Linux and other mobile clients use IKEv2 by default to connect; Windows clients try IKEv2 first and then fall back to SSTP. For authentication type, use Azure Certificates.
  • In the same window, there is a place to define a root certificate. Under root certificate name, type the cert name, and under public certificate data, paste the root certificate data (you can open the cert in Notepad to get the data).
  • Then click on Save to complete the process.
  • Note: when you paste the certificate data, do not copy the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.
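If you would rather script this step than paste into the portal, the upload can also be done with Az PowerShell. This minimal sketch pulls the Base64 public key straight from the certificate store, so the BEGIN/END lines never enter the picture; the resource group and gateway names are placeholders:

# Grab the root certificate created earlier and convert its public key to Base64
$rootCert = Get-ChildItem -Path "Cert:\CurrentUser\My" |
    Where-Object { $_.Subject -eq "CN=SLP2SRootCert" } |
    Select-Object -First 1
$publicCertData = [System.Convert]::ToBase64String($rootCert.RawData)

# Upload it to the virtual network gateway as a trusted root certificate
Add-AzVpnClientRootCertificate -ResourceGroupName "rg-vpn" `
    -VirtualNetworkGatewayName "vng-p2s" `
    -VpnClientRootCertificateName "SLP2SRootCert" `
    -PublicCertData $publicCertData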

Point to Site configuration

Testing VPN Connection

Log in to the Azure portal from the client machine and go to the VPN gateway configuration page.
Click on Point-to-site configuration.
Next, click on Download VPN client.

Download VPN client

We can see a new connection under the Windows 10 VPN page.

New VPN connection

Click on the VPN connection; a new window will open. Click Connect.

Connect new VPN

Adding a VPN Connection

Run ipconfig to verify the IP allocation from the VPN address pool.

Run ipconfig to verify IP location

Congratulations! You’ve successfully configured a Point-to-Site VPN connection using Azure certificate authentication.

AIS has been working with Azure since 2008. Interested in learning more? Reach out to AIS today.

Creating Self Documenting Azure Functions with C# and OpenAPI: Part Three

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is essential: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You know software development fundamentals
  • You are comfortable with command-line interfaces
  • You have completed Part Two of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and enables API clients to generate client code to interact with your APIs automatically. It also helps with shelf life if, six months or a year down the road, we decide a different approach is better.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

This is the final post in this series.

We Need to Lock it Down

In the previous article, we increased the quality of our OpenAPI spec by adding various C# attributes and XML comments to the mix. This resulted in a very useful and informative OpenAPI spec being generated.

Now, it turns out that our Security Team alerted us that some folks in the Warehouse were using their knowledge and access to the “Create Order” API to generate fake orders for themselves. This is a problem, and they have now updated the Security Procedures to require restricting people to the API calls they are supposed to use.

Currently, we have the following functions and departments that need to access them:

Function              Description                                         Shopping Department   Warehouse
Create Order          Creates an order to send to the Warehouse           X
Order Shipped         Shipment update from the Warehouse to the System                          X
Get Shipping Status   Gets the current shipping status of an order        X                     X

We have two Clients (Shopping Dept and Warehouse) that each need access to two functions.
We need to separate these things into two groups.

Put Functions In Groups

Swashbuckle supports putting things in groups by using the ApiExplorerSettings attribute from the Microsoft.AspNetCore.Mvc namespace. We apply this attribute to each of the functions like this:

Unfortunately, since you can’t apply more than one ApiExplorerSettings attribute to a function, we will need three groupings, “Warehouse,” “Shopping,” and “Shared,” to handle the method that is shared between the two. We’ll include the “Shared” method in all generated Swagger documents.

[ApiExplorerSettings(GroupName = "Warehouse")]
[FunctionName("OrderShipped")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shopping")]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shared")]
[FunctionName("OrderShippingStatus")]
public async Task<OrderShippingInfo> Run(
//...

By itself, putting them into these groups will not separate things into separate documents for you. It will just add a group name to the API method. For example, in the UI this renders like this:

Swagger UI showing groups for APIs

Create Separate API Specs

To create separate specs, you need to configure Swashbuckle to generate multiple documents and show it how to divide up the methods.

Configure the documents

Back in Startup.cs, we update the configuration with this:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts =>
{
  // incorporate the XML documentation
  opts.XmlPath = "Bmazon.xml";

  // set up an "Everything" document and 2 documents with the 
  // same names as the group names used in the code
  opts.Documents = new SwaggerDocument[] {
    new SwaggerDocument()
    {
      Name = "Everything",
      Title = "Bmazon Shopping API",
      Description = "All APIs",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Shopping",
      Title = "Bmazon Shopping API",
      Description = "API for the Shopping Department",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Warehouse",
      Title = "Bmazon Warehouse API",
      Description = "API for the Bmazon Warehouse",
      Version = "1.0"
    }
  };
  //...

We now have one “Everything” document that we’ll use as a default and two others that will be used for their respective clients.

Let’s configure Swashbuckle so it knows which APIs to put in which documents.

Update the OpenAPI Functions to support individual API Specs

In that same method in the Startup, we also need to add the following:

opts.ConfigureSwaggerGen = genOpts =>
{
  // configure the separate document inclusion logic
  genOpts.DocInclusionPredicate((docName, apiDesc) =>
  {
    // generating the "everything" doc? then include this method
    if (docName == "Everything")
      return true;

    if (!apiDesc.TryGetMethodInfo(out MethodInfo methodInfo))
      return false;

    // get the value of the [ApiExplorerSettings(GroupName= "foo")]
    var attr = methodInfo.GetCustomAttributes(true)
      .OfType<ApiExplorerSettingsAttribute>().FirstOrDefault();

    var groupName = attr?.GroupName;

    // always return it if it's shared. Otherwise compare doc names
    return groupName == "Shared" || groupName == docName;
  });
};

Add Function Support For Selecting A Group

To allow the clients to select a specific group, we need to modify the JSON and UI OpenAPI functions to support selecting a group.

To do this, we add a new parameter to the JSON and UI Functions called “group” (defaulting to “Everything”)

/// <summary>
/// function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the document to get (default: "Everything")</param>
/// <returns>the JSON data as an http response</returns>
[SwaggerIgnore]
[FunctionName(nameof(OpenApiJson))]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/json/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  return Task.FromResult(swashbuckleClient
    .CreateSwaggerJsonDocumentResponse(req, group ?? "Everything"));
}

/// <summary>
/// the function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the optional document from the URL (default: "Everything")</param>
/// <returns>the HTML page as an http response</returns>
[SwaggerIgnore]
[FunctionName(nameof(OpenApiUi))]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/ui/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  // the CreateSwaggerUIResponse method generates the HTTP page from the JSON Function results
  return Task.FromResult(swashbuckleClient.CreateSwaggerUIResponse(
    req, $"openapi/json/{group ?? "Everything"}"));
}

Now, when you run the functions, you will have the option to have separate API specs for each Client by appending the document name to the URL, like “http://localhost:7071/api/openapi/ui/Shopping”, so that they will only know about the APIs they can call. To further lock this down, you can add authorization to the specific endpoints at a later time, possibly with Azure API Management.

Swagger UI showing the shopping APIs

Swagger UI showing the Warehouse APIs

In the future, rather than using security through obscurity, you can import these separate OpenAPI JSON files into Azure API Management to lock down the individual APIs by client, but we’ll leave that as an exercise for the reader.

Conclusion

Now that you have gone through these three articles, you have self-documenting APIs separated into different groupings that you can expose to individual clients. All you need to do is properly comment your code and decorate the Functions with the proper Attributes and you and your clients will be very satisfied.

Get the completed code from GitHub

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time and your code — and less about boilerplate code, infrastructure and processes.

When creating new APIs in any technology, one thing is very important: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You have knowledge of software development fundamentals
  • You have completed Part One of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and even enables API clients to automatically generate client code to interact with your APIs. This also helps with shelf life if, 6 months or a year down the road, we decide a different approach is better.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

We Need Better Documentation

In the Previous Post, we got the Azure Functions application to start generating OpenAPI docs for our functions, but the results were somewhat underwhelming:

Swagger UI page for the CreateOrder operation showing very little detail

Here you can see that, even though the CreateOrder call takes an Order object in the body of the HTTP Post, there is no documentation describing this. This is because, unlike when writing traditional dotnet core APIs, the order is not a parameter to the function. Swashbuckle only has access to the function signature and anything that can be discovered through reflection.

This output is not very helpful to our clients. They need to know our inputs, potential HTTP Codes to expect (it just assumes that it will return a 200), and other pertinent information like what the method does and what the return data will look like.

For instance, if we add data validation to this method, we may wind up returning a 400 (Bad Request). We could also possibly return a 409 (Conflict) if the order already exists.

Since you’re reading this, you know there is a way to do this. Let’s get to it.

Give Swashbuckle More Info

In order for the OpenAPI documentation to be much better, we need to add a few things that Swashbuckle will be able to use to generate the docs.

As I stated previously, Swashbuckle only has access to things that can be discovered through reflection, which means the definition of your function, its parameters and any attributes decorating it, so the following translates to very little information.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    HttpRequestMessage req,
    ILogger log)

This doesn’t even have any information about the Order type that is expected in the body of the method, never mind return codes.

Expected Body Type

To document the type expected in the body of the POST, we need to tell Swashbuckle what to expect. We do this by using the RequestBodyType attribute from the AzureFunctions.Extensions.Swashbuckle.Attribute namespace.

Note that this is an additional attribute on the req parameter on top of the existing HttpTrigger attribute.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    [RequestBodyType(typeof(Order), "The Order To Create")] // Describes the Body
    HttpRequestMessage req,
    ILogger log)

With this in place, Swashbuckle knows what type the body contains and we now see that the body type is present in the UI:

The Create Order Swagger UI with body type specified

If you click on the “Schema” link, you will even see the data type names being used:

Create Order Swagger UI showing Schema of the Order type

Note that the Items array is marked nullable: true which is not desirable. We will address that below in the Data Annotations section.

The bottom of the page also shows you all the current objects in the Schema that are known:

Swagger UI with the all Schemas

This information documents all the details about the DTOs being used in this API. But we need to fix the nullability and other validation-related information.

Add Data Annotations

Above, the Order‘s Items collection was marked as nullable. We want to fix that and other validation information that Swashbuckle can read. To do that, we need to add Data Annotations to the object definitions.

Currently, the Order looks like this:

public class Order
{
  public int OrderId { get; set; }
  public IEnumerable<OrderLineItem> Items { get; set; }
}

In order to tell Swashbuckle (and our clients) that the Items collection is required, we have to mark it [Required] (from the System.ComponentModel.DataAnnotations namespace) and [NotNull] (from System.Diagnostics.CodeAnalysis).

The NotNull attribute is also needed because OpenAPI, not being language-specific, supports the concept of null along with the lack of presence of the variable. JavaScript developers will relate to this concept with undefined and null keywords.

So, in order to tell clients that fields MUST have a value, you need to add both attributes to the Items property.

  public class Order
  {
    public int OrderId { get; set; }

    [Required, NotNull]
    public IEnumerable<OrderLineItem> Items { get; set; }
  }

The results:

Create Order Swagger UI with corrected Required fields

Note the red "*", meaning required, next to the items collection and the lack of the nullable:true.

To properly annotate the objects, we’ll mark the ID, Quantity, and SKU as required as well. Additionally, we’ll add reasonable [Range] and other appropriate restrictions:

public class Order
{
  // if we get to 2 billion orders, we'll all be retired anyway
  [Required, NotNull, Range(1, int.MaxValue)]
  public int OrderId { get; set; }

  [Required, NotNull, MinLength(1)]
  public IEnumerable<OrderLineItem> Items { get; set; }
}

public class OrderLineItem
{
  [Required, NotNull, Range(1, 1000)]
  public int Quantity { get; set; }

  [Required, NotNull, MinLength(1)]
  public string SKU { get; set; }
}

So, the final schema looks like this:
Create Order Swagger UI with Full data annotations

Now your clients know the simple data validations for these objects, but don’t know what the return payloads and HTTP codes are.

Potential Return Types and Codes

By default, Swashbuckle will tell the clients to expect a 200 (Success) HTTP result with no payload.

Swagger UI showing only a 200 response

This doesn’t include any information about any payload sent back to the user and is most likely incorrect or at least not the whole story.

If we know our Function is going to return multiple HTTP codes with different payloads, we need to tell Swashbuckle by using the [ProducesResponseType] attribute on the Function itself.

Assuming we return the following:

  • 200/Success with a string message payload
  • 400/BadRequest with a collection of error messages

We decorate our function like this:

[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(

This results in

Swagger UI - multiple response types with http codes

So, we’ve now exposed the input and output types, but we haven’t been able to add any additional information to describe objects or fields to our clients. To do that, we need to add XML comments to the output as well.

To make this information even better, we can comment our code properly. Of course, you were already doing that, right? RIGHT?

Better Comments in the OpenAPI Spec

One thing that you may notice is that, at the top of the function, there is very little information about the method except the name (e.g. “CreateOrder”). We should add a summary about the method.

Now, I need to apologize because I lied to you. Previously, when I said “Swashbuckle only has access to things that can be discovered through reflection”, I was lying (Forgive me!). To give client devs more information about the methods being exposed by an API, we can add C# XML Documentation information to the code and, if configured for it, Swashbuckle will incorporate that too, which can be invaluable.

Add XML Comments

We now add comments like this to our C# code (Functions and DTOs)

/// <summary>
/// Creates an Order that will be shipped to the Warehouse for fulfillment.
/// </summary>
/// <param name="req">the HTTP request</param>
/// <param name="log">the logger</param>
/// <returns>a success message or a collection of error messages</returns>
/// <response code="200">
///   Indicates success and returns a user-friendly message
/// </response>
/// <response code="400">
///   Indicates a data validation issue and will return a list of data validation errors
/// </response>
[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run() {
// ...
}

/// <summary>
/// An Order sent from the Shipping Division to be sent to the Warehouse
/// </summary>
public class Order
{
//...
}

The Swagger UI won’t have changed yet. In order for the Swashbuckle library to read this information, you need to tell the C# compiler to generate the documentation in an XML file and tell Swashbuckle about it.

Generate XML Doc file

At the top of the csproj file, add the following line to the first <PropertyGroup> you see.

<DocumentationFile>Bmazon.xml</DocumentationFile>

If you are using Visual Studio, you can access this setting from the Build tab in the Project settings.

Now tell Swashbuckle about the XML file

Currently, we’re configuring Swashbuckle in the StartUp.cs file with:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly());

Replace this with

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts => {
  opts.XmlPath = "Bmazon.xml";
});

Now, when you rerun the app, the final result will be a page with the new title and the order schema will have much more detail.

Swagger UI showing Create Order Scheme with XML comment information

The users of your service will thank you for documenting your APIs this thoroughly. Additionally, they won’t have to ask you questions about the details about how to use the APIs. They can even generate client code with various tools as well.

Get the completed code from GitHub

Next Steps

Now that you have really descriptive documentation for your APIs being automatically generated, your security folks may not like you sending all your API documentation to every single client, regardless of need.

In part three of the series, I will show you how to separate the APIs out into separate groupings and keep your clients all in their own lane.

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is very important: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog series assumes the following:

  • You are familiar with C#.
  • You have knowledge of software development fundamentals.
  • You are comfortable with command-line interfaces.

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and even enables API clients to automatically generate client code to interact with your APIs. This also helps with shelf life if, 6 months or a year down the road, we decide a different approach is better.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

Creating the App

To create the app, we will start with the Azure Functions Core Tools. At the time of this writing, the current version of this library is 3.0.3477

NOTE: This version uses dotnet cli version 3.1 internally, so if the dotnet executable in your path is not that version, it could cause you issues. If you run into errors, this may be fixed by adding a global.json file in the current directory with the following content, which will tell the dotnet cli to use whatever 3.1.x version you have installed.

{
  "sdk": {
    "version": "3.1.0",
    "rollForward": "latestMinor"
  }
}

At the PowerShell prompt, we’ll run the following to create our project

C:\dev> func --version
3.0.3477
C:\dev> func init Bmazon --worker-runtime dotnet

Writing C:\dev\Bmazon\.vscode\extensions.json

This will create the shell of a project inside the C:\dev\Bmazon folder.

While creating the app, I’ve copied in an OrderService and the related DTOs from the existing application we’re modernizing, to be used by the new functions we are creating. You can see the completed code on GitHub. You’ll see a bit more of them in the next article.

Learn more about Azure Functions From Microsoft Docs

Add Functions

We’re going to add 3 different functions to our app.

Shopping API

The Shopping division needs to call HTTP APIs to make an order to the warehouse, so we will add a CreateOrder function that performs this action.

(This can be done interactively by running func new and following prompts, but using the command line parameters is more concise.)

C:\dev\Bmazon> func new --template HttpTrigger --name CreateOrder `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: CreateOrder

The function "CreateOrder" was created successfully from the 
"HTTPTrigger" template.

Strangely, it outputs a prompt to select the template even when you have passed in the selection as a parameter. You can ignore this.

Warehouse API

Later in our process, the Warehouse team needs to call an HTTP endpoint to send tracking information back to the Shopping division.

We will follow the pattern above and create an API for them to call.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShipped `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShipped

The function "OrderShipped" was created successfully from the 
"HTTPTrigger" template.

Shared APIs

Since both the Shopping and Warehouse divisions will need to check on the status of an order at various times, there will be a shared function to check status.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShippingStatus `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShippingStatus

The function "OrderShippingStatus" was created successfully from the 
"HTTPTrigger" template.

Code Cleanup

We’ll do a bit of code cleanup before moving on.

Choose GET or POST

If you look at the code, you’ll notice that, by default, the Functions were created supporting both GET and POST.

public async Task<IActionResult> Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
   HttpRequest req
...

We can fix that by changing the code on each function by removing either "get" or "post" appropriately (Typically you will have the first 2 operations be POSTs and the latter be GET).

Organizing the Code

The func calls above will create all the Function files in the top folder. We’ll move ours into a Functions folder to keep things cleaner. They all just happened to start with “O”, so we can be terse.

C:\dev\Bmazon> mkdir Functions; mv O*.cs Functions\

Add OpenAPI Document Generation

In order to add OpenAPI to Azure Functions, I chose to use the Swashbuckle library. There are a few other libraries out there to work with .Net and OpenAPI, but I chose Swashbuckle because I’m familiar with it.

Installing the Package

The core Swashbuckle project doesn’t support Azure Functions directly, so I used the AzureExtensions.Swashbuckle package, a nice extension written by Vitaly Bibikov.

To install it:

C:\dev\Bmazon> dotnet add package AzureExtensions.Swashbuckle

  Determining projects to restore...
  Writing C:\Users\XXX\AppData\Local\Temp\tmp69AA.tmp
info : Adding PackageReference for package 'AzureExtensions.Swashbuckle' into project 'C:\dev\Bmazon\Bmazon.csproj'.
info : Restoring packages for C:\dev\Bmazon\Bmazon.csproj...
...
...
info : Committing restore...
info : Generating MSBuild file C:\dev\Bmazon\obj\Bmazon.csproj.nuget.g.props.
info : Writing assets file to disk. Path: C:\dev\Bmazon\obj\project.assets.json
log  : Restored C:\dev\Bmazon\Bmazon.csproj (in 525 ms).

Setting up Swashbuckle

In order to configure Swashbuckle, your Functions App needs a Functions Startup class like the following, which we’ll put in Startup.cs in the Bmazon folder.

using System.Reflection;
using AzureFunctions.Extensions.Swashbuckle;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(Bmazon.Startup))]
namespace Bmazon
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      builder.AddSwashBuckle(Assembly.GetExecutingAssembly());
    }
  }
}

Exposing OpenAPI Endpoints

Your code will also need to expose the OpenAPI JSON and UI endpoints as HTTP-triggered Azure Functions so that client code can load them on demand.

(Adding them in a single OpenApi\OpenApiFunctions.cs file for now)

using System.Net.Http;
using System.Threading.Tasks;
using AzureFunctions.Extensions.Swashbuckle;
using AzureFunctions.Extensions.Swashbuckle.Attribute;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

namespace Bmazon.OpenApi
{
  public static class OpenApiFunctions
  {
    [SwaggerIgnore]
    [FunctionName("OpenApiJson")]
    public static Task<HttpResponseMessage> RunJson(
        [HttpTrigger(
            AuthorizationLevel.Anonymous, 
            "get", Route = "openapi/json")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerJsonDocumentResponse(req));
    }

    [SwaggerIgnore]
    [FunctionName("OpenApiUI")]
    public static Task<HttpResponseMessage> RunUi(
        [HttpTrigger(
            AuthorizationLevel.Anonymous, 
            "get", 
            Route = "openapi/ui")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      // CreateOpenApiUIResponse generates the HTML page from the JSON results
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerUIResponse(req, "openapi/json"));
    }
  }
}

This sets up two new Functions on the openapi/json and openapi/ui URLs to serve the JSON document and the Swagger UI, respectively. The [SwaggerIgnore] attribute causes Swashbuckle to exclude these two functions from the generated documentation.

Generate and View the API Documentation

NOTE: You must have the Azure Storage Emulator or Azurite RUNNING locally in order for this to work properly.

C:\dev\Bmazon> func start
Microsoft (R) Build Engine version 16.8.3+39993bd9d for .NET
Copyright (C) Microsoft Corporation. All rights reserved.

  Determining projects to restore...
  Restored C:\dev\Bmazon\Bmazon.csproj (in 840 ms).
  Bmazon -> C:\dev\Bmazon\bin\output\bin\Bmazon.dll

Build succeeded.

Time Elapsed 00:00:05.60

Azure Functions Core Tools
Core Tools Version:       3.0.3284 Commit hash: 98bc25e668274edd175a1647fe5a9bc4ffb6887d
Function Runtime Version: 3.0.15371.0

[2021-02-27T15:05:33.871Z] Found C:\dev\Bmazon\Bmazon.csproj. Using for user secrets file configuration.

Functions:

  CreateOrder: [POST] http://localhost:7071/api/order
  OpenApiJson: [GET] http://localhost:7071/api/openapi/json
  OpenApiUi: [GET] http://localhost:7071/api/openapi/ui
  OrderShipped: [POST] http://localhost:7071/api/order/shipment
  OrderShippingStatus: [GET] http://localhost:7071/api/order/shipment/{id}

For detailed output, run func with --verbose flag.
[2021-02-27T15:05:41.693Z] Host lock lease acquired by instance ID '000000000000000000000000016514FF'.

If you don’t see that last line after a few seconds, you probably don’t have the storage emulator running.

Take note of the list of functions shown with the URLs next to them, especially the ones starting with “OpenApi”.

If you visit the OpenApiUI URL listed above, you will see the following in your browser:

Rendered Swagger UI displaying the 3 created Operations

That’s it! You now have a modernized serverless architecture with APIs that document themselves!

If you add any new Functions, they will automatically show up here as well. Your clients can download the OpenAPI definition from the JSON endpoint and import it into Postman or a client generator.
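For instance, assuming curl is available on your machine, you could pull the generated document from the local URL shown in the func start output above (the output file name here is just an example):

C:\dev\Bmazon> curl http://localhost:7071/api/openapi/json -o bmazon-openapi.json

The resulting JSON file is what you would hand to Postman or a code generator.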

Get the completed code from GitHub

Next Steps

Now that you have self-documenting APIs, you may notice that the information in the Swagger UI is rather underwhelming. Due to the nature of Azure Functions, Swashbuckle can glean very little from the runtime type information that is available.

In part two of the series, I will show you how to make the documentation MUCH better.


Gain control with Azure Management Groups

As enterprises move to Azure, managing a growing number of subscriptions becomes tedious. An organization usually has many employees and, in some cases, many applications. If all of these employees are given Azure subscriptions and start creating Azure resources at will, it soon becomes difficult to control, manage, and track who creates what, and costs may go out of control. However, organizing your subscriptions using Azure Management Groups can make the job much easier.

What is an Azure Management Group?

Management groups are logical groups for Azure subscriptions, allowing you to organize subscriptions and apply governance controls, such as Azure Policy and Role-Based Access Controls (RBAC), to the management groups. All subscriptions within a management group automatically inherit the controls applied to the management group.
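As a rough sketch of how this looks in practice with the Azure CLI, you can create a management group and place a subscription under it; the group name and subscription ID below are placeholders, not values from this post:

az account management-group create --name Contoso --display-name "Contoso"
az account management-group subscription add --name Contoso --subscription "00000000-0000-0000-0000-000000000000"

Any Azure Policy assignment or RBAC role assignment made on the Contoso group is then inherited by that subscription.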


When do you need to organize your subscriptions?

As the number of subscriptions, departments, applications, and users grows, applying access controls and policies to each subscription individually becomes difficult to manage and track. That is when organizing subscriptions into management groups pays off: you assign controls once at the group level, and every subscription underneath inherits them.

Rules to consider when organizing

  • The following can be accomplished with Management Groups:
    • Group subscriptions according to your organizational model and assign controls once so they apply to every subscription in the group.
    • Create a flexible hierarchy that can be updated quickly and can easily scale up or down depending on the organization’s needs (a sketch of building such a hierarchy follows this list).
    • Use Azure Resource Manager to integrate with other Azure services like Policy, Cost Management, Blueprints, and Security Center.
  • A few questions to answer before you create a management group hierarchy:
    • What kinds of workloads are in each subscription?
    • Which environment (for example, development, test, or production) does each one serve?
    • Which departments or teams own them?
    • What security rules apply to these workloads?
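A minimal sketch of building such a hierarchy with the Azure CLI, using purely illustrative group names (Corp, Corp-Prod, Corp-NonProd) rather than anything from this post:

az account management-group create --name Corp --display-name "Corp"
az account management-group create --name Corp-Prod --display-name "Production" --parent Corp
az account management-group create --name Corp-NonProd --display-name "Non-Production" --parent Corp

Subscriptions for each environment or department can then be added under the matching child group, and any controls assigned at Corp flow down to both children.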

What is a Root Management Group?

Benefits of Azure Management Groups

  • Provides a scope higher than a subscription to manage access, policies, and compliance efficiently.
  • Aggregated views above the subscription level (see the command sketch after this list).
  • Inheritance allows a single assignment of controls to apply to an entire group of subscriptions.
  • Create a hierarchy of management groups that fits your organization.
  • Management groups can be scaled up or down as needed.
  • All subscriptions and management groups fold up into a single root management group within the directory.
  • A newly onboarded subscription is added to the root management group by default.
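To see that aggregated view from the command line, you can expand a group and its descendants (again using the illustrative Corp name from the earlier sketch):

az account management-group show --name Corp --expand --recurse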

Limitations of Azure Management Groups

  • A single directory can support up to 10,000 management groups.
  • The management group tree can support up to six levels of depth.
  • Each management group and subscription can have only one parent.
  • Each management group can have many children.
  • The root management group can’t be moved or deleted, unlike other management groups.

Role-Based Access (RBAC) in Management Groups

  • Applying RBAC roles at the Management Group scope gives you one central place to manage access instead of configuring each subscription individually (a sketch follows this list).
  • Azure roles applied to a Management Group are inherited by all child management groups and their subscriptions.
  • Classic administrator roles can’t be applied at the Management Group level; they must be applied at the subscription level.
  • It is not possible to define a custom RBAC role at a Management Group.
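A sketch of a role assignment at management group scope with the Azure CLI; the user and group name are placeholders:

az role assignment create --assignee "someone@contoso.com" --role "Reader" --scope "/providers/Microsoft.Management/managementGroups/Corp"

The Reader assignment is then inherited by every subscription under Corp.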

Azure Account Roles

Azure Blueprint to Management Group

Azure Blueprints is a feature that lets you define a package of artifacts (resource groups, Azure policies, role assignments, Resource Manager templates, and more) targeted at Management Groups and Azure subscriptions to create consistent, repeatable environments.

What is an Azure Blueprint?

Azure Policy at the Management Group Scope

  • Management groups are a convenient place for defining Azure Resource Manager (ARM) policies.
  • You can use Azure Policy, the service that lets you create, assign, and manage policies, at the Management Group scope (a sketch follows this list).
  • These policies are then applied to all resources under the management group.
  • This allows your organization to set up Azure environments with consistent security policies.
  • Policy violations can be seen either in the Azure Policy blade or in Azure Security Center.
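For example, a policy assignment at management group scope might look like the following sketch; the assignment name is made up, the Corp group is the illustrative one from earlier, and you would substitute a real built-in or custom policy definition name or ID:

az policy assignment create --name "enforce-corp-baseline" --display-name "Corp baseline policy" --policy "<policy definition name or ID>" --scope "/providers/Microsoft.Management/managementGroups/Corp"

Every resource in every subscription under Corp is then evaluated against that policy, and violations surface in the Azure Policy blade or Azure Security Center as noted above.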

Easing Subscription Management

This blog has explained the Azure Management Group concept and how you can use this service to make managing your subscriptions easier. Other services can be applied at the Management Group level and are then inherited at the subscription level. Consider using Management Groups when the number of subscriptions, departments, applications, and users grows.

Whether you have an Enterprise Agreement, Cloud Solution Provider (CSP), Pay-As-You-Go, or any other type of subscription, this service gives all Azure customers enterprise-grade management at scale for no additional cost.


Microsoft Azure Government DC is a meetup group created for anyone in the IT world who is modernizing government, bringing real-world lessons to innovators in government. AIS has supported and presented at these events since there were just five members; now the group is nearing 4,000. The July meetup focused on getting your agency to next-level cloud adoption with Microsoft Azure. Check out the recording and overview below.

Here’s What You Missed

Cloud adoption has come a long way over the years. We have gone from a basic “lift and shift” model to migrating priority workloads to the cloud and optimizing both for high-security workloads and for tapping into cloud-native services. If one thing is clear, hybrid capabilities are critical, and it is important to start thinking about the challenges now as legacy IT infrastructure moves to the cloud.

Two Microsoft Federal CTOs, Susie Adams (Sales) and Kent Cunningham (Civilian), talked about the changes Microsoft has been making to simplify the migration process. AIS Chief Technology Officer (CTO) Vishwas Lele moderated a panel discussion with our customer, Richard Inzunza from Immigration and Customs Enforcement (ICE), who provided excellent insights about cloud adoption from his own experience. The panel also included Jorge Gallardo, Program Manager from Peraton, who discussed his experience with regulated environments in the cloud.

Watch the Full Session:

Session Recap

Challenges when Migrating Workloads

Organizations can take their cloud adoption to the next level with Microsoft Azure by moving priority workloads to the cloud.
Microsoft breaks out the following focus areas based on maturity:

  • Identity and Security
  • DevOps and DevSecOps
  • Data
  • Cloud-Native
  • The Edge

When organizations migrate their workloads to the cloud but have a diverse IT estate, it poses challenges in managing security, controlling access to data, and understanding where that data lives. In response, Microsoft has brought together the tools and resources customers need to manage their workloads easily and simplify the migration process in a multi-cloud world.

Many customers come with questions on how to implement and manage Infrastructure as a Service (IaaS) while meeting regulatory requirements like FedRAMP and HIPAA. Microsoft has a portal full of Azure Blueprints that let organizations select and deploy a chosen environment inside their subscription. The goal? To simplify the deployment of these environments with peace of mind that they align with regulatory and compliance standards.

What Tools Can We Use to Simplify?

Mission Landing Zones are highly opinionated templates that IT oversight organizations can configure or customize to quickly create a cloud management system, which is then deployed on Azure environments for their teams. Built on Terraform, a Mission Landing Zone provides a secure, scalable landing zone for customer workloads. Mission Landing Zones enable expedited cloud adoption across Commercial, IL4, IL5, and IL6 environments.

Customers’ environments are becoming increasingly complex, and Azure helps organizations move to the cloud securely and efficiently by creating a single management experience for the entire environment. Microsoft recognizes that companies are struggling with multiple environments and is focused on giving them granular access control.

Azure Hybrid Benefit is a cost-savings benefit that lets you bring your existing on-premises Windows Server and SQL Server licenses with active Software Assurance or subscriptions to Azure.

Microsoft’s broader Azure hybrid portfolio consists of the following:

  • Azure Stack: A portfolio of products that extend Azure services and capabilities to your environment of choice – from datacenter to edge locations and remote offices.
  • Azure Arc: Enables you to manage your entire environment, with a single pane of glass, by projecting your existing non-Azure, on-premises, or other cloud resources into Azure Resource Manager.
  • Azure Internet of Things (IoT): A collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets hosted in the cloud.
  • Azure Lighthouse: Facilitates a secure relationship between companies and their managed service providers while providing on-demand transparency into vendor access and actions.

Implementing Capabilities

With the release of the Cybersecurity Executive Order in May 2021, Microsoft is developing new ways to support and deploy these capabilities while meeting security and compliance standards.

  • Enable security modernization: Help empower security teams to combat the increase in sophisticated attacks.
  • Increase collaboration: Improve information sharing and security practices across public and private sector organizations.
  • Deliver trusted and secure services: Build trust in government by securing information, improving software supply chain, and facilitating compliance.

Zero Trust architecture is based on the principle: never trust, always verify. This security approach protects customers by managing and granting access based on continual verification of identities, devices, and services. Zero Trust architecture addresses the challenges modern enterprises face. Microsoft Threat Protection, powered by Azure, is a comprehensive, seamlessly integrated solution that provides end-to-end security for your organization using tools like Azure Sentinel and Microsoft 365 security capabilities. Learn more at https://aka.ms/cybereo.

Panel Discussion

A panel discussion was led by AIS CTO Vishwas Lele. We were honored to have an AIS customer, Richard Inzunza, an IT Specialist from the Department of Homeland Security, speak on the panel. He was joined by Jorge Gallardo, Program Manager from Peraton.

Richard has been with the Federal Government for 36 years and with ICE since its inception in 2003. He has been a key player in the implementation of their hybrid cloud environment. ICE is in the process of building, extending, and expanding its ability to use cloud services from more than one service provider. AWS (Amazon Web Services) and Microsoft Azure are its biggest providers today, but its focus is to be able to take any valuable cloud service and integrate it into the ICE cloud to pass those capabilities on to its users and employees.

Common Challenges

There are several challenges Richard and ICE face in their line of work. Physical servers are no longer the main place data is stored, and helping customers understand the virtual aspect and how data is managed has been a challenge. Getting development teams, IT PMs, and other support teams to understand how to apply the concepts of virtualization is extremely important for future development.

Many developers want to provision a capability without a true understanding of how doing so can open ICE to vulnerabilities. To address this ongoing challenge, they are helping their teams understand the responsibility around cost and the actions taken when provisioning new capabilities. Creating a vehicle that is compliant and future-proof is imperative for federal organizations to adapt and free up time for other key focuses. ICE’s goal is to get its teams to automate the delivery of releases for their custom and third-party applications using pipelines.

Adjusting to a new virtual culture and applying security to this type of environment is a challenge the assurance side of government IT is facing. ICE partnered with Peraton early on to align the implementation phase as they began their journey to the cloud. Three years into that joint effort, ICE security teams are becoming more familiar with virtual environments from the earliest phases of a project.

Ensuring Compliance

Policy compliance and security compliance are two of the areas ICE operates within. For policy compliance, tagging is a method ICE uses, along with serverless Lambda scripts, to enforce compliance. They also have databases that store the valid tag values and the metadata that correlates with each piece of infrastructure or application. Ensuring that type of policy compliance helps management and administrators trust that the information they pull is accurate, which is helpful in many ways.

Security compliance is now managed with advanced scanning tools and checks that detect when a policy has been adjusted. With accurate scanning, Richard is notified when policies have been adjusted and can reach out to the appropriate network to validate the change.

AIS: Your Trusted Azure Partner

We help government organizations create cohesive multi-cloud strategies, bringing the expertise you require for successful cloud adoption, modernization, data intelligence, and beyond. At AIS, we help you take a step back and look at the requirements needed and what services can be used with Azure or other tools to meet needs, offering templates and support for documentation. Our scalable cloud solutions address our clients’ most complex concerns, with deep experience across national security and military operations, as well as Federal, State, and Local Governments and their supporting agencies. We have been working with Azure for 12+ years and will have you well on your way to realizing the best that the cloud can offer.

Join us for future virtual meetings at the Microsoft Azure Government User Community: https://www.meetup.com/dcazuregov/.