For those new to Kubernetes, it’s a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. This blog covers a high-level overview of Kubernetes architecture and components. The image below outlines the various components that we will walk through in this guide.

Master Node

The master node controls and manages a set of worker nodes and hosts the control plane of the Kubernetes cluster. We can talk to the master node via CLI, GUI, or API, and more than one master node can be used for fault tolerance. All master nodes connect to etcd, a distributed key-value store that Kubernetes uses to persist cluster state.

API Server

The API server performs all administrative tasks on the master node. Users send commands to the API server, which validates each request and then processes and executes it.

API Architecture

Key-Value Store (etcd)

etcd is an open-source, distributed key-value store used to hold and manage the critical information that distributed systems need to keep running. Kubernetes uses etcd as the backing store for all cluster data; it holds the entire configuration and state of the cluster.

Controller

The role of the Controller is to obtain the desired state from the API Server. It checks the current state of the nodes it is tasked to control, determines whether there are any differences, and resolves them if there are.

Scheduler

The Scheduler’s main job is to watch for new requests coming from the API Server and assign them to healthy nodes. It ranks the quality of the nodes and deploys pods to the best-suited node.

Worker Node

Worker nodes listen to the API Server for new work assignments; they execute the work assignments and then report the results to the Kubernetes Master node.

Kubernetes Master

Kubelet

The kubelet runs on every node in the cluster and is the principal Kubernetes agent. By installing the kubelet, the node’s CPU, RAM, and storage become part of the broader cluster. It watches for tasks sent from the API Server, executes them, and reports back to the Master.

Container Runtime

The container runtime pulls images from a container image registry and starts and stops containers. Third-party software or a plugin, such as Docker, usually performs this function.

Kube-proxy

The kube-proxy runs on each node and maintains the network rules that route and load-balance traffic to Services and the pods behind them.

Pod

A pod is the smallest element of scheduling in Kubernetes. Without it, a container cannot be part of a cluster.

Pod in Kubernetes
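To make this concrete, here is a minimal sketch of a pod manifest; the names and the nginx image are illustrative, not taken from the original post:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web          # label used later to select this pod
spec:
  containers:
    - name: web
      image: nginx:1.25   # illustrative container image
      ports:
        - containerPort: 80

Applying this file with kubectl apply -f pod.yaml asks the API Server to schedule the pod onto a suitable worker node, where the kubelet starts the container via the container runtime.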

Kubernetes Services

One of the best features Kubernetes offers is that non-functioning pods get replaced by new ones automatically. The new pods, however, come up with a different set of IPs, which can lead to processing issues and IP churn as the old IPs no longer match. If left unattended, this property would make pods highly unreliable. Kubernetes Services solve this by putting a stable virtual IP address and DNS name in front of a group of pods, so clients keep a consistent endpoint even as the pods behind it come and go.
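A minimal sketch of such a Service, using label names that match the illustrative pod above:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # routes to any pod carrying this label
  ports:
    - port: 80        # stable port exposed by the Service
      targetPort: 80  # port the pod's container listens on

Traffic sent to the Service’s stable address is forwarded by kube-proxy to whichever healthy pods currently match the selector.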


Container Deployment

Traditional Deployment

Initially, developers deployed applications on individual physical servers. This type of deployment posed several challenges. The sharing of physical resources meant that one application could take up most of the processing power, limiting the performance of other applications on the same machine.

Traditional Kubernetes Deployment

It takes a long time to expand hardware capacity, which in turn increases costs. To resolve hardware limitations, organizations began virtualizing physical machines.

Virtualized Deployment

Virtualized deployments allow you to scale quickly, spread the resources of a single physical server across multiple VMs, update at will, and keep hardware costs in check. Each VM has its own operating system and can run all critical systems on top of the virtualized hardware.

Virtualized Server

Container Deployment

Container Deployment is the next step in the drive to create a more flexible and efficient model. Much like VMs, containers have individual memory, system files, and processing space. However, strict isolation is no longer a limiting factor. Multiple applications can now share the same underlying operating system. This feature makes containers much more efficient than full-blown VMs. They are portable across clouds, different devices, and almost any OS distribution.

Container Deployment Structure

The container structure also allows for applications to run as smaller, independent parts. These parts can then be deployed and managed dynamically on multiple machines. The elaborate structure and the segmentation of tasks are too complex to manage manually.


What is Ansible?

Ansible is one of the most trusted DevOps tools and can be used on platforms such as Unix, Linux, and Windows, as well as with Azure. It’s an open-source tool used to automate cloud provisioning, configuration management, and application deployments. In addition, it helps provision virtual machines, networks, containers, and complete cloud infrastructures.

How is Ansible Used?

Ansible uses playbook files written in YAML to define a configuration or process model, for example for AKS. A playbook combines all the various tasks into a single file. Once the playbook is ready, we can create a DevOps pipeline and maintain the static configuration for application deployments and other settings. Ansible is safe to use with any cloud service because it is agentless: nothing needs to be installed on the target to perform DevOps activities, it performs the deployment and exits without leaving anything behind, which keeps the attack surface small, and it doesn’t consume CPU the way agent-based deployment tools do. With the help of Ansible, we can create pipelines in Azure DevOps for activities such as building VMs, deploying applications, and applying configuration settings.
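As a minimal illustration of that format (the host group, path, and task below are hypothetical, not from the original post), a playbook simply names the hosts to manage and lists the tasks to run on them:

- name: Example playbook
  hosts: webservers          # hypothetical inventory group
  become: true
  tasks:
    - name: Ensure the application directory exists
      ansible.builtin.file:
        path: /opt/myapp     # illustrative path
        state: directory

Running ansible-playbook against an inventory applies every task in order and reports which hosts changed.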

Deploy a New Application from an Azure DevOps Pipeline

Once your application is ready locally on your desktop, you can repeatedly deploy and redeploy it to the cloud without changes using the CI/CD pipeline. The advantage of using Ansible is that your application runs the same way it ran on your local desktop, so there won’t be any environment-related issues. Redeploying it to another infrastructure is as straightforward as defining your Azure environment and applying your application’s playbook. This ensures your Azure deployments work as seamlessly in a hybrid cloud configuration as they do on a single system.
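As a rough sketch of what such a pipeline can look like (the pipeline variable names and the webapp.yml path are assumptions based on the repo layout described later, not values from the original project), an Azure DevOps YAML pipeline can install Ansible and run the playbook in ordinary script steps:

trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  # Install Ansible, the Azure collection, and its Python dependencies
  # (the requirements file path follows the collection's install docs; adjust for your version)
  - script: |
      pip install ansible
      ansible-galaxy collection install azure.azcollection
      pip install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
    displayName: Install Ansible and Azure modules

  # Run the playbook; the azure_rm_* modules read service principal
  # credentials from these environment variables
  - script: ansible-playbook ansible-scripts/webapp.yml
    displayName: Deploy the application
    env:
      AZURE_CLIENT_ID: $(azureClientId)
      AZURE_SECRET: $(azureClientSecret)
      AZURE_TENANT: $(azureTenantId)
      AZURE_SUBSCRIPTION_ID: $(azureSubscriptionId)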

Ansible Capabilities in Azure

Ansible has modules for many different Azure capabilities, including:

  • Creating Virtual Machines
  • Creating Virtual Networks
  • Creating Storage and Storage Accounts
  • Creating Resource Groups
  • Creating Security Groups
  • Creating Resource Manager Templated Deployments

To deploy using Ansible, we need an Ansible VM in Azure, along with a service principal that Ansible uses to authenticate to Azure:

az ad sp create-for-rbac --name "<service principal name>"

Once the VM is created, connect to the Ansible VM with the login credentials under the SSH tab. Ansible has an agentless architecture; SSH authentication is all it needs:

  1. Navigate to your project. Select Repos.
  2. Select the webapp.yml file under the ansible-scripts folder.

Below are a few examples of YAML files.

  1. Create a resource group and Azure VM (azure_windows_vm.yml; the VM creation task itself is sketched after this list):

     - name: Create Azure VM
       hosts: localhost
       connection: local
       tasks:
         - name: Create resource group
           azure_rm_resourcegroup:
             name: myResourceGroup
             location: eastus

  2. Create the virtual network and subnet (azure_windows_vm.yml):

         - name: Create virtual network
           azure_rm_virtualnetwork:
             resource_group: myResourceGroup
             name: vNet
             address_prefixes: "10.0.0.0/16"

         - name: Add subnet
           azure_rm_subnet:
             resource_group: myResourceGroup
             name: subnet
             address_prefix: "10.0.1.0/24"
             virtual_network: vNet
           register: output_ip_address

  3. We will save all of these YAML files as playbooks and store them in Azure DevOps as artifacts.
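The examples above stop after the resource group and network tasks; below is a hedged sketch of the VM creation task that a playbook like azure_windows_vm.yml would typically add next. The VM name, size, and password variable are illustrative assumptions, not values from the original project; the azure_rm_virtualmachine module comes from Ansible’s Azure collection.

         - name: Create Windows VM
           azure_rm_virtualmachine:
             resource_group: myResourceGroup
             name: winVM                                # illustrative VM name
             vm_size: Standard_DS1_v2                   # illustrative size
             admin_username: azureuser                  # illustrative credentials
             admin_password: "{{ vm_admin_password }}"  # supply at run time, e.g. from a pipeline secret
             os_type: Windows
             image:
               offer: WindowsServer
               publisher: MicrosoftWindowsServer
               sku: 2019-Datacenter
               version: latest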

Build Your Application Using Azure CI Pipeline

  • Navigate to Pipeline > Builds. Select Ansible-CI and click Edit.
  • Select the Azure Dev stage and click View stage tasks to view the pipeline tasks.
  • Select the Replace Tokens task.

Replace the values to match your environment (SQL password, environment name, and so on). Once complete, save the changes, create a release, and run the application deployment by creating a new pipeline. I have included screenshots below for reference.

Creating a release


Stage one and creating a new release

In the Artifacts tab, select the artifact we created from the playbook and click Create.

Stage one in progress

Pipeline succeeded

Introduce Ansible to Your Organization

Ansible can provide you with a simple and powerful package for configuration management and automation, and takes care of the repetitive work to free up admins so they can focus on other efforts. It helps DevOps improve technological scalability, reliability, and consistency. AIS has the application development experience, IT operations experience, and cultural change management skills that DevOps adoption requires. We have deep DevOps and Agile development expertise at the enterprise level and have helped many clients’ development organizations successfully adopt these new practices.

Thank you to Prasantha Raju for his contributions to this blog.

We have multiple DevOps engagement options. If you’re eager to modernize your software development practices, contact us today.

Creating Self Documenting Azure Functions with C# and OpenAPI: Part Three

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is essential: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You know software development fundamentals
  • You are comfortable with command-line interfaces
  • You have completed Part Two of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and enables API clients to generate client code to interact with your APIs automatically. It also gives the documentation a longer shelf life if, six months or a year down the road, we decide a different approach is better.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

This is the final post in this series.

We Need to Lock it Down

In the previous article, we increased the quality of our OpenAPI spec by adding various C# attributes and XML comments to the mix. This resulted in a very useful and informative OpenAPI spec being generated.

Now, it turns out that our Security Team alerted us that some folks in the Warehouse were using their knowledge and access to the “Create Order” API to generate fake orders for themselves. This is a problem, and they have now updated the Security Procedures to require restricting people to the API calls they are supposed to use.

Currently, we have the following functions and departments that need to access them:

  • Create Order: Creates an order to send to the Warehouse (used by the Shopping Department)
  • Order Shipped: Shipment update from the Warehouse to the system (used by the Warehouse)
  • Get Shipping Status: Gets the current shipping status of an order (used by both)

We have two Clients (Shopping Dept and Warehouse) that each need access to two functions.
We need to separate these things into two groups.

Put Functions In Groups

Swashbuckle supports putting APIs into groups by using the ApiExplorerSettings attribute from the Microsoft.AspNetCore.Mvc namespace. We can add this attribute to each function like this:

Unfortunately, since you can’t use more than one ApiExplorerSettings attribute per function, we will need three groupings for this, “Warehouse,” “Shopping,” and “Shared,” to handle the method that is shared between the two. We’ll therefore include the “Shared” method in all of the generated Swagger documents.

[ApiExplorerSettings(GroupName = "Warehouse")]
[FunctionName("OrderShipped")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shopping")]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shared")]
[FunctionName("OrderShippingStatus")]
public async Task<OrderShippingInfo> Run(
//...

By itself, putting them into these groups will not separate things into separate documents for you. It will just add a group name to the API method. For example, in the UI this renders like this:

Swagger UI showing groups for APIs

Create Separate API Specs

To create separate specs, you need to configure Swashbuckle to generate multiple documents and show it how to divide up the methods.

Configure the documents

Back in Startup.cs, we update the configuration with this:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts =>
{
  // incorporate the XML documentation
  opts.XmlPath = "Bmazon.xml";

  // set up an "Everything" document and 2 documents with the 
  // same names as the group names used in the code
  opts.Documents = new SwaggerDocument[] {
    new SwaggerDocument()
    {
      Name = "Everything",
      Title = "Bmazon Shopping API",
      Description = "All APIs",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Shopping",
      Title = "Bmazon Shopping API",
      Description = "API for the Shopping Department",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Warehouse",
      Title = "Bmazon Warehouse API",
      Description = "API for the Bmazon Warehouse",
      Version = "1.0"
    }
  };
  //...

We now have one “Everything” document that we’ll use as a default and two others that will be used by their respective clients.

Let’s configure Swashbuckle, so it knows what APIs to put in which documents.

Update the OpenAPI Functions to support individual API Specs

In that same method in the Startup, we also need to add the following:

opts.ConfigureSwaggerGen = genOpts =>
{
  // configure the separate document inclusion logic
  genOpts.DocInclusionPredicate((docName, apiDesc) =>
  {
    // generating the "everything" doc? then include this method
    if (docName == "Everything")
      return true;

    if (!apiDesc.TryGetMethodInfo(out MethodInfo methodInfo))
      return false;

    // get the value of the [ApiExplorerSettings(GroupName= "foo")]
    var attr = methodInfo.GetCustomAttributes(true)
      .OfType<ApiExplorerSettingsAttribute>().FirstOrDefault();

    var groupName = attr?.GroupName;

    // always return it if it's shared. Otherwise compare doc names
    return groupName == "Shared" || groupName == docName;
  });
};

Add Function Support For Selecting A Group

To allow the clients to select a specific group, we need to modify the JSON and UI OpenAPI functions to support selecting a group.

To do this, we add a new parameter called “group” (defaulting to “Everything”) to the JSON and UI Functions:

/// <summary>
/// function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the document to get (default: "Everything")</param>
/// <returns>the JSON data as an http response</returns>
[SwaggerIgnore]
[FunctionName(nameof(OpenApiJson))]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/json/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  return Task.FromResult(swashbuckleClient
    .CreateSwaggerJsonDocumentResponse(req, group ?? "Everything"));
}

/// <summary>
/// the function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the optional document from the URL (default: "Everything")</param>
/// <returns>the HTML page as an http response</returns>
[SwaggerIgnore]
[FunctionName(nameof(OpenApiUi))]
public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/ui/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  // the CreateSwaggerUIResponse method generates the HTTP page from the JSON Function results
  return Task.FromResult(swashbuckleClient.CreateSwaggerUIResponse(
    req, $"openapi/json/{group ?? "Everything"}"));
}

Now, when you run the functions, you will have the option to have separate API specs for each Client by appending the document name to the URL, like “http://localhost:7071/api/openapi/ui/Shopping”, so that they will only know about the APIs they can call. To further lock this down, you can add authorization to the specific endpoints at a later time, possibly with Azure API Management.

Swagger UI showing the shopping APIs

Swagger UI showing the Warehouse APIs

In the future, rather than using security through obscurity, you can import these separate OpenAPI JSON files into Azure API Management to lock down the individual APIs by client, but we’ll leave that as an exercise for the reader.

Conclusion

Now that you have gone through these three articles, you have self-documenting APIs separated into different groupings that you can expose to individual clients. All you need to do is properly comment your code and decorate the Functions with the proper Attributes and you and your clients will be very satisfied.

Get the completed code from GitHub

What is Puppet?

Puppet is a configuration management tool developed by Puppet Labs to automate infrastructure management and configuration. Puppet includes its own declarative language to describe system configuration. It is a model-driven solution that requires limited programming knowledge to use.

Puppet Architecture

Puppet Architecture Diagram

  • Puppet Master: It performs a critical role by handling all the configuration-related tasks. It applies the configuration to all the target nodes or the client machines.
  • Puppet Agent: The Puppet Agent is the application that manages the configurations on your nodes. It requires a Puppet primary server to fetch configuration catalogs from. Depending on your infrastructure and needs, you can manage systems with Puppet agent as a service, as a cron job, or on-demand.
  • Configuration Repository: A shared location where all configurations are saved and retrieved when required by the Puppet master.
  • Puppet Facts: Details about the target machine, such as classification, functional, and regional attributes, which the Puppet master analyzes to determine and apply the required configuration.
  • Catalog: A compiled version of the configuration that is pushed to the target machines.

Puppet Workflow

Puppet Workflow Diagram

The Puppet master collects the details of the target node using Facter, which gathers the node’s fact files. The Puppet master compares the retrieved configuration with the defined configuration based on the server’s facts, then creates a catalog and pushes it to the Puppet agent on the target node. The Puppet agent applies the catalog on its next run and reports the result back to the Puppet master.

Note: The above explains the basic functionality of Puppet so that we can quickly troubleshoot issues. Below are a few commonly occurring issues and their solutions.

Puppet Use Cases

Case 1

Challenge: The target machine stops appearing in the Puppet console CLI or the Puppet web console. This is probably because the Puppet agent on the target machine is no longer reporting to the Puppet master.

Solution:

  1. Log in to the target machine
  2. Open PowerShell or CMD with elevated privileges.
  3. Navigate to C:\ProgramData\PuppetLabs\puppet\etc
  4. Run: notepad puppet.conf
    In the configuration file, remove the agent server name, which is highlighted below.
    Remove Agent server name
  5. Save the puppet.conf file
  6. Run the command: puppet agent -t

Now you should see the target machine in Puppet Web console/CLI if the Agent rerun completes successfully.

Case 2

Challenge: Run of Puppet configuration client already in progress.
This is probably due to the Puppet agent getting stuck during its periodic Run.
Puppet configuration in progress

Solution:

  1. Log into the target machine
  2. Open File Explorer
  3. Navigate to C:\ProgramData\PuppetLabs\puppet\cache\state
  4. Remove the file named “agent_catalog_run.lock”
  5. Now open PowerShell/CMD with elevated privileges
  6. Run the command: puppet agent -t

Now the Puppet agent rerun should complete successfully and apply the updated configurations.

Case 3

Challenge: Certificate error after the Puppet agent rerun. This is probably due to a certificate mismatch for the target machine in the Puppet master configuration.

Solution:

  1. Log in to the target machine
  2. Open PowerShell/CMD with elevated privileges
  3. Clean the certificate from the target machine by running: puppet ssl clean
  4. Log in to the Puppet master server
  5. If using the CLI, run: puppetserver ca clean --certname targetmachine.contoso.com
  6. If using the UI, run the “Clean Certificate” task for targetmachine.contoso.com (Note: the task must be preconfigured in the web UI)
  7. Run puppet agent -t in PowerShell/CMD on the target machine

Now the Puppet agent rerun should complete successfully.

Case 4

Error 1

Error One: Command Prompt

Error 2

“Could not evaluate: Failed to open TCP connection to Puppetmaster.contoso.com:443 (getaddrinfo: No such host is known. )”

Error 3

Exception calling “DownloadFile” with “2” argument(s): “Unable to connect to the remote server.”
This is probably due to a communication block between the target machine and the Puppet master on ports 443 and 8140.

Solution:

  1. Log in to the target machine
  2. Run Telnet (PuppetMasterIP) 443 or 8140, or Test-NetConnection -ComputerName (PuppetMasterIP) -Port 8140
  3. If the connection succeeds, you can proceed with the rerun of the Puppet agent or the Puppet installation.
  4. If not, submit a firewall request to open ports 443 and 8140 between the Puppet master and the target machine’s subnet.

Then you can proceed with the rerun or the installation of Puppet.

Case 5

Challenge: Puppet facts do not match the details in puppet.conf.
This is probably due to a mismatch between the details in the puppet.conf file and the facts in facts.d.

Solution:

  1. Log in to the target machine
  2. Open the puppet.conf file at C:\ProgramData\PuppetLabs\puppet\etc
  3. Open the fact files at C:\ProgramData\PuppetLabs\facter\facts.d
  4. Make sure they share identical attributes for server classification and any other attributes they have in common (a sample external fact file is sketched below)
  5. Make the necessary changes, save the files, and rerun the agent

Puppet agent rerun should be successful now.
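For reference, an external fact in facts.d can be a plain YAML file; the file name and attribute names below are hypothetical and only need to line up with what your node classification and puppet.conf expect:

# C:\ProgramData\PuppetLabs\facter\facts.d\classification.yaml  (illustrative)
server_classification: Production
server_region: EastUS
server_function: WebServer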

Case 6:

Challenge: Puppet installs, but the puppet.conf file is missing. While installing Puppet on the target machine, the installation completes successfully, but it does not write the puppet.conf file.
Solution: In this case, we can copy the puppet.conf file from an adjacent server in the same subnet and make the necessary changes for server classification (for example, Production or Test) in the environment attribute. Now rerun the Puppet agent, and it should succeed.

Conclusion

Puppet is a vast subject; the troubleshooting scenarios above are ones I have commonly encountered, along with their solutions. I hope this article helps you troubleshoot and resolve issues related to Puppet. It is written entirely from my personal experience, and I look forward to any feedback or advice. Good luck!

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.
Azure Functions let you focus on what counts — your requirements, your time and your code — and less about boilerplate code, infrastructure and processes.

When creating new APIs in any technology, one thing is very important: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You have knowledge of software development fundamentals
  • You have completed Part One of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and even enables API clients to automatically generate client code to interact with your APIs. It also gives the documentation a longer shelf life if, 6 months or a year down the road, we decide a different approach is better.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

We Need Better Documentation

In the Previous Post, we got the Azure Functions application to start generating OpenAPI docs for our functions, but the results were somewhat underwhelming:

Swagger UI page for the CreateOrder operation showing very little detail

Here you can see that, even though the CreateOrder call takes an Order object in the body of the HTTP Post, there is no documentation describing this. This is because, unlike when writing traditional dotnet core APIs, the order is not a parameter to the function. Swashbuckle only has access to the function signature and anything that can be discovered through reflection.

This output is not very helpful to our clients. They need to know our inputs, potential HTTP Codes to expect (it just assumes that it will return a 200), and other pertinent information like what the method does and what the return data will look like.

For instance, if we add data validation to this method, we may wind up returning a 400 (Bad Request). We could also possibly return a 409 (Conflict) if the order already exists.

Since you’re reading this, you know there is a way to do this. Let’s get to it.

Give Swashbuckle More Info

In order for the OpenAPI documentation to be much better, we need to add a few things that Swashbuckle will be able to use to generate the docs.

As I stated previously, Swashbuckle only has access to things that can be discovered through reflection, which means the definition of your function, its parameters and any attributes decorating it, so the following translates to very little information.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    HttpRequestMessage req,
    ILogger log)

This doesn’t even have any information about the Order type that is expected in the body of the method, never mind return codes.

Expected Body Type

To document the type expected in the body of the POST, we need to tell Swashbuckle what to expect. We do this by using the RequestBodyType attribute from the AzureFunctions.Extensions.Swashbuckle.Attribute namespace.

Note that this is an additional attribute on the req parameter on top of the existing HttpTrigger attribute.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    [RequestBodyType(typeof(Order), "The Order To Create")] // Describes the Body
    HttpRequestMessage req,
    ILogger log)

With this in place, Swashbuckle knows what type the body contains and we now see that the body type is present in the UI:

The Create Order Swagger UI with body type specified

If you click on the “Schema” link, you will even see the data type names being used:

Create Order Swagger UI showing Schema of the Order type

Note that the Items array is marked nullable: true which is not desirable. We will address that below in the Data Annotations section.

The bottom of the page also shows you all the current objects in the Schema that are known:

Swagger UI with the all Schemas

This information documents all the details about the DTOs being used in this API. But we need to fix the nullability and other validation-related information.

Add Data Annotations

Above, the Order‘s Items collection was marked as nullable. We want to fix that and other validation information that Swashbuckle can read. To do that, we need to add Data Annotations to the object definitions.

Currently, the Order looks like this:

public class Order
{
  public int OrderId { get; set; }
  public IEnumerable<OrderLineItem> Items { get; set; }
}

In order to tell Swashbuckle (and our clients) that the Items collection is required, we have to mark it with [Required] (from System.ComponentModel.DataAnnotations) and [NotNull] (from the System.Diagnostics.CodeAnalysis namespace).

The NotNull attribute is also needed because OpenAPI, not being language-specific, supports the concept of null along with the lack of presence of the variable. JavaScript developers will relate to this concept with undefined and null keywords.

So, in order to tell clients that fields MUST have a value, you need to add both attributes to the Items property.

  public class Order
  {
    public int OrderId { get; set; }

    [Required, NotNull]
    public IEnumerable<OrderLineItem> Items { get; set; }
  }

The results:

Create Order Swagger UI with corrected Required fields

Note the red "*", meaning required, next to the items collection and the lack of the nullable:true.

To properly annotate the objects, we’ll mark the ID, Quantity, and SKU as required as well. Additionally, we’ll add reasonable [Range] and other appropriate restrictions:

public class Order
{
  // if we get to 2 billion orders, we'll all be retired anyway
  [Required, NotNull, Range(1, int.MaxValue)]
  public int OrderId { get; set; }

  [Required, NotNull, MinLength(1)]
  public IEnumerable<OrderLineItem> Items { get; set; }
}

public class OrderLineItem
{
  [Required, NotNull, Range(1, 1000)]
  public int Quantity { get; set; }

  [Required, NotNull, MinLength(1)]
  public string SKU { get; set; }
}

So, the final schema looks like this:
Create Order Swagger UI with Full data annotations

Now your clients know the simple data validations for these objects, but don’t know what the return payloads and HTTP codes are.

Potential Return Types and Codes

By default, Swashbuckle will tell the clients to expect a 200 (Success) HTTP result with no payload.

Swagger UI showing only a 200 response

This doesn’t include any information about any payload sent back to the user and is most likely incorrect or at least not the whole story.

If we know our Function is going to return multiple HTTP codes with different payloads, we need to tell Swashbuckle by using the [ProducesResponseType] attribute on the Function itself.

Assuming we return the following:

  • 200/Success with a string message payload
  • 400/BadRequest with a collection of error messages

We decorate our function like this:

[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(

This results in

Swagger UI - multiple response types with http codes

So, we’ve now exposed the input and output types, but we haven’t been able to add any additional information to describe objects or fields to our clients. To do that, we need to add XML comments to the output as well.

To make this information even better, we can comment our code properly. Of course, you were already doing that, right? RIGHT?

Better Comments in the OpenAPI Spec

One thing that you may notice is that, at the top of the function, there is very little information about the method except the name (e.g. “CreateOrder”). We should add a summary about the method.

Now, I need to apologize because I lied to you. Previously, when I said “Swashbuckle only has access to things that can be discovered through reflection”, I was lying (Forgive me!). To give client devs more information about the methods being exposed by an API, we can add C# XML Documentation information to the code and, if configured for it, Swashbuckle will incorporate that too, which can be invaluable.

Add XML Comments

We now add comments like this to our C# code (Functions and DTOs)

/// <summary>
/// Creates an Order that will be shipped to the Warehouse for fulfillment.
/// </summary>
/// <param name="req">the HTTP request</param>
/// <param name="log">the logger</param>
/// <returns>a success message or a collection of error messages</returns>
/// <response code="200">
///   Indicates success and returns a user-friendly message
/// </response>
/// <response code="400">
///   Indicates a data validation issue and will return a list of data validation errors
/// </response>
[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run() {
// ...
}

/// <summary>
/// An Order sent from the Shipping Division to be sent to the Warehouse
/// </summary>
public class Order
{
//...
}

The Swagger UI won’t have changed yet. In order for the Swashbuckle library to read this information, you need to tell the C# compiler to generate the documentation in an XML file and tell Swashbuckle about it.

Generate XML Doc file

At the top of the csproj file, add the following line to the first PropertyGroup you see.

<DocumentationFile>Bmazon.xml</DocumentationFile>

If you are using Visual Studio, you can access this setting from the Build tab in the project settings.

Now tell Swashbuckle about the XML file

Currently, we’re configuring Swashbuckle in the StartUp.cs file with:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly());

Replace this with

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts => {
  opts.XmlPath = "Bmazon.xml";
});

Now, when you rerun the app, the final result will be a page with the new title and the order schema will have much more detail.

Swagger UI showing Create Order Scheme with XML comment information

The users of your service will thank you for documenting your APIs this thoroughly. Additionally, they won’t have to ask you questions about the details of how to use the APIs. They can even generate client code with various tools as well.

Get the completed code from GitHub

Next Steps

Now that you have really descriptive documentation for your APIs being automatically generated, your security folks may not like you sending all your API documentation to every single client, regardless of need.

In part three of the series, I will show you how to separate the APIs out into separate groupings and keep your clients all in their own lane.


Establishing a Clear Cloud Adoption Strategy

With all the buzz around cloud adoption, I want to share my thoughts in this blog on approaches and strategies for migrating on-premises workloads to public clouds. It predominantly covers the migration of infrastructure assets along with their applications. While many feel that cloud adoption brings significant OpEx and CapEx savings, it requires meticulous planning, design, and architecture. That means carefully considering aspects such as network traffic inflow and outflow, Business Continuity Planning (BCP) including High Availability and Disaster Recovery (HA/DR), data replication within regions or across geographies, and the one-time data transfer to the cloud during migration.

There are many ways you can migrate your applications to the cloud. However, before the migration phase, it is essential to determine the current environment, dependencies, servers and applications, licenses, and much more.

Three Popular Migration Approaches

  • Rehost: Generally referred to as lift and shift or like-to-like migration, this approach lifts servers or applications from the current hosting environment and shifts them to infrastructure in the public cloud. It carries low risk in the cloud journey but may not leverage the full benefits of the cloud.
  • Replatform: Minor changes to the application and/or Infrastructure architecture during the migration. This approach leverages cloud benefits such as App Pools, Availability Sets, AZs, etc.
  • Refactor: Re-architect your existing application to leverage the features and services offered by the cloud environments. Refactor can become an expensive option but can avail the best possible benefits of the cloud. One can re-architect the application to become highly available and scalable. It is the most time-consuming among all options since it requires complete awareness of current architecture and building a new solution by leveraging cloud services.


Critical Factors to Consider

Vendor Lock-in

  • Vendor lock-in is the common term for a situation where an organization wishes to transfer its business away from one of its current vendors but cannot do so due to the projected cost, duration, or complexity of switching.
  • Considerations when choosing a cloud partner:
    • Create and Negotiate Your Exit Strategy
      • Always remember that the best time to create an exit strategy is before signing an initial service agreement.
    • Develop or design portable vendor-agnostic applications
      • Building portable applications can also help organizations avoid cloud vendor lock-in. Suppose you develop a business-critical application whose core functionality depends on a platform-specific feature like Azure’s data lake analytics or Amazon Lex. In that case, you’ll end up locked into that cloud service provider.
  • Multi-Cloud
    • This cloud computing model encompasses the use of multiple cloud providers, choosing specific services from each as requirements dictate. This approach reduces the dependency on a single cloud technology or cloud service provider, thus avoiding cloud vendor lock-in and providing the best of multiple cloud service providers.

Business Continuity and Disaster Recovery

  • Business Continuity and Disaster Recovery (BCDR or BC/DR) is a set of processes and techniques used to help an organization recover from a disaster and continue or resume routine business operations. It is a broad term that combines the roles and functions of IT and business in the aftermath of a disaster.
  • High Availability Design: Recovery Point Objective (RPO) is the maximum amount of data loss a business can tolerate during an outage, which drives how frequently you take backups. Recovery Time Objective (RTO) is the amount of downtime a business can tolerate.
  • Fault Domains: A set of hardware components that share a single point of failure.
  • Update Domains:  A logical group of the underlying hardware that can undergo maintenance or be rebooted at the same time.
  • Availability Set: A logical grouping of VMs that allows Azure to understand how your application is built to provide for redundancy and availability.
  • Availability Zones: Physically separate, independent data center locations within a cloud region.

Network Costs

  • While on-premises, data transfer doesn’t cost a dime; data transit in the cloud does. Various cloud providers have multiple charges. This is one of the significant components that need to be factored in while moving to the cloud.
  • Ingress, or inflow to the cloud, is generally not chargeable. Egress, or outflow traffic, is chargeable and applies to data flowing from the cloud to on-premises or from the cloud to the internet.
  • While data transfer within the same availability zone is free for some cloud providers, it is chargeable across regions or geographies.

Security Risks and Compliance

Shared Resources in Public Cloud

  • For some clients, their applications or systems (VMs) shouldn’t be hosted on the same physical host as their competitors’. When opting for standard VMs, you don’t control which physical system the VMs are created on or migrated to.
    For this kind of compliance requirement, the client should opt for dedicated hosts.

Data Security During Transit and Rest

  • Most CSPs encrypt data by default while it’s in transit over the internet from the user to the cloud, and then internally as it moves within the provider network, for example, between data centers, zones, and regions.
  • Data at rest, stored in buckets or on disks, can also be secured by encryption; cloud providers offer options such as provider-managed keys and customer-supplied keys.

Backup and Restoration

  • While backing up data to a cloud location doesn’t incur network traffic costs, many CSPs charge for storage during the backup stage; during recovery, however, network costs do apply.
  • While CSPs offer many options for storing data in the cloud, a key consideration when choosing a storage solution is the frequency of data access (hot/cold/archive tiers).
  • Restoration of backed up data
  • Data archival solution for compliance requirements

Four Stages of Cloud Adoption: A Quick Glance

  1. Assessment and Design: The first stage covers asset discovery, asset consolidation, scope definition, and costing.
  2. Planning: Wave group creation, cutover planning, defining exit criteria, stakeholder identification, and notification channel creation.
  3. Cutover and Execution: Migration rollout, sign-off approvals, and rollback (if needed).
  4. Analysis and Optimization: Baselining and performance analysis, and optimizing resource utilization by leveraging cloud-native tools.

Asset Discovery

  • The first and most crucial step in cloud adoption is tracing out the inventory of hardware and software assets and how they are interconnected, for example: servers (physical or virtual, Linux or Windows), network gear such as switches, routers, wireless and voice devices, storage arrays, backup devices, and security devices such as firewalls and VPN gateways.
  • Validate the identified assets against the customer’s available CMDB and create application affinity mappings through workshops, surveys, and interviews with application teams
  • Discover infrastructure workload footprint
  • Application to server grouping
  • High-level application dependency mapping
  • Gather as-is resource parameters

Application Views

Define Migration Scenarios

  • Assess various approaches for migration and best fits for multiple scenarios
  • Proof of concept and benchmarking cut over timelines
  • Finalize the migration strategy for the various scenarios, including 2-tier architecture, 3-tier architecture, and infrastructure utility servers such as AD, print servers, collaboration servers, etc.

Wave Group Creation

  • Finalize wave groups in line with Application dependency
  • Create cutover plans and finalize downtime windows; rollback time also needs to be included
    • The cutover plan also needs to include test cases from the infrastructure and App/Biz teams
  • Identification of stakeholders and communication modes

Migration Execution

  • Communication channels initiation and notification to all stakeholders
  • Initiate pre-migration activities:
    • Build the target environment, establish network connectivity, enable the necessary firewall rules, and set up data replication
    • Migration tool setup
  • Cutover initiation and progress monitoring
  • Cutover to Target environment on public cloud
  • UAT by application team and sign off
  • Cutover and go-live
  • Rollback, in case of failures

Hand Over to Support Team

  • Heightened support for 48 hours by the Migration team
  • Handover of successful workloads on cloud environment to the operations team
  • Request the support team to decommission the source workloads on-premises after one week (per the retention policy accepted by the customer)

Bringing Organizations Cost-Effective Solutions

Cloud adoption has become one of the critical strategies for many organizations to bring cost-effective solutions to their business community. Nowadays, many Cloud Solution Providers (CSPs) and System Integrators (SIs) offer professional consulting services to facilitate the cloud adoption journey. It is recommended to decide which cloud strategy works for your organization and to validate the items mentioned above in order to monitor and optimize resource utilization.

Transform your business into a modern enterprise that engages customers, supports innovation, and differentiates your organization, all while cutting costs.