Creating Self Documenting Azure Functions with C# and OpenAPI: Part Three

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes MicroServices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is essential: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog post assumes the following:

  • You are familiar with C#
  • You know software development fundamentals
  • You are comfortable with command-line interfaces
  • You have completed Part Two of this series

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and enables API clients to automatically generate client code to interact with your APIs. It also helps with shelf life: if, six months or a year down the road, we decide a different approach is better, the documentation will follow the code.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

This is the final post in this series.

We Need to Lock it Down

In the previous article, we increased the quality of our OpenAPI spec by adding various C# attributes and XML comments to the mix. This resulted in a very useful and informative OpenAPI spec being generated.

Now, it turns out that our Security Team alerted us that some folks in the Warehouse were using their knowledge and access to the “Create Order” API to generate fake orders for themselves. This is a problem, and they have now updated the Security Procedures to require restricting people to the API calls they are supposed to use.

Currently, we have the following functions and departments that need to access them:

Function              Description                                         Shopping Dept   Warehouse
Create Order          Creates an order to send to the Warehouse                ✓
Order Shipped         Shipment update from the Warehouse to the System                         ✓
Get Shipping Status   Gets the current shipping status of an order             ✓               ✓

We have two Clients (Shopping Dept and Warehouse) that each need access to two functions.
We need to separate these things into two groups.

Put Functions In Groups

Swashbuckle supports putting operations into groups by using the ApiExplorerSettings attribute from the Microsoft.AspNetCore.Mvc namespace.

Unfortunately, you can only apply one ApiExplorerSettings attribute per function, so we need three groups, “Warehouse,” “Shopping,” and “Shared,” to handle the method that both departments use. We will include the “Shared” methods in every Swagger document generated. We decorate the functions like this:

[ApiExplorerSettings(GroupName = "Warehouse")]
[FunctionName("OrderShipped")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shopping")]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
//...

[ApiExplorerSettings(GroupName = "Shared")]
[FunctionName("OrderShippingStatus")]
public async Task<OrderShippingInfo> Run(
//...

By itself, putting the functions into these groups will not separate them into separate documents; it only adds a group name to each API method. In the UI, this renders like this:

Swagger UI showing groups for APIs

Create Separate API Specs

To create separate specs, you need to configure Swashbuckle to generate multiple documents and show it how to divide up the methods.

Configure the documents

Back in Startup.cs, we update the configuration with this:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts =>
{
  // incorporate the XML documentation
  opts.XmlPath = "Bmazon.xml";

  // set up an "Everything" document and 2 documents with the 
  // same names as the group names used in the code
  opts.Documents = new SwaggerDocument[] {
    new SwaggerDocument()
    {
      Name = "Everything",
      Title = "Bmazon Shopping API",
      Description = "All APIs",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Shopping",
      Title = "Bmazon Shopping API",
      Description = "API for the Shopping Department",
      Version = "1.0"
    },
    new SwaggerDocument()
    {
      Name = "Warehouse",
      Title = "Bmazon Warehouse API",
      Description = "API for the Bmazon Warehouse",
      Version = "1.0"
    }
  };
  //...
});

We now have one “Everything” document that we’ll use as a default and two others that will be used for their respective clients.

Next, let’s configure Swashbuckle so it knows which APIs to put in which documents.

Update the OpenAPI Functions to support individual API Specs

In that same method in the Startup, we also need to add the following:

opts.ConfigureSwaggerGen = genOpts =>
{
  // configure the separate document inclusion logic
  genOpts.DocInclusionPredicate((docName, apiDesc) =>
  {
    // generating the "everything" doc? then include this method
    if (docName == "Everything")
      return true;

    if (!apiDesc.TryGetMethodInfo(out MethodInfo methodInfo))
      return false;

    // get the value of the [ApiExplorerSettings(GroupName= "foo")]
    var attr = methodInfo.GetCustomAttributes(true)
      .OfType<ApiExplorerSettingsAttribute>().FirstOrDefault();

    var groupName = attr?.GroupName;

    // always return it if it's shared. Otherwise compare doc names
    return groupName == "Shared" || groupName == docName;
  });
};

Add Function Support For Selecting A Group

To allow clients to request a specific group, we need to modify the OpenAPI JSON and UI functions to accept a group name.

To do this, we add a new route parameter called “group” to both functions (defaulting to “Everything”).

/// <summary>
/// function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the document to get (default: "Everything")</param>
/// <returns>the JSON data as an http response</returns>
[SwaggerIgnore]
[FunctionName("OpenApiJson")]
public static Task<HttpResponseMessage> RunJson(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/json/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  return Task.FromResult(swashbuckleClient
    .CreateSwaggerJsonDocumentResponse(req, group ?? "Everything"));
}

/// <summary>
/// the function implementation
/// </summary>
/// <param name="req">the http request</param>
/// <param name="swashbuckleClient">the injected Swashbuckle client</param>
/// <param name="group">the optional document from the URL (default: "Everything")</param>
/// <returns>the HTML page as an http response</returns>
[SwaggerIgnore]
[FunctionName("OpenApiUi")]
public static Task<HttpResponseMessage> RunUi(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "openapi/ui/{group?}")]
    HttpRequestMessage req,
    [SwashBuckleClient] ISwashBuckleClient swashbuckleClient,
    string group)
{
  // the CreateSwaggerUIResponse method generates the HTML page from the JSON Function results
  return Task.FromResult(swashbuckleClient.CreateSwaggerUIResponse(
    req, $"openapi/json/{group ?? "Everything"}"));
}

Now, when you run the Function App, each client can get a separate API spec by appending the document name to the URL, like “http://localhost:7071/api/openapi/ui/Shopping”, so they will only know about the APIs they can call. To further lock this down, you can add authorization to the specific endpoints at a later time, possibly with Azure API Management.

Swagger UI showing the shopping APIs

Swagger UI showing the Warehouse APIs

In the future, rather than using security through obscurity, you can import these separate OpenAPI JSON files into Azure API Management to lock down the individual APIs by client, but we’ll leave that as an exercise for the reader.

Conclusion

Now that you have gone through these three articles, you have self-documenting APIs separated into different groupings that you can expose to individual clients. All you need to do is comment your code properly and decorate the Functions with the right attributes, and both you and your clients will be very satisfied.

Get the completed code from GitHub


We Need Better Documentation

In the Previous Post, we got the Azure Functions application to start generating OpenAPI docs for our functions, but the results were somewhat underwhelming:

Swagger UI page for the CreateOrder operation showing very little detail

Here you can see that, even though the CreateOrder call takes an Order object in the body of the HTTP POST, there is no documentation describing this. This is because, unlike in a traditional .NET Core API, the Order is not a parameter to the function. Swashbuckle only has access to the function signature and anything that can be discovered through reflection.

This output is not very helpful to our clients. They need to know our inputs, potential HTTP Codes to expect (it just assumes that it will return a 200), and other pertinent information like what the method does and what the return data will look like.

For instance, if we add data validation to this method, we may wind up returning a 400 (Bad Request). We could also possibly return a 409 (Conflict) if the order already exists.
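To make that concrete, here is a minimal sketch (not code from the article) of how the CreateOrder body might surface those codes. ValidateOrder and OrderExistsAsync are hypothetical helpers standing in for whatever validation and lookup logic the real OrderService performs, and the IActionResult helpers come from Microsoft.AspNetCore.Mvc:

// requires System.Linq, Microsoft.AspNetCore.Mvc, and the usual HTTP trigger usings;
// ReadAsAsync<T> comes from the Microsoft.AspNet.WebApi.Client package
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    HttpRequestMessage req,
    ILogger log)
{
  Order order = await req.Content.ReadAsAsync<Order>();

  IEnumerable<string> errors = ValidateOrder(order);  // hypothetical validation helper
  if (errors.Any())
  {
    return new BadRequestObjectResult(errors);        // 400 plus the validation messages
  }

  if (await OrderExistsAsync(order.OrderId))          // hypothetical duplicate check
  {
    return new ConflictResult();                      // 409 if the order already exists
  }

  return new OkObjectResult($"Order {order.OrderId} created"); // 200 with a friendly message
}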

Since you’re reading this, you know there is a way to do this. Let’s get to it.

Give Swashbuckle More Info

In order for the OpenAPI documentation to be much better, we need to add a few things that Swashbuckle will be able to use to generate the docs.

As I stated previously, Swashbuckle only has access to things that can be discovered through reflection, which means the definition of your function, its parameters and any attributes decorating it, so the following translates to very little information.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    HttpRequestMessage req,
    ILogger log)

This doesn’t even have any information about the Order type that is expected in the body of the method, never mind return codes.

Expected Body Type

To document the type expected in the body of the POST, we need to tell Swashbuckle what to expect. We do this by using the RequestBodyType attribute from the AzureFunctions.Extensions.Swashbuckle.Attribute namespace.

Note that this is an additional attribute on the req parameter on top of the existing HttpTrigger attribute.

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
    [RequestBodyType(typeof(Order), "The Order To Create")] // Describes the Body
    HttpRequestMessage req,
    ILogger log)

With this in place, Swashbuckle knows what type the body contains and we now see that the body type is present in the UI:

The Create Order Swagger UI with body type specified

If you click on the “Schema” link, you will even see the data type names being used:

Create Order Swagger UI showing Schema of the Order type

Note that the Items array is marked nullable: true which is not desirable. We will address that below in the Data Annotations section.

The bottom of the page also shows you all the current objects in the Schema that are known:

Swagger UI with the all Schemas

This information documents all the details about the DTOs being used in this API. But we need to fix the nullability and other validation-related information.

Add Data Annotations

Above, the Order’s Items collection was marked as nullable. We want to fix that, along with other validation-related information that Swashbuckle can read. To do that, we need to add Data Annotations to the object definitions.

Currently, the Order looks like this:

public class Order
{
  public int OrderId { get; set; }
  public IEnumerable<OrderLineItem> Items { get; set; }
}

In order to tell Swashbuckle (and our clients) that the Items collection is required, we have to mark it with [Required] (from the System.ComponentModel.DataAnnotations namespace) and [NotNull] (from the System.Diagnostics.CodeAnalysis namespace).

The NotNull attribute is also needed because OpenAPI, not being language-specific, distinguishes between a property that is null and a property that is absent entirely. JavaScript developers will recognize this distinction as the difference between null and undefined.

So, in order to tell clients that fields MUST have a value, you need to add both attributes to the Items property.

  public class Order
  {
    public int OrderId { get; set; }

    [Required, NotNull]
    public IEnumerable<OrderLineItem> Items { get; set; }
  }

The results:

Create Order Swagger UI with corrected Required fields

Note the red "*" (meaning required) next to the Items collection and the absence of nullable: true.

To properly annotate the objects, we’ll mark the ID, Quantity, and SKU as required as well. Additionally, we’ll add reasonable [Range] and other appropriate restrictions:

public class Order
{
  // if we get to 2 billion orders, we'll all be retired anyway
  [Required, NotNull, Range(1, int.MaxValue)]
  public int OrderId { get; set; }

  [Required, NotNull, MinLength(1)]
  public IEnumerable<OrderLineItem> Items { get; set; }
}

public class OrderLineItem
{
  [Required, NotNull, Range(1, 1000)]
  public int Quantity { get; set; }

  [Required, NotNull, MinLength(1)]
  public string SKU { get; set; }
}

So, the final schema looks like this:
Create Order Swagger UI with Full data annotations

Now your clients know the simple data validations for these objects, but don’t know what the return payloads and HTTP codes are.

Potential Return Types and Codes

By default, Swashbuckle will tell the clients to expect a 200 (Success) HTTP result with no payload.

Swagger UI showing only a 200 response

This doesn’t include any information about any payload sent back to the user and is most likely incorrect or at least not the whole story.

If we know our Function is going to return multiple HTTP codes with different payloads, we need to tell Swashbuckle by using the [ProducesResponseType] attribute on the Function itself.

Assuming we return the following:

  • 200/Success with a string message payload
  • 400/BadRequest with a collection of error messages

We decorate our function like this:

[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(

This results in:

Swagger UI - multiple response types with http codes

So, we’ve now exposed the input and output types, but we haven’t been able to add any additional information to describe objects or fields to our clients. To do that, we need to add XML comments to the output as well.

To make this information even better, we can comment our code properly. Of course, you were already doing that, right? RIGHT?

Better Comments in the OpenAPI Spec

One thing that you may notice is that, at the top of the function, there is very little information about the method except the name (e.g. “CreateOrder”). We should add a summary about the method.

Now, I need to apologize because I lied to you. Previously, when I said “Swashbuckle only has access to things that can be discovered through reflection”, I was lying (Forgive me!). To give client devs more information about the methods being exposed by an API, we can add C# XML Documentation information to the code and, if configured for it, Swashbuckle will incorporate that too, which can be invaluable.

Add XML Comments

We now add comments like this to our C# code (Functions and DTOs)

/// <summary>
/// Creates an Order that will be shipped to the Warehouse for fulfillment.
/// </summary>
/// <param name="req">the HTTP request</param>
/// <param name="log">the logger</param>
/// <returns>a success message or a collection of error messages</returns>
/// <response code="200">
///   Indicates success and returns a user-friendly message
/// </response>
/// <response code="400">
///   Indicates a data validation issue and will return a list of data validation errors
/// </response>
[ProducesResponseType(typeof(string), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(IEnumerable<string>), StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run() {
// ...
}

/// <summary>
/// An Order sent from the Shipping Division to be sent to the Warehouse
/// </summary>
public class Order
{
//...
}

The Swagger UI won’t have changed yet. In order for the Swashbuckle library to read this information, you need to tell the C# compiler to generate the documentation in an XML file and tell Swashbuckle about it.

Generate XML Doc file

At the top of the csproj file, add the following line to the first <PropertyGroup> you see:

<DocumentationFile>Bmazon.xml</DocumentationFile>

If you are using Visual Studio, you can also access this setting from the Build tab in the Project settings.

Now tell Swashbuckle about the XML file

Currently, we’re configuring Swashbuckle in the Startup.cs file with:

builder.AddSwashBuckle(Assembly.GetExecutingAssembly());

Replace this with

builder.AddSwashBuckle(Assembly.GetExecutingAssembly(), opts => {
  opts.XmlPath = "Bmazon.xml";
});

Now, when you rerun the app, the final result will be a page where the operations show their new summaries and the Order schema has much more detail.

Swagger UI showing Create Order Scheme with XML comment information

The users of your service will thank you for documenting your APIs this thoroughly. Additionally, they won’t have to ask you questions about the details of how to use the APIs, and they can even generate client code with various tools.

Get the completed code from GitHub

Next Steps

Now that you have really descriptive documentation for your APIs being automatically generated, your security folks may not like you sending all your API documentation to every single client, regardless of need.

In part three of the series, I will show you how to separate the APIs out into separate groupings and keep your clients all in their own lane.



Creating the App

To create the app, we will start with the Azure Functions Core Tools. At the time of this writing, the current version of this library is 3.0.3477.

NOTE: This version uses dotnet CLI version 3.1 internally, so if the dotnet executable in your path is not that version, it could cause you issues. If you run into errors, this may be fixed by adding a global.json file in the current directory with the following content, which will tell the dotnet CLI to use whatever 3.1.x version you have installed.

{
  "sdk": {
    "version": "3.1.0",
    "rollForward": "latestMinor"
  }
}

At the PowerShell prompt, we’ll run the following to create our project:

C:\dev> func --version
3.0.3477
C:\dev> func init Bmazon --worker-runtime dotnet

Writing C:\dev\Bmazon\.vscode\extensions.json

This will create the shell of a project inside the C:\dev\Bmazon folder.

While creating the app, I’ve copied in an OrderService and the related DTOs from the existing application we’re modernizing so they can be used by the new functions we are creating. You can see the completed code on GitHub. You’ll see a bit more of them in the next article.

Learn more about Azure Functions From Microsoft Docs

Add Functions

We’re going to add three different functions to our app.

Shopping API

The Shopping division needs to call an HTTP API to place an order with the warehouse, so we will add a CreateOrder function that performs this action.

(This can be done interactively by running func new and following prompts, but using the command line parameters is more concise.)

C:\dev\Bmazon> func new --template HttpTrigger --name CreateOrder `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: CreateOrder

The function "CreateOrder" was created successfully from the 
"HTTPTrigger" template.

Strangely, it outputs a prompt to select the template even when you have passed in the selection as a parameter. You can ignore this.

Warehouse API

Later in our process, the Warehouse team needs to call an HTTP endpoint to send tracking information back to the Shopping division.

We will follow the pattern above and create an API for them to call.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShipped `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShipped

The function "OrderShipped" was created successfully from the 
"HTTPTrigger" template.

Shared APIs

Since both the Shopping and Warehouse divisions will need to check on the status of an order at various times, there will be a shared function to check status.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShippingStatus `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShippingStatus

The function "OrderShippingStatus" was created successfully from the 
"HTTPTrigger" template.

Code Cleanup

We’ll do a bit of code cleanup before moving on.

Choose GET or POST

If you look at the code, you’ll notice that, by default, the Functions were created supporting both GET and POST.

public async Task<IActionResult> Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
   HttpRequest req
...

We can fix that by removing either "get" or "post" from each function as appropriate. In our case, the first two operations (CreateOrder and OrderShipped) will be POST-only and OrderShippingStatus will be GET-only.
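For example, here is a minimal sketch of the CreateOrder trigger after this cleanup, with only the "get" entry removed and everything else left as generated (your generated file may differ slightly):

[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]
    HttpRequest req,
    ILogger log)
...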

Organizing the Code

The func calls above will create all the Function files in the top folder. We’ll move ours into a Functions folder to keep things cleaner. They all just happened to start with “O”, so we can be terse.

C:\dev\Bmazon> mkdir Functions; mv O*.cs Functions\

Add OpenAPI Document Generation

In order to add OpenAPI to Azure Functions, I chose to use the Swashbuckle library. There are a few other libraries out there that work with .NET and OpenAPI, but I chose Swashbuckle because I’m familiar with it.

Installing the Package

The core Swashbuckle project doesn’t support Azure Functions directly, so I used the AzureExtensions.Swashbuckle package, a nice extension written by Vitaly Bibikov.

To install it:

C:\dev\Bmazon> dotnet add package AzureExtensions.Swashbuckle

  Determining projects to restore...
  Writing C:\Users\XXX\AppData\Local\Temp\tmp69AA.tmp
info : Adding PackageReference for package 'AzureExtensions.Swashbuckle' into project 'C:\dev\Bmazon\Bmazon.csproj'.
info : Restoring packages for C:\dev\Bmazon\Bmazon.csproj...
...
...
info : Committing restore...
info : Generating MSBuild file C:\dev\Bmazon\obj\Bmazon.csproj.nuget.g.props.
info : Writing assets file to disk. Path: C:\dev\Bmazon\obj\project.assets.json
log  : Restored C:\dev\Bmazon\Bmazon.csproj (in 525 ms).

Setting up Swashbuckle

In order to configure Swashbuckle, your Functions App needs a Functions Startup class like the following, which we’ll put in Startup.cs in the Bmazon folder.

using System.Reflection;
using AzureFunctions.Extensions.Swashbuckle;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(Bmazon.Startup))]
namespace Bmazon
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      builder.AddSwashBuckle(Assembly.GetExecutingAssembly());
    }
  }
}

Exposing OpenAPI Endpoints

Your code will also need to expose the OpenAPI JSON and UI endpoints as HTTP-triggered Azure Functions so that client code can load them on demand.

(Adding them in a single OpenApi\OpenApiFunctions.cs file for now)

using System.Net.Http;
using System.Threading.Tasks;
using AzureFunctions.Extensions.Swashbuckle;
using AzureFunctions.Extensions.Swashbuckle.Attribute;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

namespace Bmazon.OpenApi
{
  public static class OpenApiFunctions
  {
    [SwaggerIgnore]
    [FunctionName("OpenApiJson")]
    public static Task<HttpResponseMessage> RunJson(
        [HttpTrigger(
            AuthorizationLevel.Anonymous, 
            "get", Route = "openapi/json")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerJsonDocumentResponse(req));
    }

    [SwaggerIgnore]
    [FunctionName("OpenApiUI")]
    public static Task<HttpResponseMessage> RunUi(
        [HttpTrigger(
            AuthorizationLevel.Anonymous, 
            "get", 
            Route = "openapi/ui")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      // CreateSwaggerUIResponse generates the HTML page from the JSON results
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerUIResponse(req, "openapi/json"));
    }
  }
}

This sets up two new Functions on the openapi/json and openapi/ui URLs to load the JSON document and the Swagger UI, respectively. The [SwaggerIgnore] attribute causes Swashbuckle to exclude these API methods from document generation.

Generate and View the API Documentation

NOTE: You must have the Azure Storage Emulator or Azurite RUNNING locally in order for this to work properly.

C:\dev\Bmazon> func start
Microsoft (R) Build Engine version 16.8.3+39993bd9d for .NET
Copyright (C) Microsoft Corporation. All rights reserved.

  Determining projects to restore...
  Restored C:\dev\Bmazon\Bmazon.csproj (in 840 ms).
  Bmazon -> C:\dev\Bmazon\bin\output\bin\Bmazon.dll

Build succeeded.

Time Elapsed 00:00:05.60

Azure Functions Core Tools
Core Tools Version:       3.0.3284 Commit hash: 98bc25e668274edd175a1647fe5a9bc4ffb6887d
Function Runtime Version: 3.0.15371.0

[2021-02-27T15:05:33.871Z] Found C:\dev\Bmazon\Bmazon.csproj. Using for user secrets file configuration.

Functions:

  CreateOrder: [POST] http://localhost:7071/api/order
  OpenApiJson: [GET] http://localhost:7071/api/openapi/json
  OpenApiUi: [GET] http://localhost:7071/api/openapi/ui
  OrderShipped: [POST] http://localhost:7071/api/order/shipment
  OrderShippingStatus: [GET] http://localhost:7071/api/order/shipment/{id}

For detailed output, run func with --verbose flag.
[2021-02-27T15:05:41.693Z] Host lock lease acquired by instance ID '000000000000000000000000016514FF'.

If you don’t see that last line after a few seconds, you probably don’t have the storage emulator running.

Take note of the list of functions shown with the URLs next to them, especially the ones starting with “OpenApi”.

If you visit the OpenApiUI URL listed above, you will see the following in your browser:

Rendered Swagger UI displaying the 3 created Operations

That’s it! You now have a modernized serverless architecture with APIs that are documenting themselves!

If you add any new Functions, they will automatically show up here as well. Your clients can download from the JSON endpoint and import the definitions into Postman or client generators.

Get the completed code from GitHub

Next Steps

Now that you have self-documenting APIs, you may notice that the information in the Swagger UI is rather underwhelming. Due to the nature of Azure Functions, Swagger can glean very little from the runtime type information available to it.

In part two of the series, I will show you how to make the documentation MUCH better.


Introduction

PowerShell DSC is possibly one of the most potent configuration managers for Windows Operating Systems. Even with the increase in popularity of Ansible and SaltStack, PowerShell DSC remains supreme when enforcing the desired state on a Windows VM. Ansible itself has included the win_dsc module, which allows Ansible to run PowerShell DSC. In this blog post, we will dive deeper into one of PowerShell DSC’s most powerful aspects, 3rd Party Resources, and how they interact with Azure Automation.

3rd Party Resources are PowerShell DSC modules created by the community. Any PowerShell community member can create modules, and there are tons of modules out there to choose from. Modules are kept in repositories, the best-known (and default) PowerShell repository being the PowerShell Gallery, run by Microsoft. This is a common repository for PowerShell modules deployed by the community, and modules in the PSGallery can be downloaded and installed with the PowerShellGet module.

As developers and infrastructure engineers, there are many different reasons to script various services you are creating. Often, instead of developing behavior or scripts from scratch, it is much easier to leverage the work that others have done to expedite a task’s completion. 3rd Party Modules allow for easily repeatable code that can become production-ready through collaboration.

Often, DSC configuration can become complicated. Engineers can be asked to do many things, from creating an Azure AD domain to configuring OMS solutions associated with a VM, and even integrating with non-native Azure products such as Splunk.

These may all seem very daunting, but don’t fret! Members of the PowerShell community have dealt with these problems and many others, and often you will find third party modules to help do the work for you.

Here is an example of a Third Party Resource, named ActiveDirectoryDsc, which helps with the promotion, configuration, and management of Active Directory.

Azure Automation is a robust PaaS offering from Azure that allows for a cloud-based DSC pull server. Within Azure Automation, it is possible to add both custom modules that the user develops and third-party modules available in any hosted source.
⚠ Note that organizations in locked-down environments can manage their own repository of PowerShell modules that has been vetted by their InfoSec team. It is possible to deploy your own artifact repository using the Azure DevOps product shown here, which allows an internal team to publish its own versions of packages that you can then use as your URI references.
⚠ There are a few ways to upload modules to the Azure Portal natively. You can upload manually through the portal, as shown in this picture:

Uploading modules to the Azure Portal

However, being DevOps Engineers, we want to automate this process as much as possible. One way to do this is via ARM Templates, like the ones we used in the previous module.
Below is an example of how to add a 3rd party module to your Azure Automation Account via ARM Templates:

{
  "name": "[concat(parameters('automationAccountName'), '/', parameters('name'))]",
  "type": "Microsoft.Automation/automationAccounts/modules",
  "apiVersion": "2015-10-31",
  "location": "[resourceGroup().location]",
  "properties": {
    "isGlobal": false,
    "sizeInBytes": 0,
    "contentLink": {
      "uri": "uri to package"
    }
  }
}

If you are deploying from the PowerShellGallery, your Uri would look something like this:

"uri": "[concat('https://www.powershellgallery.com/api/v2/package/', parameters('name'), '/', parameters('version'))]"

Alternatively, you can script the import of modules using the New-AzAutomationModule cmdlet in a PowerShell script.

Oddly enough, there is sometimes some difficulty determining the correct ContentUri to use in both the ARM and PowerShell cases. You can find the correct one by navigating to the module’s page in the PowerShell Gallery, adding /api/v2 to the URL, and replacing packages (plural) with package (singular).
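For example (using an illustrative version number), the ActiveDirectoryDsc page at https://www.powershellgallery.com/packages/ActiveDirectoryDsc/6.0.1 would become https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/6.0.1 as the ContentUri.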

Add the /api/v2 to a URL

Conclusion

3rd Party Modules are a great way for developers to speed up development and productivity. If you are inclined to help in the development of these modules, head over to GitHub and contribute!

In January, AIS’ Steve Suing posted a great article titled “What to Know When Purchasing a COTS Product and Moving to the Cloud.” In his post, Steve discusses some things to consider such as security, automation, and integration. Springboarding off of those topics for this blog, I am going to discuss another major consideration: legacy data.

Many COTS implementations are done to modernize a business function that is currently being performed in a legacy application. There are some obvious considerations here. For example, when modernizing a legacy app, you might ask:

  • How much of the business need is met purely by the out-of-the-box functionality?
  • How much effort will be necessary to meet the rest of your requirements in the form of configuration and customization of the product or through changing your business process?

However, legacy data is a critical consideration that should not be overlooked during this process.

Dealing with Data When Retiring a Legacy System

If one of the motivators for moving to a new system is to realize cost savings by shutting down legacy systems, the legacy data likely needs to be migrated. If there is a considerable amount of data and a long legacy history, the effort involved in bringing that data into the new product should be carefully considered. This might seem trivial, with most products at least proclaiming to have a multitude of ways to connect to other systems, but recent experience has shown me that the cost of not properly verifying the level of effort can far exceed the savings provided by a COTS product. These types of questions can help avoid a nightmare scenario:

  1. How will the legacy data be brought into the product?
  2. How much transformation of data is necessary, and can it be done with tools that already exist?
  3. Does the product team have significant experience with similar legacy systems (complexity, amount of data, industry space, etc.) moving to the product?

The first and second conversations usually start by sitting down and talking to the product team and having them outline the common processes that take place to bring legacy data into the product. During these conversations, be sure both sides have technical folks in the room. This will ensure the conversations have the necessary depth to uncover how the process works.

Once your team has a good understanding of the migration methods recommended by the product team, start exploring some actual use cases related to your data. After exploring some of the common scenarios to get a handle on the process, jump quickly into some of the toughest use cases. Sure, the product might be able to bring in 80% of the data with a simple extract, transform, load (ETL), but what about that other 20%? Often legacy systems were created in a time long, long ago, and sometimes functionality will have been squished into them in ways that make the data messy. Also, consider how the business processes have changed over time and how the migration will address legacy data created as rules changed over the history of the system. Don’t be afraid to look at that messy data and ask the tough questions about it. This is key.


Your Stakeholders During Legacy Data Migration

To decrease confirmation bias (the tendency to search for, interpret, and focus on information that confirms this is your silver bullet), be sure those involved are not too invested in the product; that way, they will ask the hard questions. Bring in a good mix of technical and business thought leaders. For instance, challenge the DBAs and SMEs to work together to come up with things that simply will not work. Be sure to set up an environment in which people are rewarded for coming up with blockers and not demotivated by being seen as difficult. Remember, every blocker you come up with in this early phase could be the key to making better decisions and will likely have a major downstream impact.

The product of these sessions should be a list of tough use cases. Now it’s time to bring in the product team again. Throw the use cases up on a whiteboard and see how the product team works through them. On your side of the table, be sure to include the people who will be responsible for the effort of bringing the new system to life. With skin in the game, these people are much less likely to allow problems to be glossed over, and they will drive a truly realistic conversation. Because this involves the tough questions, the exercise will likely take multiple sessions. Resist any pressure to get this done quickly, keeping in mind that a poor decision now can have ripple effects that last for years.

After some of these sessions, both sides should have a good understanding of the legacy system, the target product, and some of the complexities of meshing the two. With that in place, ask the product team for examples of similar legacy systems they have tackled. This question should be asked up front, but it really cannot be answered intelligently until at least some of the edge cases of the legacy system are well understood. While the product team might not be able to share details due to non-disclosure agreements, they should be able to speak in specific enough terms to demonstrate experience. Including those involved in the edge case discovery sessions is a must. With the familiarity gained during those sessions, those are the people that are going to be most likely to make a good call on whether the experience being presented by the product team relates closely enough to your needs.

Using Lessons Learned to Inform Future Data Migration Projects

Have I seen this process work? Honestly, the answer is no. My hope is that the information I have presented, based on lessons learned from past projects, helps others avoid some of the pitfalls I have faced and the ripple effects that could lead to huge cost overruns. It’s easy to lose sight of the importance of asking hard questions up front and continuing to do so in light of the pressure to get started. If you feel you are giving in to such pressure, just reach out to future you and ask them for some input.

And a final thought. As everyone knows, for complex business processes, COTS products don’t always perfectly align to your legacy system. There are two ways to find alignment. Customizing the system is the obvious route, although it often makes more sense to change business processes to align with a product you are investing in rather than trying to force a product to do what it was not designed for. If you find that this is not possible, perhaps you should be looking at another product. Keep in mind that if a product cannot be found that fits your business processes closely enough, it might make more financial sense to consider creating your own system from the ground up.

Sound Familiar?

It’s not a sentiment you would expect from most IT decision-makers. However, it’s something we hear from an increasing number of organizations.

The benefits of a well-thought-out cloud transformation roadmap are not lost on them.

  • They know that, in an ideal world, they ought to start with an in-depth assessment of their application portfolio, in line with the best practice – “migrate your capabilities, not apps or VMs”.
  • They also realize the need to develop a robust cloud governance model upfront.
  • And ultimately, they understand the need to undertake an iterative migration process that takes into account “organizational change management” best practices.

At the same time, these decision-makers face real challenges with their existing IT infrastructure that simply cannot wait months and years for a successful cloud transformation to take shape. They can’t get out of their on-premises data centers soon enough. This notion isn’t limited to organizations with fast-approaching Data Center (DC) lease renewal deadlines or end of support products, either.

So, how do we balance the two competing objectives:

  • Immediate need to move out of the DC
  • Carefully crafted long-term cloud transformation

A Two-Step Approach to Your Cloud Transformation Journey

From our experience with a broad range of current situations, goals, and challenges, we recommend a two-step cloud transformation approach that addresses both your immediate challenges and the organization’s long-term vision for cloud transformation.

  1. Tactical “Lift-n-Shift” to the Cloud – As the name suggests, move the current DC footprint as is (VMs, databases, storage, network, etc.) to Azure
  2. Strategic Cloud Transformation – Once operational in the cloud, incrementally and opportunistically move parts of your application portfolio to higher-order Azure PaaS/cloud-native services

Tactical “Lift-n-Shift” to the Cloud

Lift n Shift Approach to Cloud Transformation

On the surface, step #1 above may appear wasteful. After all, we are duplicating your current footprint in Azure. But keep in mind that step #1 is designed for completion in days or weeks, not months or years. As a result, the duplication is minimized. At the same time, step #1 immediately puts you in a position to leverage Azure capabilities, giving you tangible benefits with minimal to no changes to your existing footprint.

Here are a few examples of benefits:

  • Improve the security posture – Once you are in Azure, you tap into security capabilities such as intrusion detection and denial-of-service attack protection simply by being in Azure. Notice that I deliberately did not cite Security Information and Event Management (SIEM) tools like Azure Sentinel, since technically you can take advantage of Azure Sentinel for on-premises workloads as well.
  • Replace aging hardware – Your hardware may be getting old but isn’t old enough for a Capex-powered refresh. Moving your VMs to Azure decouples you from the underlying hardware. “But won’t that be expensive, since you are now paying by usage per minute?” you ask. Not necessarily and certainly not in the long run. Consider options like Reserved Instance (RI) pricing that can offer an up to 80% discount based on a one- or three-year commitment.

Furthermore, you can combine RI with Azure Hybrid Benefits (AHUB) which provides discounts for licenses already owned. Finally, don’t forget to take into account the savings from decreased needs for power, networks, real estate, and the cost of resources to manage all the on-premises assets. Even if you can’t get out of the DC lease completely, you may be able to negotiate a modular reduction of your DC footprint. Please refer to Gartner research that suggests that over time, the cloud can become cost-effective.

AMP Move out of Data Center

Source – https://blogs.gartner.com/marco-meinardi/2018/11/30/public-cloud-cheaper-than-running-your-data-center/

  • Disaster Recovery (DR) – Few organizations have a DR plan set up that is conducive to ongoing DR tests. Having an effective DR plan is one of the most critical responsibilities of IT. Once again, since geo-replication is innate to Azure, your disks are replicated to an Azure region that is at least 400 miles away by default. Given this, DR is almost out of the box.
  • Extended lease of life for out-of-support software – If you are running software such as Windows Server 2008 or SQL Server 2008, moving to Azure extends the security updates for up to three years from the “end of support” date.
  • Getting out of the business of “baby-sitting” database servers – Azure managed instances offer you the ability to take your existing on-premises SQL Server databases and move them to Azure with minimal downtime. Once your database is an Azure SQL Managed Instance, you don’t have to worry about patching and backup, thereby significantly reducing the cost of ownership.
  • Take baby steps towards automation and self-service – Self-service is one of the key focus areas for most IT organizations. Once again, since every aspect of Azure is API driven, organizations can take baby steps towards automated provisioning.
  • Get closer to a data lake – I am sure you have heard the quote “AI is the new electricity.” We also know that Artificial Intelligence (AI) needs lots and lots of data to train Machine Learning (ML) algorithms. By moving to Azure, it is that much easier to capture the “data exhaust” coming out of your applications in a service like Azure Data Lake. In turn, Azure Data Lake can help turn this data into intelligence.

Strategic Cloud Transformation

Strategic Cloud Transformation

Once you have completed step #1 by moving your on-premises assets to the cloud, you are now in a position to undertake continuous modernization efforts aligned to your business priorities.

Common approaches include:

  • Revise – Capture application and application tiers “as-is” in containers and run on a managed orchestrator like Azure Kubernetes Service. This approach requires minimal changes to the existing codebase. For more details of this approach, including a demo, read Migrate and Modernize with Kubernetes on Azure Government.
  • Refactor – Modernize by re-architecting to target Platform as a Service (PaaS) and “serverless” technologies. This approach requires more significant recoding to target PaaS services but allows you to take advantage of cloud provider managed services. For more information, check out our “Full PaaS” Approach to Modernizing Legacy Apps.
  • Rebuild – Complete rewrite of the applications using cloud-native technologies like Kubernetes, Envoy, and Istio. Read our blog, What Are Cloud-Native Technologies & How Are They Different from Traditional PaaS Offerings, for more information.
  • Replace – Substitute an existing application, in its entirety, with Software as a Service (SaaS) or an equivalent application developed using a no-code/low-code platform.


The following table summarizes the various approaches for modernization in terms of factors such as code changes, operational costs, and DevOps maturity.

Compare App Modernization Approaches

Azure Migration Program (AMP)

Microsoft squarely aligns with this two-step approach. At the recent Microsoft partner conference #MSInspire, Julia White announced AMP (Azure Migration Program).

AMP brings together the following:

Wrapping Up

A two-step migration offers a programmatic approach to unlock the potential of the cloud quickly. You’ll experience immediate gains from a tactical move to the cloud and long-term benefits from a strategic cloud transformation that follows. Microsoft programs like AMP, combined with over 200 Azure services, make this approach viable. If you’re interested in learning more about how you can get started with AMP, and which migration approach makes the most sense for your business goals, reach out to AIS today.


PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it’s not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct – Rather than speaking in terms of VMs, storage, firewall rules, etc., cloud-native uses application-specific constructs, whether it is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability – Applications can run on any CNCF-certified cloud, on-premises, and on edge devices. The API surface is exactly the same.
  3. Cost efficiency – By densely packing the application components (or containers) on the underlying cluster, the cost of running an application is significantly lower.
  4. Extensibility model – A standards-based extensibility model allows you to tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic – Cloud-native architecture can support a wide variety of languages and frameworks, including .NET, Java, Node, etc.
  6. Scale your ops teams – Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure. This allows your ops team to scale much more efficiently.
  7. Consistent and “decoupled” – In addition to greater consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments. Furthermore, they help decouple developers from the underlying layers (cluster, kernel, and hardware) shown in the diagram below.
  8. Declarative model – Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing: any deviation from the “desired state” is automatically “healed” by the underlying system. Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum – As stated earlier, the community momentum behind CNCF is unprecedented. Kubernetes is the #1 open source project in terms of contributions. In addition to Kubernetes and Prometheus, there are close to 500 projects that have collectively attracted over $5B of venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production has gone up by 200% since December 2017.
  10. Ticket to DevOps 2.0 – Cloud-native combines the well-recognized benefits of what is being termed “DevOps 2.0”: hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.

Now that we understand the key benefits of cloud-native technologies, let us compare them to a traditional PaaS offering:

Attribute | Traditional PaaS | Cloud-Native as a Service
Portability | Limited | Advanced
Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs, including Helm, network and security policies
Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention)
Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (however, cloud-native applications can be integrated with PaaS database offerings through the service catalog)
Extensibility | Limited | Advanced (extensibility includes the Container Network Interface and Container Runtime Interface)

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of CNCF, having joined back in 2017 as a platinum member. Since then, they have made significant investments in a CNCF-compliant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering; think of AKS as a PaaS solution that is also CNCF compliant.

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, Azure Files, etc. Finally, offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience when working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also thinking around managing IoT Edge deployments right from within the Kubernetes environment. Finally, “lift and shift to containers” is getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, “A ‘Modernize-by-Shifting’ App Modernization Approach,” for more details!

Cloud-Native Scenarios


Since PaaS and Windows Azure have both been in the news recently, thanks to a favorable report from Gartner, I thought it might be good timing to revisit a whitepaper I wrote on PaaS, and in particular to look at the differences between Azure IaaS, PaaS, and SaaS.

There’s been a lot of talk about the different cloud-based services available today, including Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). While each of these services is unique, PaaS stands out from the mix. This is not to suggest that PaaS is somehow better than IaaS; that would be an improper comparison. Before we discuss any further, it’s important to answer the question: what is PaaS in cloud computing? The diagram below can help demonstrate how PaaS simply builds on the capabilities offered by IaaS. For a more in-depth breakdown, see our blog post about all the PaaS options available.

But if you are a developer, IT shop or an ISV responsible for building, deploying and maintaining solutions, leveraging PaaS is where you reap the maximum cloud-computing benefits. Read More…

AIS will be presenting a new series of Azure ‘n’ Action Café webinars in early 2014. We will have expert speakers offering new and exciting content in a convenient “lunch and learn” online format, one Wednesday a month, from 12 p.m. to 1 p.m. EST.

Our first presentation date is Wednesday, January 29. You won’t want to miss this session, as we’ll cover both updates to Azure and new ways to extend your data center.

In this session we will share an example Platform as a Service (PaaS) Web Role workload in Azure.  This workload will help you understand how Azure constructs can be used in support of different scenarios.

The workload we’ll discuss makes use of the following Constructs:

  • A private data center extended into multiple Azure Regional Data Centers (East Coast / West Coast) with IPSEC tunnels
  • The on-premises AD / DNS infrastructure extended into Azure IaaS
  • An on-premises SQL Server 2012 database supporting a High Availability Group (synchronous) with a local primary and a remote secondary located in Azure IaaS
  • The PaaS Web Role Load Balanced across data centers using Azure Traffic Manager

Click here to register!

Yesterday, Microsoft announced the general availability of its offering of Infrastructure as a Service (IaaS). They join an already-crowded market of IaaS providers, but this offering gives all companies the ability to offload workloads that have traditionally run in a company data center to the cloud. Welcome, Microsoft — the water is fine.

This announcement also represents a major chunk of Microsoft’s family of Azure offerings…and in my opinion, a stepping stone many companies simply must take in moving out of the traditional data center and into the cloud.  The following diagram shows the stepping stones out of the traditional data center:

Read More…