In a previous article, we explored using the CSI Secrets Store Driver and Azure Provider to mount an HTTPS certificate stored in Azure Key Vault on pods deployed to Azure Kubernetes Service (AKS). This was done using Linux containers and an ASP.NET Core application. But what about .NET Framework applications on Windows containers? Let’s see if we can take the same approach.

About Windows Containers

Containers initially grew out of the Linux world based on its natural support for container isolation using cgroups and namespaces. Therefore, Linux containers are usually the preferable choice when possible, such as for apps written on the cross-platform .NET Core 3.1 or .NET 6.0. However, many organizations have a significant investment in .NET Framework applications, which must run on a Windows OS. In this case, you may use Windows containers to deploy these applications to AKS, leveraging an organization’s investments in both existing applications and Kubernetes.

A few important things to note about Windows containers in AKS:

  1. The kubenet networking option is not supported on Windows nodes; therefore, you must use Azure CNI for the cluster. This requires additional planning and a larger range of IP addresses.
  2. The first nodepool in AKS runs system services and must use Linux nodes; therefore, creating a cluster for Windows containers means first creating the cluster with its Linux nodepool and then adding a Windows nodepool.
  3. Your container image Windows OS version must be compatible with the node OS version. As of writing, AKS nodes are created with Windows Server 2019, so the tag to use for the .NET Framework parent images is 4.8-windowsservercore-ltsc2019.
    See the AKS Windows FAQ for more information.

Using the CSI Secrets Store Driver for Windows Containers

Fortunately, AKS supports Container Storage Interface (CSI) drivers on Windows containers. Windows containers are also supported by the same CSI Secrets Store Driver and Azure Provider that we used for Linux containers and ASP.NET Core.
However, Windows support is not enabled by default if you install using the Helm charts; you need to set the following configuration overrides to true (an example install command follows the list):

  • windows.enabled
  • secrets-store-csi-driver.windows.enabled
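
For example, if you install with Helm, the overrides might be passed like this. This is a sketch based on the Azure provider's published chart; the repository URL, release name, and namespace are assumptions to verify against your environment:

helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
  --namespace kube-system \
  --set windows.enabled=true \
  --set secrets-store-csi-driver.windows.enabled=true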

Once we have the driver and provider installed on our cluster, we can mount certificates stored in Azure Key Vault as files on our Windows container pods in AKS, just as we did for Linux containers.
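
As a reminder of what that looks like, here is a minimal SecretProviderClass sketch for pulling a PFX certificate from Key Vault. The resource name, vault name, tenant ID, and object names are assumptions, and identity configuration (pod identity or managed identity settings) is omitted for brevity:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aks-https-cert                # hypothetical name, referenced by the pod volume
spec:
  provider: azure
  parameters:
    keyvaultName: "my-keyvault"       # assumption: your Key Vault name
    tenantId: "<tenant-guid>"         # assumption: your Azure AD tenant
    objects: |
      array:
        - |
          objectName: my-https-cert   # assumption: certificate name in Key Vault
          objectType: secret          # "secret" retrieves the full PFX including the private key
          objectFormat: pfx
          objectAlias: https-cert.pfx # file name the provider writes to the mount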

The diagram below represents the flow from Key Vault to a pod and a volume mount on the container:


Configuring ASP.NET With HTTPS in Windows Containers

ASP.NET Core applications on Linux containers use the Kestrel web server, and it is easy to point Kestrel at the HTTPS certificate through either configuration or code. But ASP.NET applications running on Windows use IIS as the web server. How does this work on Windows containers, and how can we configure IIS in the container to use our mounted HTTPS certificate?

Looking at the Dockerfile used to create the .NET Framework ASP.NET image gives us a clue with this line:

ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]

Its entry point uses the IIS Service Monitor app to run the IIS World Wide Web Publishing Service  (w3svc).

So in our application’s Dockerfile we could set a new entry point that calls a script that:

  1. Installs the mounted certificate file into the Windows certificate store.
  2. Configures IIS to use HTTPS with the imported certificate.
  3. Starts the ServiceMonitor.exe process.

Here is a PowerShell example that expects the HTTPS_CERTIFICATE_PATH environment variable to be set with the certificate path:

$certFilePath = $env:HTTPS_CERTIFICATE_PATH
Write-Host "Importing HTTPS certificate $certFilePath"
$cert = Import-PfxCertificate -FilePath $certFilePath -CertStoreLocation Cert:\LocalMachine\My

Write-Host "Creating HTTPS Binding"
New-WebBinding -Name "Default Web Site" -IP "*" -Port 443 -Protocol https

Write-Host "Binding Certificate to HTTPS Binding"
Set-Location IIS:\SslBindings
$cert | New-Item 0.0.0.0!443

Write-Host "Starting Service Monitor"
C:\ServiceMonitor.exe w3svc

Then, in our application’s Dockerfile, we copy in our startup script and set the ENTRYPOINT to call it, for example:

COPY ./Bootstrap-IIS.ps1 ./
ENTRYPOINT ["powershell.exe", "./Bootstrap-IIS.ps1"]

Finally, we can set the HTTPS_CERTIFICATE_PATH environment variable in our Kubernetes YAML to match the mount point and file name we configure the driver and volume mount to use.
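
Here is a sketch of the relevant deployment fragment; the container name, mount path, file name, and SecretProviderClass name are assumptions and must match whatever you configured for the driver (they line up with the SecretProviderClass sketched earlier):

spec:
  containers:
    - name: aspnet-app                        # hypothetical container name
      env:
        - name: HTTPS_CERTIFICATE_PATH
          value: 'C:\secrets\https-cert.pfx'  # mount path + objectAlias file name
      volumeMounts:
        - name: keyvault-certs
          mountPath: 'C:\secrets'
          readOnly: true
  volumes:
    - name: keyvault-certs
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: aks-https-cert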

Sample Repo

For a complete example with setup, configuration, and deployment instructions, see the aks-csi-keyvault-certs-win repo in GitHub.

From Pivotal Cloud Foundry (PCF) to TAS

Tanzu Application Service (TAS), previously known as Pivotal Cloud Foundry (PCF), is an open-source, scalable platform as a service (PaaS) that allows users to manage Kubernetes-backed container services. TAS is designed to manage containers, Kubernetes, and microservices. An advantage of using it with a government client is that there is a solid push to adopt this technology as a standard for Kubernetes-backed services. Letting TAS manage the infrastructure means users do not have to worry about building configuration files for Kubernetes, which leaves more time to spend developing the product for the customer.

What Are the Prerequisites?

  • The application to be hosted on TAS needs to be “cloud-ready.”
    • TAS has specific buildpacks built into it.
    • Additional URL-backed buildpacks can be used if the one needed is not built into TAS.
  • TAS must support the application’s technology.
  • Credentials to the TAS service.
  • Application services (PostgreSQL, Redis, etc.) are available if the application needs to utilize them.
  • Access to the Organization and Space that your administrator set up.
  • The latest version of the Cloud Foundry Command Line Interface (cf-cli) installed on the machines that push apps to TAS. cf-cli offers many commands to maintain and push your applications; this article uses a few key commands to get started.

For more information, see the Cloud Foundry cf-cli docs: https://docs.cloudfoundry.org/cf-cli/

How Do I Log into TAS?

Utilizing cf-cli, you can log into the organization and space that you or your administrator set up. The `cf login -a api-url` command handles this and will prompt for credentials, org, and space if they are required. Alternatively, flags can be passed to the command to provide credentials, the org you belong to, and the space where you would like your code pushed. The -a flag specifies the URL where TAS is located. Your administrator may also have set up two-factor authentication; in that case, once the login command is entered, a message will display a URL where you can get a code to supply to the command line and finish logging in. Once you have finished logging in, you are ready to push your first application.
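
A hypothetical login might look like this (the API URL, user, org, and space names below are placeholders):

cf login -a https://api.sys.example.com -u jane.doe -o my-org -s dev
# or, if your administrator enforces single sign-on / two-factor authentication:
cf login -a https://api.sys.example.com --sso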

Pushing Your Application into TAS

Install and Manage Your Applications in TAS

TAS applications can be managed in two different ways: with the Apps Manager user interface (if enabled by your administrator) or with cf-cli. The command line is the most versatile way of managing your application after it is set up, and cf-cli must be installed on the machine used to push applications to TAS. If your administrator allows, you can create Spaces and Orgs yourself using cf-cli or the web user interface. The structure needed for an application to run in TAS is as follows:
  • Access to the TAS API (credentials and/or 2FA).
  • cf-cli installed on the machines that will be performing maintenance and installations.
  • Access to a company Organization (Org) in TAS.
  • Access to the Space in the Org.
  • Ability to bind Services to the application that you are installing.
  • Use of the built-in buildpacks, or the ability to add your own if the one you need is not built in.

After logging into the org and space in the TAS instance you have access to, you can utilize cf-cli to push your app package to the space.

In this example, we are pushing a Spring Boot application to TAS. We need a manifest.yml file to exist in the root of the project files.


This manifest file is an application deployment file that contains the information needed to deploy the application to the TAS instance. The file always starts with three dashes (---). Critical attributes are used explicitly in the manifest to describe deployment needs. We use a select few here (a sample manifest follows the attribute list below); the rest can be found in the App Manifest Attribute Reference (https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html).


  • The applications attribute describes the application(s) you want to deploy to TAS.
  • The name attribute is the name of your app to be deployed.
  • The path is the target of your build package for your app.
  • The memory attribute specifies how much memory to allocate for your app when running.
  • The disk_quota attribute specifies how much disk space to allocate for your app when running.
  • The instances attribute specifies how many instances of your app should be running. This can be scaled via the user interface or through cf-cli after the app is deployed.
  • The buildpacks attribute specifies which buildpack to use when deploying your app. In this instance, we are building a Spring Boot application and have selected the java buildpack.
  • The env attribute specifies any environment variables that are needed for build or run time.
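
Here is a minimal sketch of such a manifest for a hypothetical Spring Boot app; the name, path, sizes, buildpack name, and environment variable are assumptions to adapt to your own application:

---
applications:
  - name: sample-spring-app                  # hypothetical app name
    path: target/sample-spring-app-1.0.0.jar # built Spring Boot artifact
    memory: 1G
    disk_quota: 1G
    instances: 2
    buildpacks:
      - java_buildpack
    env:
      SPRING_PROFILES_ACTIVE: cloud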

Once your manifest.yml file is created, you are ready to push your app; use the -f flag in the push command to specify where the manifest is located. The `cf push` command will pick up the manifest.yml file you specified and push the app to the org and space you are logged into. If you do not want the app to start right after pushing, use the `--no-start` flag in the command; you may not want the app to auto-start after the push if you need to bind a service to it before it runs. Binding can be done via the TAS UI or via the command line, and the docs provide more information on how to bind services. After setup is complete and the app is pushed, you can run it with `cf start your-app-name`.
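
Putting it together, a hypothetical push sequence (app and service names are placeholders) could look like:

cf push -f manifest.yml --no-start
cf bind-service sample-spring-app my-postgres   # optional: bind a service before the first start
cf start sample-spring-app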

After logging into the Apps Manager for your Org, your spaces will be displayed.

Spaces Displayed in App Manager

Under one of those spaces will be the app you pushed via cf-cli. The right-most column displays the route that was created for your app; this is the published URL where your users will access your application.

Route Created for Your Application

Once you open the app you pushed in the Apps Manager user interface, all the info about your running app is displayed. From here, you can manage your application.

Manage Your Application with your User Interface

Let’s Wrap Up

With TAS, you can migrate running applications onto a cloud platform that can run and scale them at any time. It provides the back-end clustering capabilities that many applications need in a cloud environment without the user having to set them up. Its open-source code enables companies to offer their own ‘in-house’ service for their engineers to use for their customers.

In the previous series of articles (Part 1, Part 2, Part 3), we went over how to create self-documenting Azure Functions with C# and OpenAPI, generating the OpenAPI specs automatically. At the time, this was done with the then-current LTS versions, which meant Azure Functions V3 and .NET Core 3.1.

Since then, .NET 6 and Azure Functions V4 have been released. Along with those, another very useful package has also been released: Microsoft.Azure.WebJobs.Extensions.OpenApi. This package removes the need for many of the extra steps taken in the previous articles.

In this article, I will walk you through the process of upgrading our “Bmazon” application to support .NET 6 and convert our OpenAPI document generation to use the new NuGet package.

The new OpenAPI Package

The Microsoft.Azure.WebJobs.Extensions.OpenApi package incorporates a lot of logic that we did in a custom way in the previous posts. Here is a quick comparison of our old approach with this new package:

Basic Configuration
  • Old: Previously, we used a call to builder.AddSwashBuckle() in the Startup.cs file.
  • New: Works automagically, but it is also customizable by implementing an interface (see below).

API Endpoints for UI / Document Download
  • Old: Create custom Azure Functions to handle UI and JSON document downloads.
  • New: Works automagically.

Document Parameters/Return Values
  • Old: Combination of XML documentation and attributes from disparate namespaces.
  • New: OpenAPI attributes.

Schema/Validation
  • Old: Use Data Annotations.
  • New: Use Data Annotations or JsonProperty attributes.

Create Separate Documents
  • Old: Lots of custom startup code.
  • New: Not supported. Not really the best approach anyway, since it doesn’t restrict users. Better to use separate Function apps or an API gateway like Azure API Management.

Learn more about integrating this with Azure API Management

Update Frameworks

Now, we need to update to the latest framework versions (.NET 6 and Functions v4).

Remove global.json

First, if you were using a global.json file to lock things to the .NET Core 3.1 SDK, go ahead and delete it now. Alternatively, you could update the contents to lock to version 6, as in the sketch below.
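
If you keep the file, a minimal global.json locked to .NET 6 might look like this (the exact SDK version number is an assumption; use whichever 6.0.x SDK you installed):

{
  "sdk": {
    "version": "6.0.100",
    "rollForward": "latestFeature"
  }
}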

Download the SDKs

You’ll need to download and install the .NET 6 SDK and the Azure Functions Core Tools v4.
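
The .NET 6 SDK is available from https://dotnet.microsoft.com/download/dotnet/6.0, and one common way to get the v4 Core Tools is via npm (an assumption for illustration; the standalone installers work just as well):

> npm install -g azure-functions-core-tools@4 --unsafe-perm true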

Update the Project

Next, you need to update your Bmazon.csproj file to target the new SDKs.

Remove XML documentation

Unless you need it for another reason, you can remove the following line from the csproj file:

<DocumentationFile>Bmazon.xml</DocumentationFile>

Old Frameworks

<PropertyGroup>
  <TargetFramework>netcoreapp3.1</TargetFramework>
  <AzureFunctionsVersion>v3</AzureFunctionsVersion>
  ...
</PropertyGroup>

New Frameworks

<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  ...
</PropertyGroup>

Then you need to change your package references as well. We also need to add Microsoft.Azure.Functions.Extensions explicitly because it used to be pulled in as a transitive dependency, but no longer is.

> dotnet remove package AzureExtensions.Swashbuckle
> dotnet add package Microsoft.NET.Sdk.Functions
> dotnet add package Microsoft.Azure.Functions.Extensions
> dotnet add package Microsoft.Azure.WebJobs.Extensions.OpenApi

This will update the packages to the latest versions and install the new OpenAPI package.

Now we’re ready to update the code.

Update Startup.cs

Configuration is handled differently with the new package, so the entire builder.AddSwashBuckle method call should be removed. Your Startup class should look like this now:

public class Startup : FunctionsStartup
{
  public override void Configure(IFunctionsHostBuilder builder)
  {
    builder.Services.AddSingleton<OrderService>();
  }
}

Update OpenAPI Config

First, the new package automatically creates the Azure Functions for the API endpoints for you, so you can just delete the OpenApiJson and OpenApiUi files from the “OpenApi” folder.

Once those files are removed, you need to add a new class that implements the IOpenApiConfigurationOptions interface so that we can make some minor configuration changes. The package will find this class at runtime to get its configuration from.

The implementation is quite simple. We are defaulting to OpenAPI V3 and setting the title and API version here. All the rest are effectively defaults.

public class OpenApiConfigurationOptions : IOpenApiConfigurationOptions
{
  public OpenApiInfo Info { get; set; } =
    new OpenApiInfo
    {
      Title = "Bmazon APIs",
      Version = "1.0"
    };

  public List<OpenApiServer> Servers { get; set; } =
    new();

  public OpenApiVersionType OpenApiVersion { get; set; } =
    OpenApiVersionType.V3;

  public bool IncludeRequestingHostName { get; set; } = false;
  public bool ForceHttp { get; set; } = true;
  public bool ForceHttps { get; set; } = false;
}

Using the New Attributes

Since we’ve switched packages away from the Swashbuckle library, we need to change the attributes we use to annotate our Azure Functions. Follow a few simple steps to update your code.

OpenApiOperation

OpenApiOperation is a new attribute that each API Function needs to be decorated with to designate it as an API. This attribute takes the name of the function along with a description and some optional “tags” to be used for categorizing the function. In our case, we will be adding a tag for the same groupings we used before (“Shopping” or “Warehouse”) to designate the target audience.

Here’s an example for the CreateOrder Function:

[OpenApiOperation(
  "CreateOrder",
  tags: new[] { "Shopping" },
  Description = "Creates an Order that ...")]

OpenApiRequestBody

Each instance of RequestBodyType should be replaced with OpenApiRequestBody.

Old (attribute on the HttpRequestMessage parameter):

[RequestBodyType(
  typeof(Order),
  "The Order To Create")]

New:

[OpenApiRequestBody(
  contentType: "application/json",
  bodyType: typeof(Order),
  Description = "The Order To Create")]

OpenApiResponseWithBody

Everywhere your code has ProducesResponseType, it should be replaced with OpenApiResponseWithBody.

Old (description was in the XML comments):

[ProducesResponseType(
  typeof(string),
  StatusCodes.Status200OK)]

New:

[OpenApiResponseWithBody(
    statusCode: HttpStatusCode.OK,
    contentType: "application/json",
    bodyType: typeof(string),
    Description = "Indicates success and returns a user-friendly message")]

OpenApiResponseWithoutBody

Any API methods/scenarios that do not return a body in the response should also have an OpenApiResponseWithoutBody attribute:

[OpenApiResponseWithoutBody(
    statusCode: HttpStatusCode.OK,
    Description = "Indicates success. Returns no payload")]

NOTE: Since the descriptions for the responses are now in these attributes, you can remove all the <response code="XXX"> tags in your XML comments.

Full Example

Here is a comparison of the entire CreateOrder method definition:

Old:

[ProducesResponseType(
  typeof(string),
  StatusCodes.Status200OK)]
[ProducesResponseType(
  typeof(IEnumerable<string>),
  StatusCodes.Status400BadRequest)]
[FunctionName("CreateOrder")]
[ApiExplorerSettings(GroupName = "Shopping")]
public async Task<IActionResult> Run(
  [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
  [RequestBodyType(typeof(Order), "The Order To Create")]
  HttpRequestMessage req,
  ILogger log)
{
  ...
}

New:

[OpenApiOperation("CreateOrder",
  tags: new[] { "Shopping" },
  Description = "Creates an Order that will be shipped to the Warehouse for fulfillment.")]
[OpenApiRequestBody(
  contentType: "application/json",
  bodyType: typeof(Order),
  Description = "The Order To Create")]
[OpenApiResponseWithBody(
  statusCode: HttpStatusCode.OK,
  contentType: "application/json",
  bodyType: typeof(string),
  Description = "Indicates success and returns a user-friendly message")]
[OpenApiResponseWithBody(
  statusCode: HttpStatusCode.BadRequest,
  contentType: "application/json",
  bodyType: typeof(IEnumerable<string>),
  Description = "Indicates a data validation issue and will return a list of data validation errors")]
[FunctionName("CreateOrder")]
public async Task<IActionResult> Run(
  [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "order")]
  HttpRequestMessage req,
  ILogger log)
{
  ...
}

Test It All

Now that things are all updated, you should be able to build the code with dotnet build.

NOTE: If you’re updating your own project, you may want to check the Microsoft documentation on Breaking changes between 3.x and 4.x to be sure there’s nothing else you need to be concerned with.

The new URLs

Once you update everything and run func host start, you should see these new URLs replace the old ones:

Shows the URLs output by the "func host start" process

RenderSwaggerUI

This method renders the default view as configured.

RenderSwaggerDocument

This URL allows you to download the document directly. Just substitute “{extension}” with either of the supported file extensions (json or yaml), and it will download the appropriate file.
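
With the default routes, that ends up being something like http://localhost:7071/api/swagger.json or http://localhost:7071/api/swagger.yaml (treat the exact path as an assumption to confirm against your func host start output).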

RenderOpenApiDocument

If you are looking for a specific OpenAPI version, you can use this URL to do that by adding a supported OpenAPI version (“v2”, “v3”) along with the extension in order to get that specific version. (e.g. http://localhost:7071/api/openapi/v2.yaml)

Conclusion

Now that we’ve gone through this exercise, you are generating Swagger/OpenAPI documents in your Azure Functions in a better way, using a Microsoft-recommended approach.

Happy Coding!


I recently enjoyed being involved in an internal Power Platform modernization hackathon. I want to share my experience to provide insights and ideas that may help others decide whether to participate in or host a hackathon themselves.

What is a Hackathon?

A hackathon is typically a single sprint event where a group of motivated technical folks collaborates intensely to implement and demonstrate the solution to a problem use case chosen by each team. It is a rapid development effort where the solutions are designed and implemented towards the overall goals of the hackathon event. Besides valuable business/account benefits, hackathons are known to be great opportunities for hands-on learning and upgrading technical skillsets.

AIS Internal Hackathon

AIS held an internal Power Platform hackathon in the summer of 2021. The event helped a few motivated AIS team members learn and upgrade their Power Platform skills while applying them to solve practical problem scenarios at one of our major clients. It focused on relevant use cases that could use the many features of Power Platform to close real gaps and solve problems across our client’s enterprise. There were six teams of 4-6 members each. Each team had at least one Power Platform developer and one person familiar with the client’s business domain. A set of senior Power Platform SMEs acted as advisors for the event.

The Case and Team

I had the opportunity to propose two of the six use cases selected for implementation. My team focused on a crucial financial reconciliation process, which proved to be a spot-on candidate for Power Platform modernization. The existing production system was a dusty Windows Forms application created at lightning speed by AIS to help our client avoid renewing a costly COTS product. Our hackathon team, Power Rippers, consisted of a Power Platform developer and three .NET developers with no Power Platform experience. Two of the team members had domain experience specific to the client we focused on. We had an excellent experience working intensely on the effort as a mini-project. We leveraged Microsoft Teams for all collaboration, including recorded meet-ups, shared OneNote notebooks, and a OneDrive library app linked to the chat space.

Power Platform and its Wide Range

We learned, utilized, and integrated a wide range of Power Platform components like Model-Driven App, Dataverse, Dataflow, Power BI, paginated reports, data alerts, Power Automate, and OneDrive. Using these technologies, we modernized the existing business processes. In addition, we added a few Power Platform-backed enhancements to demonstrate how the platform can empower business users further.

Component Level Interaction

Gains

We had our share of hiccups in the project, which proved to be a significant part of the learning experience. Our team secured 2nd place, which came with a monetary reward!

From a business standpoint, it did not stop there. We turned our application into a proof of concept and presented it to the application leadership as a possible solution to replace the existing application, with benefits such as the following:

  • An application that is easier to maintain
  • More functionality than the original application
  • Decreased level of effort and costs for continued enhancements
  • The ability for the client to configure many aspects of the application through model-driven application
  • Moving the application to a platform where the functionality could be maintained, and enhancements could be done by the client themselves with limited training.

From a personal standpoint, it ticked a few checkboxes for my growth, like becoming familiar with Power Apps and Power BI at an intermediate level, up from less than novice before this. It also allowed me to present my first Lightning Talk, a short presentation to pass on what I learned to others within my company.

The Transformation Saga

This is an excellent transformation story: a third-party product rewrite turned into a Power Platform opportunity that became reality thanks to the hackathon.

The Transformation Story of Power Platform

Why AIS?

This effort is a testament to the technical learning appetite of AIS folks and to how we as a company strive to make things beneficial and better for our clients. It also demonstrated how internal activities like hackathons, level-up sessions, and lightning talks help participants achieve personal goals and work together to share their skills and knowledge.

Microsoft’s RPA (Robotic Process Automation) solution will transform organizations, helping teams realize cost savings and increased productivity. And the beauty of Microsoft’s RPA capabilities is that they build atop an organization’s existing Azure infrastructure, tapping into native Microsoft cloud services – like Azure Active Directory, On-Demand Compute, Native Network Security Tools, and RBAC Access Control – rather than building these capabilities from scratch on another solution.

While automation has been around for decades, the Microsoft cloud platform provides seamless, integrated process automation services. Microsoft automation capabilities can extend across all departments of large enterprise organizations, optimizing processes and significantly trimming costs.

Many years of Azure experience and award-winning Power Platform capabilities provided the AIS team the opportunity to collaborate with Microsoft on a whitepaper for enterprise deployment of RPA. We’re grateful for the opportunity to help create opinionated guidance and resources for organizations looking to enable and govern the rollout of Power Automate for RPA and other hyperautomation scenarios. Many talented SMEs across Microsoft and AIS partnered to deliver valuable guidance for you to accelerate your path to automation. In this blog, our team shares six key learnings and takeaways from the project.

What We Learned Helping Write the RPA Whitepaper

It’s hard to beat the opportunity to work with a product team. We’ve been a dedicated Microsoft partner for nearly as long as we’ve been a company. Throughout our partnership, we’ve been lucky enough to work closely with product engineering across various teams. It’s always enlightening and inspiring and gives both of our teams an opportunity to apply product vision to customer priorities.

Opinionated first, genericized later. New technology is released at a rapid clip. We know there’s a lot of noise. You need to get things done efficiently and the last thing you need is a case of analysis paralysis. Opinionated guidance will help you do that, giving you a leg up and a head start on the best approach. We’ve enabled cloud capabilities at some of the largest and most complex enterprises for over 13 years. We’ve infused those learnings and hardened processes into opinionated guidance by identifying what’s most effective. From identity and network operations to migration and modernization approaches, we arm delivery teams with constantly improving best practices and resources. Many organizations approaching RPA won’t be starting from ground zero in the cloud; the Microsoft RPA solution, with some help from other cloud resources (like our CDF), will allow you to leverage and build on your cloud investments.

Successful enterprise programs require insights from real-world, practical experience. The result of this whitepaper is an example of the powerful impact of talented product teams and veteran systems integrators coming together. We were able to bring learnings from building out enterprise-level Azure environments. Together with Microsoft, we shaped the vision for RPA technology into practical, hands-on resources for building and supporting hyperautomation. Insight from previous experience, enterprise technologies outside the Microsoft suite, and an understanding of customers’ business outcomes allowed us to inform the product and develop the resources to enable it. The combination of product team vision and insight and hands-on, experienced practitioners is a winning formula for developing valuable customer guidance.

Checklists are always helpful. Checklists drove the format of the whitepaper to help us make the content as actionable as possible. The goal of this whitepaper was to set forth a set of optimal milestones and share the thinking to help teams make progress faster. This approach was driven by Apostolis Papaioannou, Principal Program Manager at Microsoft. His vision was to create content that was consumable for such a vast topic, building on the foundation of the Holistic Enterprise Automation Techniques (HEAT). What you will find with this whitepaper is a thorough overview and actionable steps to get a workable environment up and going quickly. There’s a wealth of additional material and documentation available today with more coming soon.

Cross-cutting teams are the means to succeed. We brought in AIS Power Platform, Azure, M365, and other architects with varying skillsets to support the whitepaper creation, in addition to the Microsoft contributors, and the range of experience and perspectives made a big impact on the outcomes and guidance. This experience was yet another example of the success and trend towards cross-functional and cross-cloud collaboration, a concept and model the AIS team has adopted with much success.

Write a lot. Trim a lot more. Repeat. This was the flow of operations. We would quickly get up to 100 pages, then trim back a lot. This helped us get all the ideas and perspectives out there, before evaluating to focus on the right message and resources. In some cases, guidance can boil the ocean. Our goal was to vet the options and provide an opinionated best path forward to support organizations in focusing on the right things.

Get Started: Administering a Low-Code Intelligent Automation Platform for Your Organization

Are you considering RPA solutions for your organization? Check out the whitepaper, Enterprise Deployment for RPA and more in Power Automate, as well as other resources below. Please share a link with your partners and colleagues and let us know what you think. If you have any questions, please reach out to the AIS team.

Access the whitepaper: https://aka.ms/autocoeadminwhitepaper

Helpful Microsoft RPA Resources and Guidance Links

• Automation CoE Blueprint https://aka.ms/autocoeblueprint
• Automation CoE Strategy https://aka.ms/autocoestrategy
• HEAT https://aka.ms/rpapnp
• HEAT video series https://aka.ms/rpapnpvideo
• Whitepaper overview blog: https://powerautomate.microsoft.com/en-us/blog/administer-and-govern-a-low-code-intelligent-automation-platform-whitepaper-enterprise-deployment-for-rpa-and-more-in-power-automate/
• Automation Admin & Governance Whitepaper https://aka.ms/autocoeadminwhitepaper
• Manage Power Automate for Desktop on Windows https://aka.ms/padonwindowspnp
• Hyperautomation SAP Playbook (https://aka.ms/MicrosoftRPAPlaybookForSAPGUI) & video series (https://aka.ms/AutomateItSAPSeries)
• Automate It video series: https://aka.ms/AutomateIt
• RPA in a Day training: https://aka.ms/RPAinaDayPackage

Acknowledgments

Thank you to Apostolis Papaioannou, Kent Weare, Pranav Rastogi, Anitha Natarajan, Jonathan Eckman, Lav Gupta, Brent Wodicka, Vishwas Lele, Gautier Chastan, Kathy Osborne, Rakesh Krishnan, Amit Bhambri, Ashvini Sharma, and Jonathan Kendall for the partnership on this whitepaper.