The first AWS service (S3) was launched in September 2006. My humble journey in the cloud started in 2007, leading up to my first cloud talk in New York in 2008. In this blog post, I want to talk about what I, and we collectively, have learned from the last decade in the cloud and how to use this knowledge to drive cloud value. Let us get started.

By now, the transformational benefits of the cloud are well established. Look no further than spending on the top three cloud providers, which was nearly $150 billion in 2021. However, current cloud spending is only a fraction of the global IT spend of $2-3 trillion. This suggests that the main chapters of the cloud story are yet to be written. As is the case with the progression of any successful technology, the cloud is expected to become so pervasive that it will become invisible – just as traditional IaaS is becoming largely invisible with the advent of Kubernetes and serverless models.

Future discussions about the cloud will be less about cloud capabilities, and more about the outcomes empowered by cloud services. At some point, we can expect to stop talking about enterprise landing zones, VNETs/VPCs, and PaaS v-next, i.e., containers and serverless. I also hope that we will stop measuring our progress in the cloud using “proxy” metrics like the number of VMs migrated, the number of cloud deployments, the number of workloads moved to the cloud, cloud policy violations, and more. Later in this blog, we will talk about the cloud transformation metrics that matter most at the end of the day.

But first, let us get back to the theme of this blog post – driving cloud value. This depends on a few things:

  1. Mature cloud foundation
  2. Cloud value stream

Cloud Value Diagram: Cloud Foundation and Cloud Value Stream Venn Diagram

Mature Cloud Foundation

With cloud providers releasing close to three thousand features each year, cloud entropy is inevitable. Cloud consumers, including big enterprises, don’t have the resources to take advantage of all the new features announced daily. As a result, cloud consumers need to focus on improving their foundational capabilities. Let us look at a few (non-exhaustive) foundational capabilities.

  1. Consistency – Pick a few (3-5) architectural blueprints that will support 80% of the application types. Ensure that users can provision these architectural blueprints consistently, quickly, and in a self-service manner using a service catalog, like Zeuscale. The service catalog, in turn, is powered by a collection of robust automation scripts that address cross-cutting objectives. Here are a few attributes to consider:
    • Resilience – spreading the subsystems across availability zones
    • Observability – automatically enabling monitoring machinery
    • Security – a zero-trust approach that assumes breach
    It is crucial that you treat the above-mentioned automation assets as first-class code artifacts and invest in their continual improvement.

  2. Cost Optimization – Too many organizations are witnessing unplanned increases in their cloud bills. Worse, it has been suggested that 30% of all cloud consumption can be attributed to waste. Cloud cost maturity comes with cloud maturity. Ensure that you have enough governance in place, including resource tags, provisioning policies, reservations for long-term resource use, monitoring and alerts, and detailed usage reporting at a level of granularity that matches your organizational hierarchy. Doing so will allow you to catch unplanned cloud expenditure and improve your ability to predict costs over time.

  3. Security – Resources provisioned in the cloud are dynamic, and so is security monitoring. Security monitoring does *not* end with provisioning a compliant architectural blueprint. In fact, it starts with the provisioning step. You will need automation jobs that continuously monitor your applications’ security configurations and security policies to prevent “drift” from a compliant state.

  4. Data Gravity – In 2013, Dave McCrory proposed the software concept of “data gravity,” where applications, services, and business logic move physically closer to where data is stored. As you can imagine, data gravity is a critical consideration for companies with a multi-cloud setup, i.e., it is hard to spread applications across clouds because of data gravity. (Almost 80% of Fortune 500 companies find themselves in a multi-cloud setup, provisioning similar technology stacks across more than one cloud provider.) One way to dent the data gravity challenge is to have a data-sharing strategy in place. Data sharing can be based on copy or in-place access of datasets and can span single or multiple clouds.

  5. Center of Excellence – We talked about settling on a small set of architecture blueprints. You will need to invest in a forward-looking CoE group that continues to track the advances in the cloud and ensures that your organization is not caught flat-footed in the face of a disruptive new cloud capability or an architecturally significant improvement to an existing service. Without a CoE team focused on tracking and evaluating new capabilities in the cloud, you are likely to accrue cloud debt rapidly.

  6. Inclusiveness – Cloud is not just for professional developers and infrastructure engineers. An inclusive cloud strategy needs to support the needs of a growing community of citizen developers as well. Constructs like self-service provisioning and architectural blueprints that we discussed earlier need to be accessible to citizen developers. For example, it should be seamless for citizen developers to mix and match low/no-code constructs with advanced cloud platform constructs.

  7. Data Analytics – As you plan to migrate or reimagine your applications in the cloud, recognize the immense potential of the data being collected. By planning for data ingestion and data transformation upfront, you can help bridge the divide between operational and analytics data. Architectures like the data mesh, which treat data (operational and analytics) as a product, are headed in this direction.

  8. Cloud Operating Model – Your traditional infrastructure, networking, and security teams must embrace a cloud operating model. They must rely on modern development practices: iterative development, DevSecOps, and maintaining infrastructure, network, and security as code. You cannot succeed in the cloud with a traditional IT operating mindset.

  9. Continuous Learning – Your organization may have become fluent in the cloud basics, but you will need continuous learning and upskilling programs to reach the next level. Only an organization that embeds a culture of learning can truly achieve its cloud transformation goals.

  10. Sandbox – Along with upskilling programs, cloud teams need the freedom to experiment and fail. This is where a cloud sandbox unencumbered by enterprise cloud security policies is essential for innovation. It should be possible for teams to experiment with any new, fast-arriving preview capabilities within hours (not weeks or months).

Focus on Cloud Value Stream

Working on the cloud foundation alone will not be enough to leverage all the benefits cloud has to offer. You will need to consider the entire cloud value stream – a set of actions from start to finish that bring the value of the cloud to an organization. Cloud value streams allow businesses to specify the value proposition that they want to realize from the cloud.

Align cloud strategy with business objectives

The key idea is to start with a business strategy that can help realize the value proposition, then map that strategy into a list of cloud services. The list of cloud services, in turn, determines your cloud adoption plan. Let us break this down.

One of my favorite tools to develop a business strategy is a Wardley map. Wikipedia describes a Wardley map as a set of components positioned within a value chain (vertical axis) and anchored by the user need, with movement defined by an evolution axis (horizontal axis). Don’t worry if you are feeling a bit lost with the definition. A simple example can help. Let us assume that the business leaders of a fictitious financial services company want to set up an insurance exchange in the future.

Starting from the perspective of the user of the insurance exchange, you can create a Wardley map, as shown in the diagram below.

Cloud Transformation Wardley Map

Mapped along the value chain (vertical axis) are the capabilities of the insurance exchange. These capabilities are positioned along an evolution axis (horizontal axis) that represents the evolution of components from genesis (high value) to utility (commodity).

A map like this allows you to organize your cloud investments. For example, the Matching Algorithm that pairs incoming purchase requests with the insurance providers may need to be a custom-built capability. A custom-built capability requires additional investment, but it also offers a differentiator and potentially higher profit. In the future, the previously mentioned matching capability may become available as a pre-built ML product or rental capability through evolution. So, there is indeed a risk of commoditization. But the question is – how soon can that happen? Wardley maps excel at bringing discussions from various teams together into a single map.

The End-to-End Flow of Business Value

Earlier in this post, we talked about “proxy” metrics such as the number of VMs or workloads migrated to the cloud. While these metrics are helpful as IT and agile metrics, they fail to communicate the overall progress of a cloud transformation effort from the perspective of business outcomes. This is where Flow Framework®, introduced by Dr. Mik Kersten, comes in.

The core premise of the Flow Framework is the need to measure the end-to-end flow of business value and the results it produces. Flow Metrics measure the flow of business value through all the activities involved in a software value stream. For example, consider the following chart depicting the Flow Efficiency metric. Flow Efficiency is the ratio of active time to total Flow Time.

A few things to note in the diagram below:

  • We are measuring the end-to-end migration time of the application.
  • Additionally, we are considering the entire capability area and not an individual app.
  • The process of containerizing the app seems to be quick, but we are spending significant time on security scanning and Authority to Operate (ATO) certification – not surprising for a highly regulated environment with very stringent security expectations.

Perhaps we need to make an upfront down payment on “technical debt” associated with security scanning and certification. Doing so would improve the flow efficiency of cloud migration.

Flow Chart for Driving Cloud Value Streams
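
As a quick numeric illustration of the Flow Efficiency ratio described above (the figures are hypothetical and not taken from the chart), the calculation looks like this:

using System;

class FlowEfficiencyExample
{
    static void Main()
    {
        TimeSpan activeTime  = TimeSpan.FromDays(12); // hands-on work, e.g., containerizing and configuring the apps
        TimeSpan waitingTime = TimeSpan.FromDays(28); // waiting, e.g., on security scanning and ATO certification
        TimeSpan flowTime    = activeTime + waitingTime; // total elapsed Flow Time

        double flowEfficiency = activeTime.TotalDays / flowTime.TotalDays;
        Console.WriteLine($"Flow Efficiency: {flowEfficiency:P0}"); // roughly 30%
    }
}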

In summary, to drive cloud value, you need a robust cloud foundation, as well as a keen eye towards the overall cloud value stream. Focusing on a few well-defined architectural blueprints will give you the opportunity to mature in the areas of cloud costs, automation, and readiness. Focusing on the overall cloud value stream will ensure that your cloud investments are aligned with your strategic business goals.

Contact AIS to talk cloud strategy and business objectives.

When migrating existing business services to Azure PaaS as part of an App Modernization project, you may find yourself seriously considering serverless computing using Azure Functions, especially if your target architecture includes microservices.

Azure Functions let you focus on what counts — your requirements, your time, and your code — and worry less about boilerplate code, infrastructure, and processes.

When creating new APIs in any technology, one thing is very important: Documenting those APIs so that others can use them. This is especially important in large enterprises or situations where you are exposing these APIs to the public.

This blog series guides you through creating a C# Function App, creating self-documenting APIs, ensuring the quality of that generated documentation, and separating documentation based on the audience.

The blog series assumes the following:

  • You are familiar with C#.
  • You have knowledge of software development fundamentals.
  • You are comfortable with command-line interfaces.

At AIS, we’ve determined that one of the best approaches to documenting your APIs is to use OpenAPI (formerly Swagger) to have the APIs (nearly) document themselves. This saves time in the long run and even enables API clients to automatically generate client code to interact with your APIs. It also helps with shelf life: if, 6 months or a year down the road, we decide a different approach is better, clients can simply regenerate their code against the updated definitions.

For these articles, I will walk you through the steps for creating well-documented Azure Functions for our fictitious shopping site called “Bmazon” and its modernization effort.

Creating the App

To create the app, we will start with the Azure Functions Core Tools. At the time of this writing, the current version of this library is 3.0.3477.

NOTE: This version uses dotnet CLI version 3.1 internally, so if the dotnet executable on your path is not that version, it could cause issues. If you run into errors, this may be fixed by adding a global.json file in the current directory with the following content, which tells the dotnet CLI to use whatever 3.1.x version you have installed.

{
  "sdk": {
    "version": "3.1.0",
    "rollForward": "latestMinor"
  }
}

At the PowerShell prompt, we’ll run the following to create our project:

C:\dev> func --version
3.0.3477
C:\dev> func init Bmazon --worker-runtime dotnet

Writing C:\dev\Bmazon\.vscode\extensions.json

This will create the shell of a project inside the C:\dev\Bmazon folder.

While creating the app, I’ve copied in an OrderService and the related DTOs from the existing application we’re modernizing, to be used by the new functions we are creating. You can see the completed code on GitHub. You’ll see a bit more of them in the next article.

Learn more about Azure Functions From Microsoft Docs

Add Functions

We’re going to add 3 different functions to our app.

Shopping API

The Shopping division needs to call HTTP APIs to make an order to the warehouse, so we will add a CreateOrder function that performs this action.

(This can be done interactively by running func new and following prompts, but using the command line parameters is more concise.)

C:\dev\Bmazon> func new --template HttpTrigger --name CreateOrder `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: CreateOrder

The function "CreateOrder" was created successfully from the 
"HTTPTrigger" template.

Strangely, it outputs a prompt to select the template even when you have passed in the selection as a parameter. You can ignore this.

Warehouse API

Later in our process, the Warehouse team needs to call an HTTP endpoint to send tracking information back to the Shopping division.

We will follow the pattern above and create an API for them to call.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShipped `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShipped

The function "OrderShipped" was created successfully from the 
"HTTPTrigger" template.

Shared APIs

Since both the Shopping and Warehouse divisions will need to check on the status of an order at various times, there will be a shared function to check status.

C:\dev\Bmazon> func new --template HTTPTrigger --name OrderShippingStatus `
    --authlevel Anonymous
Use the up/down arrow keys to select a template:Function name: OrderShippingStatus

The function "OrderShippingStatus" was created successfully from the 
"HTTPTrigger" template.

Code Cleanup

We’ll do a bit of code cleanup before moving on.

Choose GET or POST

If you look at the code, you’ll notice that, by default, the Functions were created supporting both GET and POST.

public async Task<IActionResult> Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
   HttpRequest req
...

We can fix that by editing each function and removing either "get" or "post" as appropriate. (Typically, the first two operations will be POSTs and the last one a GET, as shown below.)
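
For example, after removing "get", the CreateOrder trigger binding would look like the following (OrderShipped gets the same treatment, while OrderShippingStatus keeps only "get"):

public async Task<IActionResult> Run(
   [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]
   HttpRequest req
...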

Organizing the Code

The func calls above will create all the Function files in the top folder. We’ll move ours into a Functions folder to keep things cleaner. They all just happened to start with “O”, so we can be terse.

C:\dev\Bmazon> mkdir Functions; mv O*.cs Functions\

Add OpenAPI Document Generation

In order to add OpenAPI to Azure Functions, I chose to use the Swashbuckle library. There are a few other libraries out there that work with .NET and OpenAPI, but I chose Swashbuckle because I’m familiar with it.

Installing the Package

The core Swashbuckle project doesn’t support Azure Functions directly, so I used the AzureExtensions.Swashbuckle package, a nice extension written by Vitaly Bibikov.

To install it:

C:\dev\Bmazon> dotnet add package AzureExtensions.Swashbuckle

  Determining projects to restore...
  Writing C:\Users\XXX\AppData\Local\Temp\tmp69AA.tmp
info : Adding PackageReference for package 'AzureExtensions.Swashbuckle' into project 'C:\dev\Bmazon\Bmazon.csproj'.
info : Restoring packages for C:\dev\Bmazon\Bmazon.csproj...
...
...
info : Committing restore...
info : Generating MSBuild file C:\dev\Bmazon\obj\Bmazon.csproj.nuget.g.props.
info : Writing assets file to disk. Path: C:\dev\Bmazon\obj\project.assets.json
log  : Restored C:\dev\Bmazon\Bmazon.csproj (in 525 ms).

Setting up Swashbuckle

In order to configure Swashbuckle, your Functions App needs a Functions Startup class like the following, which we’ll put in Startup.cs in the Bmazon folder.

using System.Reflection;
using AzureFunctions.Extensions.Swashbuckle;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(Bmazon.Startup))]
namespace Bmazon
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      builder.AddSwashBuckle(Assembly.GetExecutingAssembly());
    }
  }
}

Exposing OpenAPI Endpoints

Your code will also need to expose the OpenAPI JSON and UI endpoints as HTTP-triggered Azure Functions so that client code can load them on demand.

(Adding them in a single OpenApi\OpenApiFunctions.cs file for now)

using System.Net.Http;
using System.Threading.Tasks;
using AzureFunctions.Extensions.Swashbuckle;
using AzureFunctions.Extensions.Swashbuckle.Attribute;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

namespace Bmazon.OpenApi
{
  public static class OpenApiFunctions
  {
    [SwaggerIgnore]
    [FunctionName("OpenApiJson")]
    public static Task<HttpResponseMessage> RunJson(
        [HttpTrigger(
            AuthorizationLevel.Anonymous, 
            "get", Route = "openapi/json")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerJsonDocumentResponse(req));
    }

    [SwaggerIgnore]
    [FunctionName("OpenApiUI")]
    public static Task<HttpResponseMessage> RunUi(
        [HttpTrigger(
            AuthorizationLevel.Anonymous, 
            "get", 
            Route = "openapi/ui")]
        HttpRequestMessage req,
        [SwashBuckleClient] ISwashBuckleClient swashbuckleClient)
    {
      // CreateOpenApiUIResponse generates the HTML page from the JSON results
      return Task.FromResult(
            swashbuckleClient.CreateSwaggerUIResponse(req, "openapi/json"));
    }
  }
}

This sets up 2 new Functions on the openapi/json and openapi/ui URLs to load the JSON file and Swagger UI respectively. The [SwaggerIgnore] attribute causes Swashbuckle to ignore these API methods for document generation purposes.

Generate and View the API Documentation

NOTE: You must have the Azure Storage Emulator or Azurite RUNNING locally in order for this to work properly.

C:\dev\Bmazon> func start
Microsoft (R) Build Engine version 16.8.3+39993bd9d for .NET
Copyright (C) Microsoft Corporation. All rights reserved.

  Determining projects to restore...
  Restored C:\dev\Bmazon\Bmazon.csproj (in 840 ms).
  Bmazon -> C:\dev\Bmazon\bin\output\bin\Bmazon.dll

Build succeeded.

Time Elapsed 00:00:05.60

Azure Functions Core Tools
Core Tools Version:       3.0.3284 Commit hash: 98bc25e668274edd175a1647fe5a9bc4ffb6887d
Function Runtime Version: 3.0.15371.0

[2021-02-27T15:05:33.871Z] Found C:\dev\Bmazon\Bmazon.csproj. Using for user secrets file configuration.

Functions:

  CreateOrder: [POST] http://localhost:7071/api/order
  OpenApiJson: [GET] http://localhost:7071/api/openapi/json
  OpenApiUi: [GET] http://localhost:7071/api/openapi/ui
  OrderShipped: [POST] http://localhost:7071/api/order/shipment
  OrderShippingStatus: [GET] http://localhost:7071/api/order/shipment/{id}

For detailed output, run func with --verbose flag.
[2021-02-27T15:05:41.693Z] Host lock lease acquired by instance ID '000000000000000000000000016514FF'.

If you don’t see that last line after a few seconds, you probably don’t have the storage emulator running.

Take note of the list of functions shown with the URLs next to them, especially the ones starting with “OpenApi”.

If you visit the OpenApiUI URL listed above, you will see the following in your browser:

Rendered Swagger UI displaying the 3 created Operations

That’s it! You now have a modernized serverless architecture with APIs that are documenting themselves!

If you add any new Functions, they will automatically show up here as well. Your clients can download from the JSON endpoint and import the definitions into Postman or client generators.
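
For instance, a client could pull down the generated document with a few lines of C# (using the local URL from the func start output above; swap in your deployed hostname as appropriate):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FetchOpenApiDocument
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Local URL taken from the "func start" output above; replace with your deployed host.
        string json = await http.GetStringAsync("http://localhost:7071/api/openapi/json");
        Console.WriteLine(json.Substring(0, Math.Min(200, json.Length))); // peek at the first part of the document
    }
}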

Get the completed code from GitHub

Next Steps

Now that you have self-documenting APIs, you may notice that the information in the Swagger UI is rather underwhelming. Due to the nature of Azure Functions, there is very little information that Swagger can glean from the runtime type information it can gather.

In part two of the series, I will show you how to make the documentation MUCH better.


In May, AIS held an internal hackathon for Microsoft Power Platform to expose our team to the platform’s concepts and approaches through hands-on experience and to demonstrate the role Power Platform plays in modernizing legacy applications in the cloud.

The integration team focused on leveraging existing Power Platform connectors to add new functionality to the system. For example, the system sends the order confirmation email to the user using the Office Outlook connector in the Power Automate Flow. Similarly, it sends text messages to users through the Twilio Connector. The team also leveraged SQL Server Connector for data sync so that the legacy reporting systems remained unaffected.

The model-driven app and Power Apps portal needed to communicate with external systems to achieve the complete end-to-end solution for the E-shop web application. This includes an Outlook Exchange server to send emails, a third-party SMS provider for sending SMS messages, and a custom Web API to communicate with a legacy SQL database.

Introduction

Power Platform connectors leverage a pre-built set of actions and triggers to integrate model-driven apps or the Power Apps portal with external systems. Connectors are used with Microsoft Power Automate, Power Apps, and Azure Logic Apps. With these pre-built connectors, we can achieve complex integrations without building a Web API or setting up an SMTP server. True to the tagline ‘Work Less, Do More’, connectors reduce work that might otherwise take days or months to a few hours.

Architecture and Flowchart Design

PowerApps Flowchart

Technical Approach

Several pre-built connectors were identified and used for the use case scenarios listed below.

  1. On order confirmation, send an email to the user.
    Approach: Utilize the out-of-the-box Office 365 Outlook connector in Power Automate to connect to Microsoft Exchange for authentication with the provided user connection and send email to the registered user on the customer portal.
  2. On order confirmation, send an SMS message to the user.
    Approach: Register a new Twilio account to initialize the Twilio connector in Power Automate and send the SMS message to the user’s mobile number captured in the Microsoft Dataverse database.
  3. On update of order, update the legacy SQL database.
    Approach: Create a new Power Automate flow with the SQL Server connector to communicate with the target SQL Server and make updates as required.

Lessons Learned and Next Steps

  1. Using connectors, we can quickly build secure apps within a few minutes to a few hours.
  2. These connectors are easily customizable, and end users can easily change or create the content for email or SMS templates, making it easy to innovate and improve the business.
  3. We implemented a true ‘no code or low code’ application.

Thank you to the Integration Team for sharing their experience

  • Kranthi Kiran (team lead)
  • Varalika Bishnoi
  • Sravan Kumar
  • Pavan Bandi


The DevOps team automated the portal deployment process using Power DevOps Tools and deployed the solution across three environments (dev, test, prod). Since Microsoft Power Platform does not support source control and versioning, the team used Azure DevOps as the solution repository and version control.

Introduction

When building apps on Microsoft Power Platform, use Microsoft Power Platform Build Tools to automate common build and deployment tasks. These tasks include:

  • Synchronization of solution metadata (also known as solutions) that contains the various platform components like model-driven apps, connectors, and virtual agents.
  • Generating build artifacts
  • Deploying to downstream environments
  • Provisioning or de-provisioning environments
  • Checking solutions to identify problematic patterns

Our Purpose

We can easily create compelling apps and automation flows using Microsoft Power Platform. But regardless of the platform technology, how do you package and deploy the solution between various environments? Usually there are at least three: one for development, one for testing, and one for production. Power Platform is no exception: you should have separate development (dev), test, and production (prod) environments for your solution. So, let’s look at how to automate the deployment of Power Platform solutions from one environment to another.

Architecture Flowchart Diagram

DevOps Architecture Flowchart

Technical Approach

Problem:

The following picture shows the manual and repetitive steps required to deploy the Power Platform solution from one environment to another. Manual tasks are prone to errors, and actions are missed or incorrectly executed. This can lead to defects and inconsistency across environments.

Power Platform Technical Approach

Approach

Azure DevOps and Power Platform Build Tools help automate manual tasks and minimize issues to make the team more efficient. Once there is automation, releases become more consistent and predictable, and requirements move more quickly between environments. This gives the team more confidence plus the ability to release more frequently.

Automate Build and Deploy Solutions
Microsoft Power Platform Build Tools tasks are used along with any other available Azure DevOps tasks to compose build and release pipelines. Pipelines that teams commonly put in place include Initiate, Export from Dev, Build, and Release.

  • In our application, we used three environments (dev, test, prod). We exported the solution from dev and checked it into Git source control.
  • Packed the solution from source control and deployed it to test.
  • Exported the managed solution from test and imported it into prod.

Automate Portal Deployment
Portal development involves several configurations and customizations to achieve the desired experience for portal end-users. After the development and configuration of a portal instance are complete, it is deployed in other environments like test and prod. Creating a manual backup of your portal configuration and importing it to different environments is a very time-consuming process. The portal deployment process can be easily automated by installing Power DevOps Tools, an additional toolset available in the Azure Marketplace.

Lessons Learned & Next Steps

  • With the help of Power Platform Build Tools, we can easily deploy the solution into various environments.
  • We can source control the Power Platform environment.
  • We can increase the release frequency.

Thank you to the DevOps team for sharing their experience

  • Vikram Reddy (team lead)


The front-end team focused on building the Power Apps Portal for the end-users and a model-driven app for the administrators. The Portal allowed the users to browse through the product catalog, add an item to the cart, place an order, view their past orders, and manage their profile. The model-driven app allowed administrators to manage the product catalog just like the legacy application. The team used Portals Web API to fetch data from Dataverse and used Liquid templates for web pages.

The Front End team had a goal of migrating the legacy E-Shop web application to the Microsoft Power Platform to provide a website for customers to browse through products and place orders. We also required an application to manage backend data. The Power Platform provides app-building solutions with Power Apps. We developed two applications for our app modernization effort – a Power Apps portal and a model-driven app. Read more to learn how we did it.

Our Approach

We used a Power Apps portal to build an external-facing, responsive website for customers. This portal replaces the front-end of our legacy application. The selling point of the Power Apps portal is the capability to securely target an authenticated user and the flexibility of catering to anonymous users, all in the same product. In addition, the portal app integrates with Microsoft Dataverse, a feature-rich data storage solution.

For data management, we developed a model-driven application. A model-driven app follows a data-first approach and provides a customizable interface, including views, forms, charts, and dashboards, to manage data present in Microsoft Dataverse. This application is only shared with internal users, i.e., users who exist in our Azure Active Directory. The app allows the internal user to create, update, or delete products from the catalog; these catalogs and products are shown on the Power Apps portal.

Technical Approach

Power Apps Portal

The Power Apps portal allows users to browse through the product catalog, add an item to the cart, place an order, view past orders, and manage their user profile. A default domain is provided for the Power Apps portal, but a custom domain can also be configured.

The authentication method used is Azure AD B2C, Microsoft’s preferred method of authentication. An extension of the Azure Active Directory model, it enables external customers to sign in with either local credentials or through a choice of several common social identity providers. Each authenticated portal user is associated with a record in the Contact table in Dataverse.

A key concept of the Power Apps portal is building reusable web templates. These come in handy when multiple web pages need a standard layout. For this portal application, we created two templates – a header and a footer – that are used by all web pages. We used HTML and the Liquid templating language for the design. Power Apps portals can be further customized with CSS and JavaScript where needed.

The portal app consists of multiple web pages that provide different capabilities to users, and each page’s relationships to other web pages form the website’s hierarchy. Permissions can also be set up to control which pages each web role has access to. The web pages created for this portal app are:

  1. Home: Displays the product catalog, which allows users to filter products by brand or type and add a product of their choosing to the cart. This page is accessible to all users who visit the portal.
  2. My Cart: Displays the product(s) added to the cart, accessible only to authenticated users.
  3. My Orders: An authenticated user can see previous orders and further drill down to get the order details.
  4. Profile: Authenticated users can edit their profile information if needed.

When a user checks out and places an order, each product is associated with an order ID for tracking. Since all the data is stored in Dataverse, the portal’s web API interacts with this data. The portal Web API can be used to perform CRUD operations across all Microsoft Dataverse tables from portal pages.


For security, table permissions and web roles are implemented to ensure the privacy of the data from unauthorized users. Users are assigned web roles to determine their level of access. There are three out-of-the-box web roles available: Administrators, Authenticated Users, and Anonymous Users (unauthenticated users). Users are automatically assigned the Authenticated User web role once registered with the portal, making user roles easy to maintain.

Table permissions are enabled, which allows records to be shown based on user context. For example, users can only see their own order(s) if they are authenticated, and no other user’s order information is available. Table permissions are also used to show the products within an order, based on the product’s parent/child relationship to the order.

Model-Driven App

The model-driven app is used for catalog management and serves as the back end for our application. It contains a custom form where users can add, delete, or view details of products and catalogs. Products and catalogs with an Active status are displayed on the portal app. When a product is no longer available, it can be deactivated, which is a soft deletion, i.e., the product is no longer visible on the portal website but is still part of Dataverse.

The model-driven app also allows users to view relationships between data and tables, add personal views of data, and make additional customizations to the app based on the permissions of the user’s assigned security role.

Lessons Learned

We can use a Power Apps portal as a modern low code alternative to create websites and interact with data in Dataverse.
Model-driven apps provide a rich no-code design environment and can also be distributed as a solution.

Thank you to the Front End team for sharing their experience
  • Ritika Agarwal (team lead)
  • Devyanshi Tiwari
  • Pooranendu Patel

Next steps

PowerApps Portal supports many languages, so in the future, we can make this website available in multiple languages.

We will be diving deeper into each team, so stay tuned for more blog posts around our AIS Internal Hackathon!


Recommended Content

https://docs.microsoft.com/en-us/powerapps/maker/model-driven-apps/
https://docs.microsoft.com/en-us/powerapps/maker/portals/

In this article, I am going to explain the Distributed Resource Scheduler (DRS) in detail. First, we’ll cover what it is, how it works in the backend, and the options available in DRS settings. The goal of DRS is not to balance the load perfectly across every host. Rather, DRS monitors resource demand and ensures that every Virtual Machine (VM) gets the resources it is entitled to. When DRS determines that a better host exists for a VM, it recommends moving that VM.

The two primary functions of DRS are:

  1. Load balancing VMs when the cluster is imbalanced
  2. VM placement at power-on

Let’s take a closer look at how DRS achieves its goal of ensuring VMs are happy, with effective placement and efficient load balancing.

Effective VM Placement

One of the first steps in ensuring good VM performance is to ensure that the VM gets all the resources it needs as soon as it is powered on. DRS considers the demand of a VM so that it will never be short of resources when it is started. A VM’s demand is the amount of resources it needs to run, and the way DRS calculates this is described below.

DRS looks at the demand of every running VM in the cluster. VM demand is the amount of resources that the VM currently needs to run. For CPU, demand is calculated based on the amount of CPU the VM is currently consuming. For memory, demand is calculated based on the following formula.

VM memory demand = Function(Active memory used, Swapped, Shared) + 25% (idle consumed memory)
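
VMware does not spell out the exact combining function here, so the small sketch below simply treats it as a sum of the listed inputs; it is only meant to make the shape of the formula concrete, not to mirror DRS internals.

using System;

class VmMemoryDemand
{
    // Assumption: Function(active, swapped, shared) is modeled as a simple sum.
    static double EstimateDemandMb(double activeMb, double swappedMb, double sharedMb, double idleConsumedMb)
        => (activeMb + swappedMb + sharedMb) + 0.25 * idleConsumedMb; // plus 25% of idle consumed memory

    static void Main()
    {
        // Hypothetical VM: 2048 MB active, 0 MB swapped, 256 MB shared, 1024 MB idle consumed
        Console.WriteLine($"Estimated memory demand: {EstimateDemandMb(2048, 0, 256, 1024):F0} MB"); // 2560 MB
    }
}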

Efficient Load Balancing

DRS uses a cluster-level balance metric to make load-balancing decisions. This balance metric is calculated from the standard deviation of resource utilization data from hosts in the cluster. DRS runs its algorithm once every 5 minutes (by default) to study imbalance in the cluster. If a round finds that the load needs balancing, DRS uses VMotion to migrate running VMs from one ESXi host to another.
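
To make the balance metric concrete, here is a minimal sketch with hypothetical per-host utilization numbers; it ignores the CPU/memory weighting and entitlement normalization the real algorithm performs and only shows that a lower standard deviation means a more evenly balanced cluster.

using System;
using System.Linq;

class ClusterBalanceSketch
{
    static void Main()
    {
        // Hypothetical CPU utilization (0.0 - 1.0) for each host in a 4-host cluster.
        double[] hostUtilization = { 0.35, 0.40, 0.78, 0.30 };

        double mean = hostUtilization.Average();
        double stdDev = Math.Sqrt(hostUtilization.Sum(u => Math.Pow(u - mean, 2)) / hostUtilization.Length);

        // DRS compares a metric like this against the tolerance implied by its aggression level.
        Console.WriteLine($"Cluster imbalance (standard deviation): {stdDev:F3}");
    }
}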

Detecting VM Demand Changes

During each round, along with resource usage data, DRS collects resource availability data from every VM and host in the cluster. Data like VM CPU average and VM CPU max over the last collection interval depict the resource usage trend for a given VM. DRS then correlates the resource usage data with the availability data and runs its load-balancing algorithm before taking the necessary VMotion actions to keep the cluster balanced and ensure that VMs are always getting the resources they need to run.

Cost-Benefit Analysis

VMotion of live VMs comes with a performance cost, which depends on the size of the VM being migrated. If the VM is large, it will use many of the current host’s and target host’s CPU and memory for VMotion. The benefit, however, is in terms of performance for VMs on the source host, the migrated VM on the destination host, and improved load balance across the cluster. Therefore, the DRS algorithm constantly evaluates the cost and benefit of each load balancing VMotion move.

Factors that Affect DRS Behavior

DRS Automation Levels

During initial placement and load balancing, DRS generates placement and VMotion recommendations, respectively. DRS can apply these recommendations automatically, or you can apply them manually. DRS has three levels of automation:

  • Fully Automated – DRS applies both initial placements and load balancing recommendations automatically.
  • Partially Automated – DRS applies recommendations only for initial placement.
  • Manual – You must apply both initial placement and load balancing recommendations.

DRS Automation Levels

DRS Aggression Levels (Migration Threshold)

Migration Threshold DRS

The DRS aggression level controls the amount of imbalance that is tolerated in the cluster. DRS has five aggression levels ranging from 1 (most conservative) to 5 (most aggressive). The more aggressive the level, the less imbalance DRS tolerates in the cluster. Conversely, the more conservative the level, the more imbalance DRS accepts. As a result, you might see DRS initiate more migrations and generate a more even load distribution when increasing the aggression level. By default, the DRS aggression level is set to 3.

When the DRS aggression is set to level 1, DRS will not load balance the VMs. Instead, DRS will only apply move recommendations that must be taken either to satisfy hard constraints, such as affinity or anti-affinity rules or to evacuate VMs from a host entering maintenance or standby mode.

VM Overrides

DRS automation levels and migration threshold are normally applied at the cluster level. In some cases, you might require DRS to treat some VMs especially. For example, you might decide DRS should not consider a specific VM when generating its recommendations, or you might decide DRS should not migrate that VM at all. You can set VM overrides under Cluster -> Manage -> Settings -> VM Overrides. Here you can set the automation or migration threshold for a VM to a value different than that at the cluster level or even disable them.

Configure VM Override

VM/Host Rules

Rules help define special conditions on VMs and hosts in a DRS cluster. Once a rule is set, DRS has to honor it and make recommendations in accordance with the rule, along with its placement and load-balancing logic.

Create VM Host Rule

There are different types of rules that can be set:

  1. Keep Virtual Machines Together (VM-VM)—This rule ensures that the VMs specified in the rule are always running on the same host.
  2. Separate Virtual Machines (VM-VM)—This rule will keep the VMs always running on different hosts.
  3. Virtual Machines to Hosts (VM-Host)—This type of rule is set on groups of one or more VMs and one or more hosts.
  A host group or a VM group for these rules can be created in the web client under Cluster -> Manage -> Settings -> VM/Host Groups.

In VM-Host rules, there are sub-rules of type should and must. With these sub-rules, you can specify whether a VM group should/must, or should not/must not, run on a host group. Sub-rules of type must (mandatory) will always be honored by DRS under all circumstances. However, sub-rules of type should (preferential) are dropped if DRS determines that the imbalance in the cluster is very high.

Reservation, Limit, and Shares

DRS provides many tools for you to customize your VMs and workloads according to specific use cases. Reservation, limit, and shares are three such tools borrowed from ESXi’s resource management paradigm.

Customize VM

Reservation

You might need to guarantee compute resources to some critical VMs in your clusters. This is often the case when running applications that cannot tolerate any resource shortage or when running an application that is always expected to be up and serving requests from other parts of the infrastructure. With the help of reservations, you can guarantee a specified amount of CPU or memory to your critical VMs. Reservations can be made for an individual VM or at the resource pool level. For example, in a resource pool with several VMs, a reservation guarantees resources collectively for all the VMs in the pool.

Limit

In some cases, you might want to limit the resource usage of some VMs in their cluster to prevent them from consuming resources from other VMs in the cluster. This can be useful, for example, when you want to ensure that when the load spikes in a non-critical VM, it does not end up consuming all the resources and thereby starving other critical VMs in the cluster.

Shares

Shares provide a way to prioritize resources for VMs when there is competition in the cluster. They can be set at a VM or a resource pool level. By default, a cluster has a resource pool hierarchy, with the root resource pool (the cluster itself) at the top and all VMs as its children. Shares are defined as numbers for all the sibling VMs under this root resource pool. Shares are distributed equally, by default, on a per-resource basis (per-vCPU and per-unit of memory). This means that, by default, a VM with more configured resources will get more shares than a VM with fewer resources. Resources available at the root resource pool are shared among the children during resource contention based on their share values.

DRS provides four types of shares for VMs and resource pools – Low, Normal, High, and Custom – to change their priority compared to their siblings. Normal shares are typically 2x Low, and High shares are typically 2x Normal. Custom can be used to set specific share values. When setting custom shares at a VM level, you need to account for all the vCPUs and memory of that VM since shares are assigned based on the amount of configured resources of a VM.
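
As a simplified illustration of proportional allocation under contention, the sketch below uses the 1:2:4 Low/Normal/High ratio described above with assumed per-vCPU share values (check your vSphere version for the actual defaults):

using System;
using System.Collections.Generic;
using System.Linq;

class ShareAllocationSketch
{
    static void Main()
    {
        // Three hypothetical 2-vCPU VMs; per-vCPU values of 500/1000/2000 are assumptions.
        var cpuShares = new Dictionary<string, int>
        {
            ["critical-vm (High)"] = 2 * 2000,
            ["normal-vm (Normal)"] = 2 * 1000,
            ["batch-vm (Low)"]     = 2 * 500,
        };

        double contendedMhz = 14000; // CPU capacity being contended for at the root resource pool (hypothetical)
        double totalShares = cpuShares.Values.Sum();

        foreach (var (vm, shares) in cpuShares)
        {
            // Under contention, each sibling is entitled to capacity in proportion to its shares.
            Console.WriteLine($"{vm}: {contendedMhz * shares / totalShares:F0} MHz");
        }
    }
}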

Disclaimer

Thanks for visiting this blog. Everything I shared here is based on my knowledge and experience. Your environment may differ from mine, so please be cautious before executing anything in a production environment.

Gain control with Azure Management Groups

As enterprises start to move to Azure, managing subscriptions becomes tedious as the number of subscriptions grows. In an organization, there are usually many employees and, in some cases, many applications. If all these employees are provided Azure subscriptions and start creating Azure resources at will, it may soon become difficult to control, manage, and track who creates what. Eventually, the costs may go out of control. However, organizing your subscriptions using Azure Management Groups can make the job much easier.

What is an Azure Management Group?

Management groups are logical groups for Azure subscriptions, allowing you to organize subscriptions and apply governance controls, such as Azure Policy and Role-Based Access Controls (RBAC), to the management groups. All subscriptions within a management group automatically inherit the controls applied to the management group.

What is an Azure Management Group?

When do you need to organize your subscriptions?

Consider organizing your subscriptions into management groups when the number of subscriptions, departments, applications, and users grows and you need to apply governance controls, such as policies and access, consistently across many subscriptions rather than managing each one individually.

Rules to consider when organizing

  • The following can be accomplished through Management Groups:
    • Group subscriptions according to your organizational model and apply controls once to all subscriptions in the group.
    • Create a flexible hierarchy that can be updated quickly and can easily scale up or down depending on the organization’s needs.
    • Use Azure Resource Manager to integrate with other Azure services like Policy, Cost Management, Blueprints, and Security Center.
  • A few questions to answer before you create a management group hierarchy:
    • What kinds of workloads are in each subscription?
    • Which environment do they belong to?
    • Which departments/teams own them?
    • What security rules are being applied to these workloads?

What is a Root Management Group?

Benefits of Azure Management Groups

  • Provides a scope higher than a subscription to manage access, policies, and compliance efficiently.
  • Aggregated views above the subscription level.
  • Inheritance allows for a single assignment of controls that apply to a grouping of subscriptions.
  • Create a hierarchy of Management Groups that fit your organization.
  • Management Groups can be scaled up/down as needed.
  • Management tree can support up to six levels.
  • Each management group and subscription can only support one parent.
  • Each management group can have multiple children.
  • All subscriptions and management groups fold up to one root management group within the directory.
  • A newly onboarded subscription is added to the Root Management Group by default.

Limitations of Azure Management Groups

  • A single directory can support up to 10,000 management groups.
  • The management group tree can support up to six levels of depth.
  • Each management group and subscription can only support one parent.
  • Each management group can have multiple children (see the sketch after this list).
  • The root management group can’t be moved or deleted, unlike other management groups.
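
To make the single-parent and six-level-depth rules concrete, here is a small illustrative sketch that models the hierarchy in plain C# (no Azure SDK involved; the names and the depth check are purely for illustration):

using System;
using System.Collections.Generic;

class ManagementGroupNode
{
    public string Name { get; }
    public ManagementGroupNode Parent { get; private set; }
    public List<ManagementGroupNode> Children { get; } = new List<ManagementGroupNode>();

    public ManagementGroupNode(string name) { Name = name; }

    public int Depth => Parent == null ? 0 : Parent.Depth + 1;

    public ManagementGroupNode AddChild(string name)
    {
        if (Depth + 1 > 6)
            throw new InvalidOperationException("The management group tree supports up to six levels of depth.");

        var child = new ManagementGroupNode(name) { Parent = this }; // exactly one parent per group
        Children.Add(child);                                         // a group can have multiple children
        return child;
    }
}

class Demo
{
    static void Main()
    {
        var root = new ManagementGroupNode("Tenant Root Group"); // new subscriptions land here by default
        var corp = root.AddChild("Corp");
        corp.AddChild("Dev");
        var prod = corp.AddChild("Prod");
        Console.WriteLine($"Levels under root for 'Prod': {prod.Depth}"); // 2
    }
}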

Role-Based Access (RBAC) in Management Groups

  • You can apply RBAC roles at the Management Group scope to manage access from one central place instead of configuring every subscription individually.
  • Azure roles can be applied to Management Groups, then inherited to all management groups and their subscriptions.
  • Classic admin roles can’t be applied at the Management Group level and need to be applied at the subscription level.
  • It is not possible to define a custom RBAC role at the Management Group scope.

Azure Account Roles

Azure Blueprint to Management Group

Azure Blueprints is a feature that allows you to define a package of artifacts (resource groups, Azure policies, role assignments, Resource Manager templates, and more) targeted to management groups and Azure subscriptions to create consistent and repeatable environments.

What is an Azure Blueprint?

Azure Policy Management Group

  • Management groups are a convenient place for defining Azure ARM policies.
  • You can use Azure Policy, the service that allows you to create, assign, and manage policies, at the Management Group scope.
  • All these policies are then applied to all resources under the management group.
  • This allows your organization to set up Azure environments with consistent security policies.
  • ARM policy violations can be seen either in the Azure Policy blade or in Azure Security Center.

Easing Subscription Management

This blog has explained the Azure Management Group concept and how you can use this service to ease subscription management. Other services can be used at the Management Group level but will be applied or inherited at the subscription level. It is suggested to use Management Groups when the number of subscriptions, departments, applications, and users grows.

Whether you have an Enterprise Agreement, Cloud Solution Partner, Pay-As-You-Go, or any other type of subscription, this service gives all Azure customers enterprise-grade management at a large scale for no additional cost.


Microsoft Azure Government DC is a group created for anyone in the IT world who is modernizing government, bringing real-world lessons to innovators in government. AIS has supported and presented at these events since there were just five members. Now, the group is nearing 4,000. The July meetup focused on getting your agency to next-level cloud adoption with Microsoft Azure. Check out the recording and overview below.

Here’s What You Missed

Cloud adoption has come a long way over the years. We have gone from a basic “lift and shift” model to migrating priority workloads to the cloud and optimizing both for high-security workloads and for tapping into cloud-native services. If one thing is clear, hybrid capabilities are critical. It is important that we start thinking about the challenges as we move legacy IT infrastructure to the cloud. Two Microsoft Federal CTOs, Susie Adams (Sales) and Kent Cunningham (Civilian), talked about changes Microsoft has been making to simplify the migration process. AIS Chief Technology Officer (CTO) Vishwas Lele moderated a panel discussion with our customer, Richard Inzunza from Immigration and Customs Enforcement (ICE), who provided excellent insights around cloud adoption and his experience. The panel also included Jorge Gallardo, Program Manager from Peraton, who discussed his experience with regulated environments in the cloud.

Watch the Full Session:

Session Recap

Challenges when Migrating Workloads

Organizations can take their cloud adoption to the next level with Microsoft Azure when moving priority workloads to the cloud.
Microsoft breaks out the following areas based on maturity:

  • Identity and Security
  • DevOps and DevSecOps
  • Data
  • Cloud-Native
  • The Edge

When organizations migrate their workloads to the cloud but have a diverse IT estate, it poses challenges in managing security, controlling access to their data, and understanding where that data lives. As a result, Microsoft has brought together the tools and resources that customers need to easily manage their workloads and simplify the migration process in a multi-cloud world.

Many customers come with questions on how to implement and manage Infrastructure as a Service (IaaS) while meeting regulatory requirements like FedRAMP and HIPAA. Microsoft has a portal full of Azure Blueprints that allow organizations to select and deploy the chosen environment inside of their subscription. The goal? To simplify the deployment of these methods with peace of mind that they align with regulatory and compliance standards.

What Tools Can We Use to Simplify?

Mission Landing Zones are highly opinionated templates that IT oversight organizations can configure or customize to quickly create a cloud management system, which is then deployed on Azure environments for their teams. Based on Terraform, they provide a secure, scalable landing zone for customer workloads. Mission Landing Zones enable expedited cloud adoption across Commercial, IL4, IL5, and IL6.

Customers’ environments are becoming increasingly complex, and Azure is helping organizations securely and efficiently move to the cloud by creating a single management experience for the entire environment. Microsoft recognizes that companies are struggling with multiple different environments and is focusing on providing them with granular access.

Azure Hybrid is a cost savings benefit that lets you bring your existing on-premises Windows Server and SQL Server licenses with active Software Assurance or subscriptions to Azure.

Azure Hybrid consists of the following:

  • Azure Stack: A portfolio of products that extend Azure services and capabilities to your environment of choice – from datacenter to edge locations and remote offices.
  • Azure Arc: Enables you to manage your entire environment, with a single pane of glass, by projecting your existing non-Azure, on-premises, or other cloud resources into Azure Resource Manager.
  • Azure Internet of Things (IoT): A collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets hosted in the cloud.
  • Azure Lighthouse: Facilitates a secure relationship between companies and their managed service providers while providing on-demand transparency into vendor access and actions.

Implementing Capabilities

With the release of the Cybersecurity Executive Order in May 2021, Microsoft is developing new ways to support and deploy these capabilities while meeting security and compliance standards.

  • Enable security modernization: Help empower security teams to combat the increase in sophisticated attacks.
  • Increase collaboration: Improve information sharing and security practices across public and private sector organizations.
  • Deliver trusted and secure services: Build trust in government by securing information, improving software supply chain, and facilitating compliance.

Zero Trust architecture is based on the principle: never trust, always verify. This security approach protects customers by managing and granting access based on continual verification of identities, devices, and services. Zero Trust architecture addresses the challenges modern enterprises face. Microsoft Threat Protection, powered by Azure, is a comprehensive and seamlessly integrated solution that provides end-to-end security for your organization using tools like Azure Sentinel and M365 functionality. Learn more at https://aka.ms/cybereo.

Panel Discussion

A panel discussion was led by AIS CTO, Vishwas Lele. We were honored to have an AIS customer, Richard Inzunza, IT Specialist from the Department of Homeland Security speak on the panel. He was joined by Jorge Gallardo, Program Manager from Peraton.

Richard has been with the Federal Government for 36 years and with ICE since its inception in 2003. He has been a key player in the implementation of their hybrid cloud environment. ICE is in the process of building, extending, and expanding their ability to use cloud services from more than one service provider. AWS (Amazon Web Services) and Microsoft Azure are their biggest providers today, but their focus is to be able to take any valuable cloud service and integrate it into the ICE cloud to pass these capabilities onto their users and employees.

Common Challenges

There are several challenges Richard and ICE face in their line of work. Physical servers are no longer the main place data is stored, and helping customers understand the virtual aspect and how data is managed has been a challenge. Getting development teams, IT PMs, and other support teams to understand how to apply the concepts of virtualization is extremely important for future development.

Many developers want to provision a capability without a true understanding of how this can open ICE to vulnerabilities. To address this ongoing challenge, they are helping their teams understand the responsibility level around cost and actions taken when provisioning new capabilities. Creating a vehicle that is compliant and future-proof is imperative for federal organizations to adapt and free time up for other key focuses. ICE’s goal is to get their teams to automate the delivery of their releases for their custom and third-party applications using pipelines.

Adjusting to a new virtual culture and applying security to a specific type of environment is a challenge that the assurance side of government IT is facing. ICE partnered with Peraton early on to align the implementation phase to begin their journey to the cloud. With this joint effort, three years later, ICE security teams are becoming more familiar with virtual environments at the beginning phase.

Ensuring Compliance

Policy Compliance & Security Compliance are a few types that ICE operates within. With their Policy, tagging is a method that ICE uses along with serverless LAMDA scripts to enforce compliance. They also have databases that store the tag values for valid metadata that correlates with an infrastructure or application. Ensuring that type of policy compliance helps at the management and administration level to understand the information they pull is accurate and helpful in many ways.

Security compliance is now managed with advanced scanning tools and various checks that detect when a policy has been adjusted. With accurate scanning, Richard is notified when policies have been adjusted and can reach out to the appropriate network to validate the change.

AIS: Your Trusted Azure Partner

We help government organizations create cohesive multi-cloud strategies, bringing the expertise you require for successful cloud adoption, modernization, data intelligence, and beyond. At AIS, we help you take a step back and look at the requirements needed and what services can be used with Azure or other tools to meet needs, offering templates and support for documentation. Our scalable cloud solutions address our clients’ most complex concerns, with deep experience across national security and military operations, as well as Federal, State, and Local Governments and their supporting agencies. We have been working with Azure for 12+ years and will have you well on your way to realizing the best that the cloud can offer.

Join us for future virtual meetings at the Microsoft Azure Government User Community: https://www.meetup.com/dcazuregov/.