Summary

To break this series into more manageable chunks, this installment focuses on setting up the API in Azure API Management (APIM) and the custom connector. If you missed the first installment, where we set up our On-Premises Data Gateway and Azure Logic App, please check it out here.

Scenario

The company has an on-premises SQL Server database that contains customer data that needs to be available to various apps built on the Power Platform and/or Azure.

Assumptions

  1. That you have a sample of the data your Logic App returns. For this write-up, we’ll use the data returned from the Logic App we created in our previous lab.

High-Level Steps

  1. Create an API in Azure API Management (APIM) that provides access to the Logic App and can be called from various Power Apps and Power Automate Flows
  2. Create a Custom Connector

Azure API Management

In this section, we’ll walk through setting up an API to the Logic App we created in our prior installment.

Why use Azure API Management (APIM)?

Azure APIM provides a single place for managing your company’s APIs. It can be used to selectively expose data and services to employees and partners by applying authentication and even usage limits.

Create the API

  1. Create an Azure API Management service instance in Azure; you can follow the steps to do that here. In the Azure search bar, type API and select API Management services. Provide the necessary information and click the Create button. Once complete, your API Management service should show as ‘on-line’.
  2. Click to open your new APIM service. Select APIs from the blade that opens, and either select the Logic App tile or search the APIs for Logic App and select it.
  3. Assuming you created the Logic App in the first installment of this series, select that Logic App from the “Select Logic App to import” blade that opens to the right.
  4. When the import completes, you should notice an operation called manual-invoke that was created automatically. Select that operation and click the pencil (edit) button in the Frontend section.
  5. Provide a meaningful display name if you’d like. Change the URL to a GET with “/onpremcustomers” as the resource path.
  6. On the Query tab at the bottom, add the CustomerId (integer) query parameter.
  7. On the Responses tab, select the 200 OK response that should already be there. Add the details for an application/json representation using the sample data output from the Logic App created in the previous exercise (a sample payload is shown just after this list). I also provided a definition named “customerresponse”.
  8. On the Settings tab, add an API URL suffix, select a Product, and click Save. Make a note of the Product you select; I am using “Unlimited” in my example. Later, when providing an API key for the custom connector, you’ll need to remember which Product your API is using.
  9. Now we’re ready to test this out. On the Test tab, enter a value for the CustomerId parameter and click Send. If all goes well, you should get a 200 response back with, hopefully, some data.
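The exact shape of that response (and of the sample you paste into the 200 OK representation in step 7) depends entirely on your Logic App. A minimal, hypothetical payload might look like this; the field names are placeholders, not the output of the lab’s Logic App:

```json
[
  {
    "CustomerId": 1,
    "FirstName": "Jane",
    "LastName": "Doe",
    "Email": "jane.doe@example.com"
  }
]
```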

Power Apps Custom Connector

  1. From the Azure API Management Overview blade, open the Developer portal (legacy). At the time of this writing, the new Developer portal was too buggy to use.
  2. Click on the API we just created.
  3. Using the API definition button, download (or copy) and save the OpenAPI 2 (JSON) file.
  4. In Power Apps, go to Data > Custom Connectors > New custom connector > Import an OpenAPI file. Import the JSON file you downloaded in the previous step, provide a name for your connector, and click Continue.
  5. Click through the General and Security tabs and make sure they match what you’re expecting; the Security tab should show API Key authentication (see the sketch after this list).
  6. On the Definition tab, if you see a parameter in the Body area, you can delete it. Then click Create connector at the top.
  7. On the Test tab, you’ll first need to create a new Connection. You will need the API key for the Product subscription you used in APIM; in my case, I used “Unlimited”. (Click the ellipses to the right of the subscription and select Hide/Show Keys to see the key and copy it.)
  8. Navigate back to the Custom Connector – Test tab and test out your Custom Connector.
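For reference on steps 3 and 5, the file you download is a standard OpenAPI 2.0 (Swagger) document. A heavily trimmed, hypothetical sketch is shown below; the host, basePath, and operation details will reflect your own APIM instance, but the securityDefinitions section is what makes the connector’s Security tab show API Key authentication (APIM’s subscription key travels in the Ocp-Apim-Subscription-Key header):

```json
{
  "swagger": "2.0",
  "info": { "title": "onprem-customers", "version": "1.0" },
  "host": "your-apim-instance.azure-api.net",
  "basePath": "/your-api-url-suffix",
  "schemes": [ "https" ],
  "securityDefinitions": {
    "apiKeyHeader": {
      "type": "apiKey",
      "name": "Ocp-Apim-Subscription-Key",
      "in": "header"
    }
  },
  "security": [ { "apiKeyHeader": [] } ],
  "paths": {
    "/onpremcustomers": {
      "get": {
        "operationId": "manual-invoke",
        "parameters": [
          { "name": "CustomerId", "in": "query", "type": "integer" }
        ],
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```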

Conclusion

I hope this was helpful in demonstrating just how quickly you can better secure your Azure Logic Apps with Azure API Management, and how quickly you can spin up a custom connector that can then be distributed and used by other apps and Flows across your organization to reach the Logic App through the new API.

Leveraging the Power Platform and Microsoft Azure to connect housing agencies and intermediaries with the Department of Housing and Urban Development (HUD)

Intro

The US Department of Housing and Urban Development runs a program known as the Housing Counseling Program that aids housing agencies around the nation through grants, training, and other support. These agencies provide tremendous benefit to Americans in many forms, including credit counseling, foreclosure prevention, predatory lending awareness, homelessness services, and more. One of the requirements for approval into the program is that the agency uses a software system that is also approved by HUD and interfaces with HUD’s Agency Reporting Module SOAP web service.

Original System

The original system is built on Dynamics 365 Online using custom entities, web resources, and plug-ins. Complex batching functionality was implemented to deal with the two-minute timeout period that plug-ins have, web resource files (JavaScript, HTML, and CSS) were stuck in a solution with no real structure, and application lifecycle management (ALM) was extremely painful. The original system design also doesn’t account for the intermediary-to-agency relationship, specifically the roll-up reporting and auditing that intermediaries desperately need for providing their compiled agency data to HUD, as well as their own internal reporting. Finally, because the solution was built using Dynamics 365 Online Customer Engagement, licensing costs were more than double what we knew they could be with Microsoft’s new Power Apps licenses and Azure pay-as-you-go subscriptions.

The Original Diagram

Figure 1 – current CMS system built on Dynamics 365 Online

Definition

Intermediaries – organizations that operate a network of housing counseling agencies, providing critical support services to said agencies including training, pass-through funding, and technical assistance.

Modern System

Whereas the data schema for the solution remains mostly unchanged, the architecture of the system has changed profoundly. Let’s check out some of the bigger changes.

Power Apps Component Framework (PCF)

The Power Apps Component Framework enables developers to create code components that can run on model-driven and canvas apps and can be used on forms, dashboards, and views. Unlike traditional web resources, PCF controls are rendered in the same context and at the same time as any other component on a form. A major draw for developers is that PCF controls are created using TypeScript and tools like Visual Studio and Visual Studio Code. In the modern solution, web resources are replaced with PCF controls that make calls directly to the Power Apps Web API.

Azure Resources (API Management, Function Apps, Logic Apps, and a Custom Connector)

A Custom Connector and Azure API Management are used to manage and secure various APIs that are used in the system. The connector can be used in any Azure Logic App to make connecting to an API much easier than having to configure HTTP actions. All of the business logic that once lived in plug-ins has been replaced with a combination of Azure Logic Apps and Azure Function Apps. This has provided incredible gains where development and testing are concerned, but it also provides a more secure and scalable model to support these agencies as they grow. Lastly, it removes the burden experienced with the two-minute time-out limitation that plug-ins have.

ALM with Azure DevOps Services and Power Apps Build Tools

Azure DevOps Services, coupled with the Power Apps Build Tools, are being used to ease the pain we experienced with ALM prior to these tools being available. Now we can easily move our Power App solutions across environments (e.g. dev, test, production) and ensure our latest solution code and configurations are always captured in source control.


Figure 2 – modern solution using Power Apps and Azure resources for extreme scalability and maintainability

Next Steps

I hope by demonstrating this use case you’re inspired to contact your Microsoft partner, or better yet contact us and let us work with you to identify your organization’s workloads and technology modernization opportunities.

Ready to dig a little deeper into the technologies used in this blog post? Below we have some hands-on labs for you. Stay tuned for more updates!

  1. Power Apps Component Framework Controls
    As a developer, PCF controls have been one of the most exciting things to grace the Power Platform landscape. In this video, I’ll show you how to go from zero to a simple PCF control that calls the Power Platform Web API. I also provide a Pro Tip or two on how I organize my PCF controls in my development environment.

  2. Using an Azure Logic App to Query Data in an On-Premises SQL Server Database
    Companies are embracing modernization efforts across the organization, quickly replacing legacy apps with Power Apps and Azure resources. This can’t all happen at once, though, and oftentimes companies either can’t or won’t move certain data to the cloud. In this hands-on lab, I’ll walk you through using an Azure Logic App and an On-Premises Data Gateway to connect to a local SQL Server 2017 database to query customer data using an HTTP request and response.
  3. Azure API Management and Custom Connectors
    APIs give you the ability to better secure and manage your application interfaces. Couple an API with a Custom Connector and you not only better secure and manage your back-end resources, but also make it super easy for other app developers across the organization to connect to them through the API, without having to worry about all of the API configuration.
  4. Power Apps DevOps Using Best Practices
    There’s more to DevOps than just backing up your Power App solutions in source control and automating your build and release pipeline. Good solution management and best practices around publishers, entity metadata, components, and subcomponents are also key to manageability and scalability with your ALM.

In this blog post, I would like to introduce and explain Azure’s new service offering for Application Gateway Ingress Controller (AGIC) for Azure Kubernetes Service (AKS). AKS is gaining momentum within the enterprise, and Microsoft has been refining the offering to drive adoption. This feature piques my interest because of its capability to improve latency and allow for better application performance within AKS clusters. I will first discuss the basics of the Application Gateway Ingress Controller. Then, I will dive into the underlying architecture and network components that can provide performance benefits. In addition, I will explain the improvements and differences this offers over the existing in-cluster ingress controller for AKS. Finally, I will discuss the new Application Gateway features that Microsoft is developing to refine the service even further.

By definition, the AGIC is a Kubernetes application that makes it possible to use Azure’s L7 Application Gateway as the load balancer in front of your AKS cluster, leveraging features such as:

  • URL routing
  • Cookie-based affinity
  • SSL termination or end-to-end SSL
  • Support for public, private, hybrid web sites
  • Integrated Web Application Firewall (WAF)

The controller runs in its own pod on your AKS cluster. It monitors the cluster and communicates routing changes through Azure Resource Manager to the Application Gateway, which allows for uninterrupted service to the AKS cluster no matter the changes. Below is a high-level design of how the AGIC is deployed along with the Application Gateway and AKS, including the Kubernetes API server.

Figure: the AGIC deployed alongside the Application Gateway and the AKS cluster, including the Kubernetes API server

Next, let’s take a closer look at the AGIC and break down how it differs from the existing in-cluster ingress controller. The image below shows some distinct differences between the two. The in-cluster load balancer performs all the data path operations using the AKS cluster’s compute resources, and because of this it competes for the same resources as the applications in the cluster.

Figure: the AGIC compared with the in-cluster ingress controller

In terms of networking features, there are two different models you can leverage when configuring your AKS cluster: Kubenet and Azure CNI.

Kubenet

Kubenet is the default configuration for AKS cluster creation: each node receives an IP address from the Azure virtual network subnet, while pods get IP addresses from a logically different address space. Network Address Translation (NAT) is then used to allow the pods to communicate with other resources in the virtual network. If you want to dive deeper into Kubenet, this document is very helpful. Overall, these are the high-level characteristics of Kubenet:

  • Conserves IP address space.
  • Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
  • You must manually manage and maintain user-defined routes (UDRs).
  • Maximum of 400 nodes per cluster.

Azure Container Networking Interface (ACNI)

The other networking model uses the Azure Container Networking Interface (ACNI), and this is how the AGIC works. Every pod gets an IP address from its own private subnet and can be accessed directly, so these IP addresses need to be unique across your network space and should be planned in advance. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved up front for that node. For example, three nodes configured for 30 pods each would reserve roughly 90 pod IP addresses before a single pod is scheduled. This approach requires more planning, as it can otherwise lead to IP address exhaustion or the need to rebuild the cluster in a larger subnet as your application demands grow.


The Azure documentation also outlines the trade-offs of Azure CNI: pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks, but the model requires more IP address space.

Performance

Because of the ACNI networking model, the Application Gateway has direct access to the AKS pods. According to the Microsoft documentation, this allows the AGIC to achieve up to 50 percent lower network latency than in-cluster ingress controllers. Application Gateway is also a managed service backed by Azure virtual machine scale sets, so instead of using the AKS cluster’s compute resources for data processing the way the in-cluster ingress controller does, it leverages the Azure backbone for things like autoscaling at peak times and does not add to the compute load of your AKS cluster, which could otherwise impact the performance of your application. Microsoft compared the performance of the two services by setting up a web app running on 3 nodes with 22 pods per node for each service and generating traffic against the web apps. In their findings, under heavy load, the in-cluster ingress controller had approximately 48 percent higher network latency per request compared to the AGIC.

Conclusion

Overall, the AGIC offers a lot of benefits for businesses and organizations leveraging AKS, from improving network performance to handling on-the-fly changes to your cluster configuration as they are made. Microsoft is also building on this offering by adding more features to the service, such as using certificates stored on Application Gateway, mutual TLS authentication, gRPC, and HTTP/2. I hope you found this brief post on the Application Gateway Ingress Controller helpful and useful for future clients. If you want to see Microsoft’s whole article on AGIC, click here. You can also check out the Microsoft Ignite video explaining more on the advancements of Application Gateway.

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for how to anonymize or customize the data that is being sent to Application Insights. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series.

Two things to consider with telemetry:

  1. Adding additional telemetry that is not recorded by Application Insights.
  2. Removing or anonymizing data that is already being sent to Application Insights.

There will be a later post in this series discussing how to add new telemetry; this post focuses on anonymizing or removing data.

Personally Identifiable Information (PII) Already in Application Insights Account

We’ll start with a likely scenario: during an audit or during testing, you discovered that you are logging PII to your Application Insights account. How can you go about fixing that? The short answer is to delete the entire Application Insights resource. That means you’ll lose access to all historical telemetry that was in the account, and your production system will no longer be logging telemetry anywhere unless you create a new account and update your production system with the new instrumentation key. However, this does solve your immediate problem of retaining PII. See the Microsoft documentation for details on what is captured by Application Insights and how it’s transmitted and stored.

Application Insights does provide a PURGE endpoint, but purge requests are not processed quickly, the endpoint isn’t well documented, and it will not properly update metrics to account for the data that was purged. In general, if you have a compliance concern, delete the Application Insights account. Remember, Application Insights is designed to be a highly available, high-performance telemetry platform, which means it is designed as an append-only system. The best solution is simply not to send data to the platform that you shouldn’t.

API Use Case

Think of an API your business may have built. This API allows us to search for customers by e-mail to find their customer id. Once we have the customer id, we can make updates to their record, such as their first name, birthday, etc. By default, Application Insights records the request URL and the response code; it does NOT record any HTTP headers, the request body, or the response body. First, let’s think of how we might design the search endpoint. We have two approaches:

  1. GET /api/customer/search?emailAddress=test@appliedis.com
  2. POST /api/customer/search
    a. In the request body, send JSON:
    { "emailAddress": "test@appliedis.com" }

If we design the API using approach #1, we will be sending PII to Application Insights by default. However, if we design the API using approach #2, no PII will be sent to Application Insights by default. Always try to keep your URLs free of any PII.
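To make this concrete, here is a minimal sketch of what the approach #2 search endpoint might look like in ASP.NET Core (the class and method names are illustrative assumptions, not code from the original post); the e-mail address travels in the request body, so the URL recorded by Application Insights contains no PII:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

public class CustomerSearchRequest
{
    public string EmailAddress { get; set; }
}

[ApiController]
[Route("api/customer")]
public class CustomerController : ControllerBase
{
    // Approach #2: the e-mail address is posted in the body, keeping the request URL free of PII.
    [HttpPost("search")]
    public IActionResult Search([FromBody] CustomerSearchRequest request)
    {
        Guid? customerId = FindCustomerIdByEmail(request.EmailAddress);

        if (customerId == null)
        {
            return NotFound();
        }

        return Ok(new { customerId });
    }

    // Placeholder for the real data access; not part of the original post.
    private Guid? FindCustomerIdByEmail(string emailAddress) => null;
}
```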

That may not always be possible, so let’s look at another use case. Let’s say we have the primary key of a customer record and we want to view and make edits to that record. What would the endpoints look like?

  1. GET /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6
  2. PUT /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6

Now, depending on your regulatory environment, logging these URLs to Application Insights might present a problem. Notice we are not logging e-mail addresses, phone numbers, or names, but we are still logging behavior about an individual: when the site was accessed and when their profile was updated. To avoid this, we would like to anonymize the URL data that is being sent to Application Insights.

Anonymize Data Sent to Application Insights

This section assumes you are using ASP.NET Core and have already configured Application Insights; see my previous blog post for details. Also, if you need to troubleshoot your configuration or need to verify it’s working as expected, please see my other blog post for details.

The Application Insights NuGet package provides an interface for exactly this purpose, called ITelemetryProcessor. You simply need to implement it, providing the Process method. Telemetry processors act much like ASP.NET Core middleware in that they form a chain: your implementation must provide a constructor that accepts the ITelemetryProcessor that is next in the chain, and in your Process method you are responsible for calling that next processor. The last processors in the chain are the ones provided by the NuGet package itself; they implement the same interface and send the telemetry over the wire to the actual Application Insights service in Azure.

The Process method you are required to implement receives a single argument of type ITelemetry, which you can cast to one of its subclasses, e.g. DependencyTelemetry, RequestTelemetry, etc. Inside Process, you can mutate the telemetry in whatever way you need to, e.g. to anonymize data, and then call the Process method on the next telemetry processor in the chain (the one that was provided to the constructor of your class). If you want a given telemetry item to never be sent to Application Insights, simply omit the call to the Process method of the next telemetry processor in the chain.

Now we will look at the source code for a processor that does what we are proposing: it finds anything that resembles a customer id in the URL of a RequestTelemetry item and replaces it with the word “customerid”.
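A minimal sketch of such a processor, assuming customer ids are GUIDs, might look like the following (the class name and regular expression are illustrative assumptions, not the post’s original code):

```csharp
using System;
using System.Text.RegularExpressions;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class CustomerIdAnonymizerTelemetryProcessor : ITelemetryProcessor
{
    // Assumes customer ids are GUIDs; adjust the pattern to match your own id format.
    private static readonly Regex CustomerIdPattern = new Regex(
        "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}",
        RegexOptions.Compiled);

    private readonly ITelemetryProcessor _next;

    // The Application Insights SDK supplies the next processor in the chain.
    public CustomerIdAnonymizerTelemetryProcessor(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Only request telemetry is of interest; all other telemetry types pass through untouched.
        if (item is RequestTelemetry request)
        {
            if (!string.IsNullOrEmpty(request.Name))
            {
                request.Name = CustomerIdPattern.Replace(request.Name, "customerid");
            }

            if (request.Url != null)
            {
                request.Url = new Uri(CustomerIdPattern.Replace(request.Url.ToString(), "customerid"));
            }
        }

        // Always call the next processor so the (now scrubbed) item is still sent to Application Insights.
        _next.Process(item);
    }
}
```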

As seen above, in the constructor we receive the next telemetry processor in the chain. In the Process method, we check whether we have a RequestTelemetry item, ignoring all other telemetry types like TraceTelemetry or DependencyTelemetry. The RequestTelemetry has .Name and .Url properties, both of which might contain the URL details that hold our PII. We use a regular expression to see if either contains a customer id and, if so, replace it with the word “customerid”. Then we always call the next telemetry processor in the chain so that the modified telemetry is still sent to Application Insights.
Remember, just having this ITelemetryProcessor coded won’t do anything; we still need to register it with the Application Insights library. In Startup.cs, you will need to add a line to the ConfigureServices method, as shown below:

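Assuming the hypothetical class name from the sketch above, the registration is a single call to the AddApplicationInsightsTelemetryProcessor extension method:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Insert the custom processor into the telemetry processor chain.
    services.AddApplicationInsightsTelemetryProcessor<CustomerIdAnonymizerTelemetryProcessor>();
}
```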

Now that you’ve registered the processor, it’s just a matter of deploying the code. If you wanted to debug this locally while still sending telemetry live to Application Insights in Azure, please see my blog post with details on how to do that.

Summary

You learned about the ITelemetryProcessor extension point in the Application Insights NuGet package and how to use that extension point to prevent PII from being logged to your Application Insights account. We’ve discussed how to design your API endpoints so that, by default, you hopefully don’t need to customize your telemetry configuration. Lastly, I have shown you how to delete PII that may have been logged accidentally to your production Application Insights account. You may also want to take advantage of a relatively new feature of Application Insights to set the retention period of logs. Previously it was fixed at 90 days, but now you can configure it to as low as 30 days. This may be useful for handling right-to-be-forgotten requests if your logging system is storing PII and you need to ensure it is removed within 30 days of the request. Next in this blog series, we will discuss logging additional telemetry to create an overall better picture of your system’s health and performance.

As developers, we spend a lot of time developing APIs. Sometimes it’s to expose data that we’ve transformed or to ingest data from other sources. Not coincidentally, more and more companies are jumping into the realm of API Management: Microsoft, Google, MuleSoft, and Kong all have products that provide this functionality. With this much investment from the big players in the tech industry, API management is obviously a priority. Now, why would anyone want to use an API Management tool?

The answer is simple: It allows you to create an API Gateway that you can load all your APIs into, providing a single source to query and curate. API Management makes life as an admin, a developer, and a consumer easier by providing everything for you in one package.

Azure API Management

What does Azure API Management provide? Azure API Management (APIM) is a cloud-based PaaS offering available in both commercial Azure and Azure Government. APIM provides a one-stop shop for API authority, with the ability to create products, enforce policies, and utilize a robust developer portal.

Not only can API Management integrate seamlessly with your existing Azure infrastructure, but it can also manage APIs that exist on-prem and in other clouds. APIM is also available in both the IL4 and IL5 environments in Azure Government, which allows for extensibility and management for those working in the public sector.

APIM leverages a few key concepts to provide its functionality to you as a developer, including:

  • Products
  • Policies
  • Developer Portal

From providing security to leveraging rate-limiting and abstraction, Azure API Management does it all for API consolidation and governance in Azure. Any API can be ingested, and it gets even easier when APIs follow the OpenAPI Format.

What Are Products?

Products are a layer of abstraction provided inside APIM. Products allow you to create subsets of the APIs already ingested into the solution, so a single API can be shared across products while the use of each individual collection is restricted separately. This level of compartmentalization allows you not only to separate your APIs into logical buckets but also to enforce rules on these products separately, providing one more layer of control.

Product access is very similar to Azure RBAC, with different groups created inside of the APIM instance. These groups are yet another way for APIM admins to encapsulate and protect their APIs, allowing them to add users already associated with the APIM instance into separate subsets. Users can also be members of multiple groups, so admins can make sure the right people have access to the right APIs stored in their APIM instance.

What Are Policies?

Policies are APIM’s way of enforcing certain restrictions and providing a more granular level of control. There is an entire breadth of policies available in APIM, ranging from simply disallowing usage of an API after it has been called five times, to authentication, logging, caching, and transformation of requests or responses from JSON to XML and vice versa. Policies are perhaps the most powerful function of APIM and drive the control that everyone wants and needs. Policies are written in XML and can be easily edited within the APIM XML editor. Policies can also leverage C# 7 syntax, which brings the power of the .NET Framework to your APIM governance.
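As a small illustration, here is a hedged sketch of a policy document that applies a rate limit and an XML-to-JSON transformation (the rate-limit and xml-to-json policies are built in; the specific values are arbitrary examples, not recommendations):

```xml
<policies>
    <inbound>
        <base />
        <!-- Reject further calls once a subscription has made 5 calls in a 60-second window -->
        <rate-limit calls="5" renewal-period="60" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <!-- Convert the backend's XML response into JSON -->
        <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```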

What Is the Developer Portal?

The Azure API Management Developer Portal is an improved version of the Swagger documentation that’s generated when you use the OpenAPI spec. The Developer Portal provides an area for developers to readily see APIs, products, and associated applications. The Portal also provides sample request bodies (no more guessing API request structures!) and responses, along with code samples in many different languages.

Finally, the portal also allows you to try API calls with customized request bodies and headers, so you have the ability to see exactly what kind of call you want to make. Along with all that functionality, you can also download your own copy of the OpenAPI Spec for your API after it’s been ingested into your instance.

Why Should I Use APIM?

Every business should be using some form of API Management. You’ll be providing yourself a level of control previously not available. By deploying an API Gateway, that extra layer of abstraction allows for much tighter control of your APIs. Once an API has been ingested, APIM provides many additional functionalities.

First, you can match APIs to products, providing a greater level of compartmentalization. Second, you can add different groups to each product, with groups being subsets of users (e.g. back-end devs, billing devs, etc.). Third, you automatically generate a robust developer portal, which provides all of the functionality of the Swagger portal but with added features, such as code snippets. Finally, APIM also has complete integration with Application Insights in commercial Azure, providing access to a world-class logging and visualization tool.

Azure API Management brings power to the user, and no API should be left out.

If you want to skip reading the text that follows and simply want to download Visual Studio Code Snippets for Azure API Management policies, click here.

Azure API Management gives you a framework for publishing your APIs in a consistent manner with built-in benefits like developer engagement, business insights, analytics, security, and protection. However, the most powerful capability it offers is the ability to customize the behavior of the API itself. Think of the customization as a short program that gets executed just before or after your API is invoked. The short program is simply a collection of statements (called policies in Azure API Management). Examples of policies that come out of the box include format conversion from XML to JSON, applying rate and quota limits, and enforcing IP filtering. In addition, you have control flow policies such as choose, which is similar to an if-then-else or switch construct, and set-variable, which allows you to declare a context variable. Finally, you have the ability to write C# (6.0) expressions. Each expression has access to the context variable and is allowed to leverage a subset of .NET Framework types. As you can see, Azure API Management policies offer constructs equivalent to a programming language.
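For example, a hedged sketch of set-variable and choose working together with a C# expression might look like the following (the variable and header names are made up for illustration):

```xml
<inbound>
    <base />
    <!-- set-variable declares a context variable; its value is a C# expression -->
    <set-variable name="isMobile"
        value="@(context.Request.Headers.GetValueOrDefault("User-Agent", "").Contains("Mobile"))" />
    <!-- choose behaves like an if-then-else or switch construct -->
    <choose>
        <when condition="@(context.Variables.GetValueOrDefault<bool>("isMobile"))">
            <set-header name="X-Client-Type" exists-action="override">
                <value>mobile</value>
            </set-header>
        </when>
        <otherwise>
            <set-header name="X-Client-Type" exists-action="override">
                <value>desktop</value>
            </set-header>
        </otherwise>
    </choose>
</inbound>
```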

This begs the question, how do you author Azure API Management policies?

Well, today you have two options. Read More…


AIS’ CTO and noted Azure expert Vishwas Lele has a brand new course available at Pluralsight: Microsoft Azure API Management Essentials.

Application Programming Interfaces (APIs) are increasingly considered “engines of growth” and are already fundamentally changing the way organizations do business. In this three-part course, you’ll learn about how the Azure API Management service can ensure that your current and future APIs reach their fullest potential.

First, you’ll learn why you should use API Management, and how to manage your API with the Azure service. Next, you’ll learn how to protect your API with rate limits, plus how to add caching to improve API performance. Then, you’ll learn about policies in API Management and C#-based Policy Expressions. Finally, you’ll learn about security in API Management.

By the end of this course, you’ll have a solid understanding of Azure API Management, its key capabilities, and how to host and secure your API (both internal and external). Get started right now!