As application developers, it’s our responsibility to ensure that the applications we create are using credentials and other secret configuration values in a secure way. Oftentimes, this task is overlooked in the pursuit of our primary concern: building new features and delivering business value quickly. In some cases, this translates into developers tolerating flat-out unsafe practices in the name of convenience, such as hardcoding secrets into the application source code or sharing secrets with team members via insecure communication channels and storing them on their development machines.

Fortunately, Microsoft provides a solution to this problem that should be attractive to both security experts and developers, known as “managed identities for Azure resources” (formerly “Managed Service Identities”). The idea is pretty simple: associate an Azure AD security principal* with your ASP.NET Core web app and let it use this ‘identity’ to authenticate to Azure Key Vault and pull secrets into memory at runtime. Microsoft provides the glue to make all of this easy for developers: on the programming side, they provide a simple library for your ASP.NET Core app to pull the secrets from Key Vault (as demonstrated here), and on the hosting side they implement the mechanisms that make the identity available to the app’s runtime via first-class support in Azure hosting environments and local development tools.
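To make the idea concrete, here is a minimal sketch of wiring the Key Vault configuration provider into an ASP.NET Core app, assuming the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets NuGet packages; the vault URI is a placeholder for your own Key Vault:

```csharp
using System;
using Azure.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // DefaultAzureCredential resolves to the app's managed identity
                // when hosted in Azure, and to your Visual Studio or Azure CLI
                // login when running locally.
                config.AddAzureKeyVault(
                    new Uri("https://my-app-vault.vault.azure.net/"),
                    new DefaultAzureCredential());
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```

With this in place, secrets stored in the vault show up alongside the rest of your configuration values, and nothing secret ships with the app.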

* For those unfamiliar, a security principal is any kind of digital ‘identity’ that can be authenticated and configured with permissions that authorize it to access Azure resources. Examples include a user’s personal login, an AD group, or a service principal. Service principals are also known as App Registrations in Azure. They allow you to create a custom identity and credentials just for applications or other automated processes, so they can be granted access to the Azure resources they need to interact with.

So, what’s to be gained with this approach, and what are the tradeoffs? There are two audiences that have a stake in this:

  • The business stakeholders and security team that place a high priority on protecting applications and user data from exposure
  • The developers that just want to make sure they can stay productive and spend less time worrying about how configuration values are provided

I’ll address these groups and their distinct concerns separately.

The Security Perspective

There are numerous security benefits that come with this approach. Most critically, there are far fewer points of exposure for your secrets. The reliance on developers to do the right thing and manage secrets responsibly is almost entirely removed, to the point where developers would have to go out of their way to do the wrong thing. Another benefit is the administrative access control built into Key Vault, which makes it easy to manage who should and shouldn’t be able to run the app and access secrets.

We will start with how this approach limits the exposure of your secrets. Without managed identity and ASP.NET Core Key Vault configuration, you are directly responsible for making your secrets available to your app, whether it’s hosted or running locally. For a hosted app, for example one running in Azure App Service, this means configuring the PaaS App Settings or modifying the appsettings.json file that you deploy with your app binaries. The secrets must be put there by the process that regularly builds and deploys your application, and that process in turn needs to store and retrieve the secrets from somewhere, which could be Key Vault, a release variable, or some other data store, maybe even just a VM or user’s file system. Local development also expands the surface area of secret exposure. In the best case, you might pull secrets onto the developer’s machine using a script that stores them in the dev’s file system, but too often people will take the path of least resistance and send them to each other over email or chat, or, even worse, hardcode them into source control.

In a managed identity world, the app simply reaches out to Key Vault for these secrets at runtime. This trims out several problematic points of exposure:

  1. No more accessing these credentials from the deployment pipeline where they might accidentally get captured in logs and build artifacts, and where they may be visible to those with permission to manage deployments.
  2. If a person is tasked with running your deployment scripts directly (to be clear, not ideal), they wouldn’t need access to app secrets to do a code deployment.
  3. No more storing these credentials in persistent storage of the app runtime host, where they can be inspected by anyone with management access to the host.
  4. No more spreading secrets across developers’ local devices, and no more insecure transmission of secrets over channels such as email or chat. It also makes it easy to avoid bad habits like hardcoding secrets into the app and checking them into source control.

Another benefit of this approach is that it doesn’t rely so heavily on developers and operations folks being mindful and responsible about security. Not only can they avoid insecurely distributing secrets amongst teammates, but they also don’t have to worry about removing them from their local machines or VMs when they no longer need them, because the secrets are never stored. Of course, developers should always be mindful of and responsible for security, but realistically things don’t always work out that way. Developers frequently overlook security concerns while focusing on being productive, and people are often simply under-educated about security. Any opportunity to improve security via architecture and design, and to make humans less capable of doing the wrong thing, is a win.

Those with a focus on security will also appreciate the level of access control that is provided by Key Vault. Access to secrets is not managed via typical Azure RBAC (Role-Based Access Control). Instead, access policies are created to grant specific permissions for each user, service principal, or group. You can grant specific kinds of access, such as reading or editing/adding secrets. This lets Key Vault serve as a control center for deciding who should be allowed to run the app for a given environment. Adding a new team member or granting temporary access to debug a higher environment is as easy as adding the user to a Key Vault access policy that allows reading secrets only, and revoking access is as easy as removing them. See here for more info on securing access to Key Vault.
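Granting that kind of read-only access can be done with a single Azure CLI command; the vault name and user principal name below are hypothetical placeholders:

```shell
# Grant a developer read-only access to secrets in the vault.
az keyvault set-policy \
  --name my-app-vault \
  --upn dev.newhire@contoso.com \
  --secret-permissions get list
```

Revoking access later is the mirror image, via `az keyvault delete-policy`.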

The Developer Perspective

Developers may have concerns that a centralized configuration approach could slow things down, but let’s look at why that doesn’t have to be the case, and why this can even improve velocity in many cases. As we’ll see, this can make it super easy to onboard new team members, debug multiple environments, regenerate keys due to recycling or resource recreation, and implement a deployment process.

We will start with onboarding. With your app configured to use managed identity and Key Vault authentication, onboarding a new team member to run and debug the app locally simply involves adding them to an access policy that grants permission to read keys from the Key Vault. An even easier approach is to create an AD group for your developers and assign a single Key Vault access policy to the entire group. After that, they just need to log in to the subscription from their personal machine using Visual Studio or the Azure CLI. Visual Studio has this support integrated and will apply it when you start your app from there, and the Azure CLI extends this support to any other IDE that runs the app using the dotnet CLI, such as VS Code. Once they have been granted authorization and logged in, they can simply start the app, which will retrieve the secrets from Key Vault using their permissions. If this team member eventually leaves the team, their access can be revoked by removing their access policy. They will have nothing to clean up, because the secrets were never stored on their computers; they only lived in the app runtime’s memory.
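For the CLI route, the whole local setup amounts to two commands (assuming the app is already configured to use `DefaultAzureCredential` or an equivalent that picks up the CLI login):

```shell
# Log in once; apps started with the dotnet CLI can then
# authenticate to Key Vault as you.
az login

# Start the app; secrets are pulled from Key Vault at startup.
dotnet run
```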

Another benefit of centralizing secrets when using shared resources is in situations where secrets may often change. Particularly in a development environment, you may have good reason to delete resources and redeploy them, for example, to test an infrastructure deployment process. When you do this, secrets and connection strings for your resources will have changed. If every developer had their own copy of the secrets on their machine, this kind of change would break everyone’s local environment and disrupt their work until they acquired all the latest secrets. In a managed identity scenario, this kind of change is seamless. The same benefit applies when new resources are added to your infrastructure: dev team members don’t need to acquire the new connection secrets when they get the latest code that uses a new service; the app will just pull them from Key Vault.

Another time secrets may change is when they expire or when you intentionally rotate them for the sake of security. Using a key vault can make it significantly easier to implement a key rotation strategy. The Key Vault configuration provider can be configured to pull app secrets once at app start time (which is the default) or at a regular interval. Both can be part of a secret/key rotation strategy, but the first requires orchestrating an app restart after changing a secret, which isn’t necessary with the second approach. Implementing key rotation support in your app is fairly straightforward: most Azure resources provide two valid keys at a time to support rotation. You should store both keys for each service in Key Vault, but only use one of them in your app until it becomes invalid. Once your client hits an auth error, you should catch that exception, set the other key as the actively used key, and replay the request. With the interval-based approach, configure the Key Vault config provider to refresh on an interval, maybe 5 or 10 minutes, and then have an external process (Azure Automation Runbooks are a recommended solution for this) reset only one key at a time. If both keys are cycled at the same time, your app config won’t refresh fast enough to get the new keys and will start to fail. By rotating one at a time, you ensure at least one valid key is available to your app at any given time.
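The catch-and-failover step can be sketched as follows. The two key values would come from Key Vault via configuration; `sendAsync` stands in for whatever client call authenticates with the active key, and the exception type caught here is a placeholder for your client library’s actual auth-failure type:

```csharp
using System;
using System.Threading.Tasks;

public class TwoKeyClient
{
    private readonly string[] _keys;
    private int _active; // index of the key currently in use

    public TwoKeyClient(string primaryKey, string secondaryKey) =>
        _keys = new[] { primaryKey, secondaryKey };

    public async Task<string> SendWithFailoverAsync(Func<string, Task<string>> sendAsync)
    {
        try
        {
            return await sendAsync(_keys[_active]);
        }
        catch (UnauthorizedAccessException)
        {
            // The active key was rotated out: swap to the other key and replay.
            _active = 1 - _active;
            return await sendAsync(_keys[_active]);
        }
    }
}
```

For the interval-based refresh itself, the newer Azure.Extensions.AspNetCore.Configuration.Secrets provider exposes a `ReloadInterval` option that accepts a `TimeSpan`.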

Another way that this can improve developer agility is that you can easily change the environment you target with a simple configuration change. For example, let’s say some pesky issue is popping up in your UAT environment that isn’t showing up anywhere else, and you’re tearing out your hair looking through logs trying to understand it. You’re at the point where you’d give your left foot to just run the app locally targeting that environment so you can attach a debugger and step through the problematic code. Without using managed identity and the Key Vault configuration provider, you would have to copy the secrets for that environment to your local computer. This is gross enough that you should probably seek any other option before resorting to it. However, if you were using managed identity and Key Vault, you could simply point your local app at the target environment’s Key Vault by changing the configured vault name and creating a temporary access policy for yourself. As a good practice, you should still revoke your access afterward, but at least you have nothing sensitive on your local device to clean up.

Finally, let’s talk about the benefits of using this approach from the perspective of building a deployment pipeline. Specifically, the benefit is that you have one fewer thing to implement and worry about. Since secrets are centralized in the key vault and pulled during app runtime, you don’t need to have your process pull in the secrets from wherever you store them, then pave them into an appsettings.json file, or assign them as PaaS-level environment variables. This saves you time not having to code this behavior, and it also saves you time when something breaks because there’s one fewer place where something could have gone wrong. Having your app go directly to key vault streamlines the configuration and creates fewer opportunities to break things. It also has the advantage that you don’t need to run a full app deployment just to update a secret.

Counter Arguments

This may sound good so far, but I suspect you may already have a few concerns brewing. Maybe you’re thinking some of the following: Do I have to start keeping all my configuration values in Key Vault? Doesn’t this represent additional configuration management overhead? Won’t I have conflicts with other team members if I need to change secret values to personalize my local environment? Doesn’t this create a hard dependency on an internet connection, meaning I won’t be able to run a local environment fully offline? All of these are valid questions, but I think you’ll see that they all have acceptable and satisfying answers.

So, does this mean that Key Vault needs to become the singular place for all app configurations, both secret and non-secret? If we only put secrets there, then don’t many of the above arguments about the benefits of centralization become moot, since we still need to do distributed config management for non-secret values? Azure’s answer to this question is Azure App Configuration, a centralized app configuration service that gives you a nice level of control over non-secret configuration settings for your app, including cool features like config value versioning and feature flags. I won’t go too deep into the details of this service here, but it’s worth noting that it also supports managed identity and can integrate with your app in the same way as Key Vault. However, I’ll also offer the suggestion that you can incorporate App Configuration on an as-needed basis. If you are dealing with a small app with fewer than 10 environment-specific settings, then you might enjoy the convenience of just consolidating all your secret and non-secret values into Key Vault. The choice comes down to preference, but keep in mind that if your settings are changing semi-often or you expect your app to continue adding new config settings, you may get tired of editing every config using Key Vault’s interface. It’s tailored for security, so it should generally be locked down as much as possible. It also doesn’t have all the features that App Configuration does.

Regarding configuration management overhead, the fact is that, yes, this does require creating/managing a Key Vault service and managing access policies for dev team members. This may sound like work you didn’t previously have, but I assure you this kind of setup and ownership is lightweight work that’s well worth the cost. Consider all the other complexities you get to give up in exchange: with centralized config management, you can now do code-only app deployments that can ignore configuration management entirely. That makes it faster and easier to create your deployment process, especially when you have multiple environments to target, and will give you high marks for security. As we also mentioned, centralizing these config settings makes it simpler to onboard new team members and possibly to iterate on shared infrastructure without breaking things for the team.

You may also be concerned that sharing your configuration source will result in a lot of stepping on toes with your team during development. But consider this: nothing is stopping you from using the same kind of local environment configuration approaches that developers already use in addition to Key Vault. ASP.NET Core’s configuration system is based on the idea of layering configuration providers in a stack, where the last-in wins. If you want to allow your developers to override specific values for development purposes, for example, to point at a personal database instance (maybe even a local database, like SQL Server or the Cosmos DB Emulator), you can still pass those as environment variables, in appsettings.Development.json, or as ‘dotnet user-secrets’. This doesn’t necessarily defeat the purpose of centralizing secret or config management. The benefits of centralization apply most to shared resources. If you want to use a personal resource, there’s no harm in personalizing your config locally. An alternate approach to personalization is to provide your own complete set of resources that make up an environment in Azure. Ideally, you already have a script or template to create a new environment easily, and if you don’t, I strongly recommend it, in which case you’ll get your own Key Vault as well, and you can simply point your local app at it.
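That layering can be sketched as a registration order inside the host builder; later providers override earlier ones, so a developer’s local overrides beat the shared values from Key Vault (the vault URI is a placeholder):

```csharp
// Inside Host.CreateDefaultBuilder(args) in Program.cs:
.ConfigureAppConfiguration((context, config) =>
{
    config.AddJsonFile("appsettings.json", optional: false);
    config.AddAzureKeyVault(
        new Uri("https://my-app-vault.vault.azure.net/"),
        new DefaultAzureCredential());
    // Anything below this line wins over Key Vault values.
    config.AddJsonFile(
        $"appsettings.{context.HostingEnvironment.EnvironmentName}.json",
        optional: true);
    config.AddUserSecrets<Program>(optional: true);
    config.AddEnvironmentVariables();
})
```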

Lastly, I’d like to address the question of whether this makes it impossible to do fully offline local development. There are a couple of considerations here:

  1. How to target local services instead of live-hosted ones
  2. How to overcome the fact that the Key Vault configuration provider relies on an internet connection

The first is handled the same way you would handle configuration personalization: by overriding any config settings in something like appsettings.Development.json or ‘dotnet user-secrets’ to target your local database or Azure service emulator. The second is relatively simple: just put the line of code that configures Key Vault as a config provider within an ‘if’ condition that checks whether you are running in a development environment (see a sample approach here). This assumes that Key Vault is truly your only remaining dependency on an internet connection. If it seems strange to hear me recommend disabling Key Vault after advocating for it, consider again that the benefits of centralized configuration apply most to shared resources, so if you are designing to support an entirely local development environment, then using Key Vault becomes unnecessary when running in that mode.
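One possible shape for that condition, again with a placeholder vault URI:

```csharp
// Inside ConfigureAppConfiguration: skip the Key Vault provider for
// offline development runs; local overrides (appsettings.Development.json,
// user secrets, environment variables) supply the config instead.
if (!context.HostingEnvironment.IsDevelopment())
{
    config.AddAzureKeyVault(
        new Uri("https://my-app-vault.vault.azure.net/"),
        new DefaultAzureCredential());
}
```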

Using centralized configuration services like Key Vault via managed identity requires a different mindset for developers, but it comes with clear advantages, especially when it comes to limiting the exposure of your application secrets. This kind of solution is an absolute win from a security perspective, and it has the potential to considerably improve your development team’s experience as well. Due to ASP.NET Core’s pluggable configuration system, it’s easy to apply to existing projects, especially if you’re already storing secrets in Key Vault, so consider looking at how you could incorporate it into your existing projects today, and don’t miss out on the chance to try it in your next greenfield project. Your security advocates and fellow developers just might thank you.

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for a quick way to troubleshoot telemetry configuration. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series. How do you go about testing your telemetry configuration? Typically, developers adjust the application and then deploy to Azure, ideally a development environment in Azure. However, making a change to the application, building, publishing to Azure, testing the given endpoints, and waiting for telemetry to appear in Application Insights can require upwards of 30 minutes per change, assuming you know what you are doing and make no mistakes. Is there a way to work locally?

Troubleshooting Locally

Let me start by discussing the end goal. We want to simulate our production environment while running locally on our dev machine. Azure typically provides local emulators for services like Storage and Cosmos DB, but it does not provide an Application Insights emulator. So while we want to simulate production when running locally, we will need to send data to an actual Azure Application Insights account. To simulate running in production, we should publish our .NET application in Release mode and run it outside of Visual Studio. One reason for starting the application outside Visual Studio is that our production environment will not have Visual Studio installed. Another is that Visual Studio includes a Diagnostics panel that captures the Application Insights telemetry and prevents it from being sent to the Azure Application Insights account. I’d like to emphasize that the Diagnostics panel built into Visual Studio is not an emulator and shouldn’t be used for that purpose.

First, we must publish the application in Release mode. You can do that using the dotnet command-line as shown below.

Publish in Release Mode
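The publish step shown in the screenshot amounts to a single command, run from the project directory:

```shell
# Publish the app in Release mode.
dotnet publish -c Release
```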

This will publish to a directory similar to the one below.

Publish to Directory

Once in the directory where the build artifacts are, we should find both appsettings.json and the .dll for our main application, CustomerApi.dll in my case. From the command line, we can then run Kestrel directly using the following command.

Run Kestrel Directly
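The command from the screenshot, run from the publish output folder (the .dll name will match your own project):

```shell
# Run the published app directly on Kestrel.
dotnet CustomerApi.dll
```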

If using the defaults, your application will now be running and available in a browser at either http://localhost:5000/ or https://localhost:5001/. We are likely still missing one step: configuring the telemetry key for Application Insights. In the bin\Release\netcoreapp3.0\ folder, locate appsettings.json, open the file, and put the telemetry key in it.
configuring the telemetry key
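Assuming the common configuration shape the Application Insights SDK reads, the relevant section of appsettings.json looks like this (the key value is a placeholder for your own instrumentation key):

```json
{
  "ApplicationInsights": {
    "InstrumentationKey": "<your-instrumentation-key-guid>"
  }
}
```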

If you go back to the command-line you can press Ctrl+C to exit the running web application and then re-run the dotnet CustomerApi.dll command to restart the application. We now have an application running locally that is sending telemetry to Application Insights in Azure.

View Live Telemetry from Local Machine

In the Azure portal, open the Application Insights resource and then locate the “Live Metrics Stream” blade.
Live Metrics Stream

The Live Metrics panel should open and connect as long as the application is running locally using “dotnet CustomerApi.dll”. Once open, scroll to the bottom of the pane.

Bottom of Pane

At the bottom, you will see a list of connected servers. In my example below, you see two servers. The one highlighted in red is my local developer machine. The other server is the Azure Application Service that I have running in my development environment in Azure.

Developer and Azure Application Server

To quickly recap: we have our application running locally outside Visual Studio from the command line, and in Azure Application Insights we can see that our local machine is connected to Live Metrics. In order to actually see telemetry flow into this panel, you will likely want to make one other change. In the upper-right, click on the filter icon to adjust the live metrics filters.

Telemetry Sample

You will then be prompted with the following dialog. If you trust the servers, you can safely proceed.

Authorize Connected Servers

You will then see a dialog with the current filters. Notice that the default configuration only shows failed requests and dependency calls. Since we are troubleshooting, it’s likely you will want to see all requests. Feel free to click the “x” to remove both filters, then click the “Save” button.

Current Filter Dialog

Once you have completed this step, you can go back to your web browser on your local machine, either http://localhost:5000/ or https://localhost:5001/, and make a request to your API. I tried a URL I know returns a 404 response. You can see the live telemetry that showed up for me:

Sample Telemetry

Then, click on that row for more details about the given telemetry item. This telemetry is also being logged to Application Insights, and you will be able to see it on all the usual dashboards and search for it using a Log Analytics query; just be aware there is still the typical 2 to 5-minute delay between when telemetry is sent and when it appears in queries and dashboards.

Summary

You have now learned how to troubleshoot Azure Application Insights quickly and without needing to deploy your application to Azure. To summarize, you run “dotnet publish” in “Release” mode locally and then run the application from the command-line outside Visual Studio. This is done for a few reasons:

  • When publishing in release mode, you do not need to worry about appsettings.Development.json
  • By running outside Visual Studio, you do not need to worry about launchSettings.json setting any special environment variables that don’t match your production environment (e.g. ASPNETCORE_ENVIRONMENT)
  • When running outside Visual Studio, you do not need to worry about the diagnostics panel deciding to capture your telemetry and preventing it from being sent to Azure.

Once your application is running locally and has the Application Insights telemetry key configured properly, you will find the telemetry in the “Live Metrics” view, letting you avoid the typical 2 to 5-minute delay before it appears elsewhere in Application Insights.

If you are concerned that this setup will not allow for the use of the Visual Studio Editor, think again! Once you have the application running outside Visual Studio, simply use the “Attach To Process…” menu item in Visual Studio. This gives you the best of both worlds:

Visual Studio Debugger

Hopefully, this post helped you understand how to more quickly troubleshoot your Application Insights telemetry configuration. That will come in handy in the next post in this series, where we talk about customizing telemetry to keep PII (Personally Identifiable Information) out of your Application Insights logs.

In this video blog, AIS’ CTO Vishwas Lele walks us through provisioning a Docker Swarm cluster using the Azure Container Service (ACS). Docker Swarm is a native clustering technology for Docker containers, which allows a pool of underlying Docker hosts to appear as a single virtual Docker host. Containers can then be provisioned through the standard Docker API. The Azure Container Service takes care of provisioning the underlying Docker host virtual machines, installing the required software (Docker plus Docker host), and configuring the cluster. Once the cluster is provisioned, Vishwas demonstrates connecting to the master node to spin up containers on the cluster, which hosts an ASP.NET application.

You might also like: 

The Seamless Hybrid Cloud

Enterprise #DevOps: A Service Catalog Driven Approach

Windows Azure Planning: Moving an Application into Production

Are you working on a REST API and using the new Web API to implement it? Have you written an ApiController subclass or two? Let’s say you’ve created a new subclass of ApiController called OrderController. Web API provides your OrderController with out-of-the-box support for the following URLs:

HTTP Verb   URL            Description
GET         /api/order     Returns all orders
GET         /api/order/3   Returns details of order #3
POST        /api/order     Creates a new order
PUT         /api/order/3   Updates order #3
DELETE      /api/order/3   Deletes order #3

The above is considered verb-based routing. The URLs above contain only the controller name and an optional id, so Web API uses the HTTP verb of the request to determine which action method to execute in your ApiController subclass.
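Under these conventions, a controller like the hypothetical sketch below is all Web API needs to serve the five routes in the table; the Order class and in-memory dictionary are placeholders standing in for a real model and data store:

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
}

// Verb-based routing: Web API matches the request's HTTP verb to the
// action method name, so these five actions cover the table above.
public class OrderController : ApiController
{
    private static readonly Dictionary<int, Order> Orders = new Dictionary<int, Order>();

    public IEnumerable<Order> Get() => Orders.Values;          // GET /api/order
    public Order Get(int id) => Orders[id];                    // GET /api/order/3
    public void Post(Order order) => Orders[order.Id] = order; // POST /api/order
    public void Put(int id, Order order) => Orders[id] = order;// PUT /api/order/3
    public void Delete(int id) => Orders.Remove(id);           // DELETE /api/order/3
}
```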

Your Goal

Now what if you want to add some custom actions to your ApiController subclass? For example:

HTTP Verb   URL                     Description
GET         /api/order/3/vendors    Returns all vendors involved with providing items to complete order #3
PUT         /api/order/3/expedite   Expedites order #3; can only be executed by managers in the customer service dept.
PUT         /api/order/3/reject     Rejects order #3; can only be executed by managers in the customer service dept.

It turns out that adding those custom actions is hard, very hard. But keep reading. There is an easy way. Read More…

ASP.NET 4.5 has introduced some cool new features and enhancements for web developers using Visual Studio 2012. One of the new features introduced deals with the framework’s Web Forms technology. In previous versions of ASP.NET, if you wanted to display data-bound values in a control, you needed to use a data-binding expression with an Eval statement, e.g. <%# Eval("Data") %>.

Using an Eval Statement to display data-bound items
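As a sketch, the traditional late-bound markup looks something like this; the Repeater and the Name member are hypothetical:

```aspx
<asp:Repeater ID="ProductRepeater" runat="server">
  <ItemTemplate>
    <%-- Eval is resolved by reflection at run time; a typo in "Name"
         only surfaces when the page executes. --%>
    <li><%# Eval("Name") %></li>
  </ItemTemplate>
</asp:Repeater>
```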

This approach works, but it introduces a few problems. In my experience, the Eval statement approach is prone to developer error. If you are like me, then you have undoubtedly misspelled a member name or tried to bind to a nonexistent member. These mistakes, while trivial, tend to make themselves known only at run-time, thus making them more difficult to track. Due to the Eval statement being dynamic in nature, it is impossible to enforce compile-time error checking.

With ASP.NET 4.5, we can now take advantage of Strongly Typed Data Controls. These controls allow us to specify the type of data the control is to be bound to, providing us with IntelliSense (which solves another problem for me: remembering which members belong to the DataSource) and compile time error checking. Adding a strongly typed data control requires minimal effort!
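The same template, rewritten with the new ItemType attribute, becomes strongly typed; the Product class here is again a hypothetical model:

```aspx
<asp:Repeater ID="ProductRepeater" runat="server" ItemType="MyApp.Models.Product">
  <ItemTemplate>
    <%-- Item is typed as Product, so a misspelled member fails at
         compile time and IntelliSense lists the available members. --%>
    <li><%# Item.Name %></li>
  </ItemTemplate>
</asp:Repeater>
```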

Read More…

When it comes to the Web Forms vs. MVC debate, the question isn’t “Which is Better?”…but “How to Switch?”

I have been studying and trying to transition to Model-View-Controller (MVC) for about six months and I have come across some fundamental thinking that must be changed and new things that must be learned in order to make the switch. I worked with Web Forms extensively in my previous position, and since joining AIS have made a commitment to myself to embrace the switch. I am still learning, but this is what I have found so far (your results may vary).

Read More…

Happy Friday! Here are some of the latest posts by AIS employees from around the web and their personal blogs:

Using Git-Tf: Suppress the TFS Warning When Loading a Solution: Using Git-TF? Getting annoying TFS warnings in Visual Studio? Senior Software Engineer Kip Streithorst can help. (It’s Null?)

Fight Clutter and Confusion in the Workplace. The Importance of Process Streamlining and How to Do It: Developer Terra Gilbert has discovered a natural knack for process streamlining and improving documentation. Here are her tips. (codeterra)

Recent Items in Windows 8: Oskar Austegard plays around with a new Windows 8 install and solves the case of the missing (or at least hard-to-find) Recent Items folder. (mo.notono.us)

KnockoutJS & ASP.NET Mvc Partial View Loading: How to dynamically load “partial views” bound to KnockoutJS view models. (Null != Steve)

Scrum Fundamentals Recording Available: In case you missed Ryan Cromwell‘s Scrum Fundamentals webinar, the presentation is available on his blog. (And be sure to check our Events page — we add new events every week!) (cromwellhaus)