Deploying containerized apps to production in Kubernetes with rigorous security practices requires minimizing what is installed on a container image and what can be done in a running container. But doing so can make it challenging to diagnose elusive issues, since the containers won’t have the tools installed to troubleshoot the problem. This article demonstrates a technique for enabling diagnostics for a containerized .NET application while maintaining a high-security posture during normal operations.

Deploying Securely to Kubernetes

Standard practices for securing containers deployed to Kubernetes include:

  • Building container images with a minimal footprint, adding only what is needed to run the application, to reduce the attack surface.
  • Running the container as a non-root user.
  • Running with a read-only root filesystem so that the operating system and critical files cannot be tampered with.

However, doing so limits options for debugging and diagnostics when something goes wrong.

.NET Diagnostic Tools

.NET includes a suite of tools for diagnostics. But adding them to every container you run in production increases the attack surface.
Another catch: .NET applications run with diagnostics enabled by default, but if you have tried running a .NET container with the Docker --read-only flag, your app will crash with this error:

Failed to create CoreCLR, HRESULT: 0x80004005

This prevents running the containers in Kubernetes with a read-only root filesystem.

However, it turns out that we only need a writable /tmp directory for .NET diagnostics to work. In Kubernetes deployments, we can provide one with a writable emptyDir volume mounted on /tmp. (This point deserves better documentation.)
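For example, the deployment spec could include an emptyDir volume mounted at /tmp (a minimal sketch; the container and image names are illustrative):

```yaml
# Pod spec fragment: keep the root filesystem read-only but
# give .NET a writable /tmp for its diagnostics IPC socket.
spec:
  containers:
    - name: app                  # illustrative name
      image: myapp:latest        # illustrative image
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
```

The emptyDir volume lives only as long as the pod, which is exactly what we want for scratch diagnostics data.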

Solution: On-Demand Diagnostics Sidecar

We want to run our application containers in production with a minimal attack surface and the smallest footprint possible while still being able to debug when necessary. Our approach is to use a sidecar container and enable .NET diagnostics on-demand, only when needed.

Helm is a common way to manage Kubernetes deployments, and with Helm we can templatize our Kubernetes YAML to pass a value at deployment time to switch diagnostics on or off. When diagnostics are enabled, the Helm chart will:

  • Enable .NET diagnostics by setting the DOTNET_EnableDiagnostics environment variable to 1.
  • Add a sidecar container with the diagnostics tools installed to our application pods.
  • Set the shareProcessNamespace property on the pod to true so that the sidecar container can initiate dumps on the application container.
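The steps above can be sketched in a Helm deployment template; this assumes a hypothetical chart value diagnostics.enabled and an illustrative diagnostics tools image:

```yaml
# deployment.yaml template fragment (names are hypothetical)
spec:
  template:
    spec:
      {{- if .Values.diagnostics.enabled }}
      shareProcessNamespace: true   # lets the sidecar see the app's processes
      {{- end }}
      containers:
        - name: app
          image: myapp:latest
          env:
            - name: DOTNET_EnableDiagnostics
              value: {{ .Values.diagnostics.enabled | ternary "1" "0" | quote }}
        {{- if .Values.diagnostics.enabled }}
        - name: diag-sidecar
          image: dotnet-diag-tools:latest   # image with dotnet-dump, dotnet-counters, etc.
        {{- end }}
```

Deploying with --set diagnostics.enabled=true flips the pod into diagnostics mode; redeploying with it false returns to the minimal footprint.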

With this approach, we can enable diagnostics on-demand when creating a .NET process dump for in-depth debugging while keeping our normal runtime footprint minimal.

Full Sample

See the details and everything you need to try it out in the open-source k8s-dotnet-diag GitHub repo.

In a previous article, we explored using the CSI Secrets Store Driver and Azure Provider to mount an HTTPS certificate stored in Azure Key Vault on pods deployed to Azure Kubernetes Service (AKS). This was done using Linux containers and an ASP.NET Core application. But what about .NET Framework applications on Windows containers? Let’s see if we can take the same approach.

About Windows Containers

Containers initially grew out of the Linux world based on its natural support for container isolation using cgroups and namespaces. Therefore, Linux containers are usually the preferable choice when possible, such as for apps written for the cross-platform .NET Core 3.1 or .NET 6.0. However, many organizations have a significant investment in .NET Framework applications, which must run on a Windows OS. In this case, you can use Windows containers to deploy these applications to AKS, leveraging an organization’s investments in both existing applications and Kubernetes.

A few important things to note about Windows containers in AKS:

  1. The kubenet networking option is not supported on Windows nodes; therefore, you must use Azure CNI for the cluster. This requires additional planning and a larger range of IP addresses.
  2. The first nodepool in AKS runs system services and must use Linux nodes; therefore, to create a cluster for Windows containers, you first create the cluster and then add a Windows nodepool.
  3. Your container image Windows OS version must be compatible with the node OS version. As of this writing, AKS nodes are created with Windows Server 2019, so the tag to use for the .NET Framework parent images is 4.8-windowsservercore-ltsc2019.
    See the AKS Windows FAQ for more information.

Using the CSI Secrets Store Driver for Windows Containers

Fortunately, AKS supports Container Storage Interface (CSI) drivers on Windows containers. Windows containers are also supported by the same CSI Secrets Store Driver and Azure Provider that we used for Linux containers and ASP.NET Core.
However, Windows support is not enabled by default if you install using the Helm charts; you need to set the following configuration override to true:
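For example, the override can be supplied in a Helm values file (a sketch; check the chart version you are installing for the exact key):

```yaml
# values.override.yaml for the Secrets Store CSI Driver chart
windows:
  enabled: true
```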

  • windows.enabled

Once we have the driver and provider installed on our cluster, we can mount certificates stored in Azure Key Vault as files on our Windows container pods in AKS, just as we did for Linux containers.

The diagram below represents the flow from Key Vault to a pod and a volume mount on the container:

[Diagram: the flow from Key Vault to a pod and a volume mount on the container]

Configuring ASP.NET With HTTPS in Windows Containers

ASP.NET Core applications on Linux containers use the Kestrel web server, and it is easy to configure Kestrel with the HTTPS certificate to use either through configuration or code. But ASP.NET applications running on Windows will use IIS as the web server. How does this work on Windows containers, and how can we configure IIS in the container to use our mounted HTTPS certificate?

Looking at the Dockerfile used to create the .NET Framework ASP.NET image gives us a clue with this line:

ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]

Its entry point uses the IIS Service Monitor app to run the IIS World Wide Web Publishing Service (w3svc).

So in our application’s Dockerfile, we could set a new entry point that calls a script that:

  1. Installs the mounted certificate file into the Windows Certificate Store.
  2. Configures IIS to use HTTPS with the imported certificate.
  3. Starts the ServiceMonitor.exe process.

Here is a PowerShell example that expects the HTTPS_CERTIFICATE_PATH environment variable to be set with the certificate path:

$certFilePath = $env:HTTPS_CERTIFICATE_PATH

Write-Host "Importing HTTPS certificate $certFilePath"
$cert = Import-PfxCertificate -FilePath $certFilePath -CertStoreLocation Cert:\LocalMachine\My

Write-Host "Creating HTTPS Binding"
New-WebBinding -Name "Default Web Site" -IP "*" -Port 443 -Protocol https

Write-Host "Binding Certificate to HTTPS Binding"
Set-Location IIS:\SslBindings
$cert | New-Item 0.0.0.0!443

Write-Host "Starting Service Monitor"
C:\ServiceMonitor.exe w3svc

Then in our application’s Dockerfile, we copy in our startup script and set the ENTRYPOINT to call it, for example:

COPY ./Bootstrap-IIS.ps1 ./
ENTRYPOINT ["powershell.exe", "./Bootstrap-IIS.ps1"]

Finally, we can set the HTTPS_CERTIFICATE_PATH environment variable in our Kubernetes YAML to match the mount point and file name we configure the driver and volume mount to use.
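For example, assuming the mount path and file name used in the script above, the container spec could include (a sketch; the container name is illustrative):

```yaml
# container spec fragment
containers:
  - name: aspnet-app               # illustrative name
    env:
      - name: HTTPS_CERTIFICATE_PATH
        value: "/certs/https.pfx.base64"   # volume mountPath + objectAlias
```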

Sample Repo

For a complete example with setup, configuration, and deployment instructions, see the aks-csi-keyvault-certs-win repo in GitHub.

In a previous article, I wrote about the Key Vault FlexVolume driver for Kubernetes. I demonstrated how to use it to mount an HTTPS certificate from Azure Key Vault onto Kubernetes pods. Since then, the FlexVolume driver has been deprecated in favor of the Container Storage Interface (CSI) secrets store driver and Azure Key Vault provider.

The CSI Standard

The Container Storage Interface (CSI) is the latest evolution in storage plugins for Kubernetes. It is defined by a standard designed to overcome the shortcomings of the FlexVolume plugin. It is an “out of tree” plugin, meaning that it is decoupled from Kubernetes so that CSI drivers can be developed and versioned separately from Kubernetes.

The Secrets Store CSI Driver

This driver’s design is a “secrets driver + provider” model where the secrets store CSI driver provides the implementation for mounting a volume and delivering secrets to pods. Providers implement access to a particular secrets store. Currently, supported providers include:

  • Azure Key Vault
  • HashiCorp Vault
  • Google Secret Manager

Multiple providers can run in the same cluster simultaneously.

Besides mounting secrets to a pod volume, this driver also optionally allows you to sync secrets from the secret store into Kubernetes secrets. This is useful when, instead of terminating TLS at the pod level, you are using an ingress controller such as NGINX that requires the HTTPS certificate to be a Kubernetes secret.
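For example, the SecretProviderClass can include a secretObjects section that syncs a mounted certificate into a Kubernetes TLS secret for an ingress controller to consume (a sketch with illustrative names; the certificate must also be declared in the class’s objects list):

```yaml
# SecretProviderClass fragment (illustrative names)
spec:
  provider: azure
  secretObjects:
    - secretName: ingress-tls          # Kubernetes secret created by the driver
      type: kubernetes.io/tls
      data:
        - objectName: my-https-cert    # must match an object mounted by this class
          key: tls.key
        - objectName: my-https-cert
          key: tls.crt
```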

The SecretProviderClass Resource

With the FlexVolume driver for Key Vault, all the Key Vault and secret settings were declared in the YAML defining the volume mount in a deployment.

The Secret Store CSI Driver uses a custom Kubernetes resource called a SecretProviderClass to define the secret store and secret mount settings. Then the volume mount definition refers to the SecretProviderClass name. This results in a much cleaner deployment YAML and a decoupling of the secrets provider configuration from a particular volume mount.

Installing with Helm

Installing the Secrets Store CSI Driver and Azure provider is straightforward with the Helm package manager and the provided Helm charts. This installs the driver as a Kubernetes daemonset available on all nodes, so that any pod in the cluster can utilize it.

Mounting a Certificate for HTTPS

In addition to secrets such as passwords and API keys, Azure Key Vault can securely store and provide private key certificates such as those used for HTTPS. As we demonstrated with the FlexVolume driver for Key Vault, we can mount a certificate to our pods and use it to bootstrap Kestrel in ASP.NET Core for HTTPS.
Let’s look at how we would do the same thing with the CSI Secret Store Driver and Azure Provider. You can see the full working example in the aks-csi-keyvault-certs GitHub repo and see more detailed instructions in the README.

First, we create our SecretProviderClass resource definition:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure
  parameters:
    tenantId: "[*** YOUR KEY VAULT TENANT ID ***]"
    keyvaultName: "[*** YOUR KEY VAULT NAME ***]"
    objects: |
      array:
        - |
          objectName: aks-https
          objectAlias: https.pfx.base64
          objectType: secret        # object types: secret, key or cert
          objectFormat: pfx         # for .NET Core 3.1 we want the PFX format
          objectVersion: ""         # [OPTIONAL] object versions, default to latest if empty

We name our class azure-kvname, which we will use in our volume definition. In the objects property, we can define 1-N secrets to be mounted as files on the volume. In this case, our secret has these properties:

  • objectName: The name of the certificate in Key Vault.
  • objectAlias: This will be used as the file name on the volume.
  • objectType: We use “secret”, which will get us the private key certificate.
  • objectFormat: Unfortunately, even if we have stored the certificate in Key Vault as a PFX, the Azure CSI provider will automatically convert it to PEM format. .NET Core 3.1 does not support PEM out of the box, so setting this to “pfx” ensures we get a PFX.

In our Kubernetes Deployment YAML, we then define our volume like so:

      volumes:
        - name: aks-keyvault-aspnetcore-httpscert
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-kvname"
            nodePublishSecretRef:
              name: kvcreds

In this demo, we use an Azure Active Directory service principal to authenticate to Key Vault, whose credentials are stored as a Kubernetes secret. The nodePublishSecretRef option provides the name of the Kubernetes secret containing these credentials.

Then in our deployment YAML, we define the volume mount for our pods:

        volumeMounts:
          - name: aks-keyvault-aspnetcore-httpscert
            mountPath: /certs
            readOnly: true

Given our volume mountPath and the objectAlias in our SecretProviderClass, the certificate will be available in our pods using the path /certs/https.pfx.base64.

Keeping Secrets

The Secrets Store CSI Driver and Azure Key Vault provider for Kubernetes are a great way to deliver secrets to your containerized applications. If you are currently using the FlexVolume driver for Azure Key Vault, you should strongly consider updating to the CSI driver to take advantage of the latest innovations and features it provides.

Kubernetes has become the go-to orchestrator for running containers. At AIS, we are working with clients using Azure Kubernetes Service (AKS), applying our expertise in DevOps. One concern is delivering HTTPS certificates to containerized web applications; let’s use Azure Key Vault to achieve this.

Azure Key Vault is a cloud service that provides hardware security modules (HSMs) for securely storing and delivering secrets, encryption keys, and certificates. The Key Vault FlexVolume driver for Kubernetes offers a means for mounting certificates as files in our containers; this allows us to automate and manage HTTPS certificates for applications without requiring them to access Azure Key Vault directly.

Let’s look at how we can apply this to Kubernetes services that front containers running ASP.NET Core 3.1 applications. We’ll start with a sample application from Microsoft’s .NET Docker repo. You can find the full solution and step-by-step instructions in this aks-keyvault-certs GitHub repo.

We’ll use PFX encoded certificates in our Azure Key Vault for this demo, as they are readily loadable in .NET Core 3.1 for use in Kestrel hosting.

There are a few important details to note:

  1. You can retrieve a certificate from Azure Key Vault using the certificate, key or secret object types. To get the full private key certificate, you need to use the “secret” object type.
  2. If you import a PFX encoded certificate into Azure Key Vault, getting its secret will return the full PFX file; however, since the API return value is a string and a PFX is a binary file format, the result must be base64 decoded.
  3. If you import a certificate using the text encoded PEM format, it is returned as-is and base64 decoding is unnecessary; however, .NET Core does not currently support loading private-key PEM certificates.
  4. Since we’ve already authenticated to Key Vault, the resulting PFX file mounted in our container no longer requires the PFX password to load.

Knowing this, if we import a PFX certificate into an Azure Key Vault, we can configure our Kubernetes Deployment YAML to create a volume with a file containing the certificate secret’s contents. For example:

      volumes:
        - name: aks-keyvault-aspnetcore-httpscert
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: kvcreds
            options:
              # update with your key vault name
              keyvaultname: "YOUR KEY VAULT NAME"
              # update with your AAD tenant
              tenantid: "YOUR KEY VAULT TENANT"
              # The name of the object in Key Vault
              keyvaultobjectnames: "aks-https"
              # Use "secret" to get the full cert
              keyvaultobjecttypes: secret
              # This becomes the file name on the mount.
              keyvaultobjectaliases: "https.pfx.base64"

Then, in our pod specification we can mount the volume and set an environment variable for its path:

        env:
          # Set an environment var to the cert path for
          # the application to use.
          - name: HTTPS_CERTIFICATE_PATH
            value: "/certs/https.pfx.base64"
        volumeMounts:
          # Mount the key vault volume to /certs
          - name: aks-keyvault-aspnetcore-httpscert
            mountPath: /certs
            readOnly: true

Finally, in our ASP.NET Core application, we configure Kestrel to use the mounted certificate file for HTTPS endpoints:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseKestrel(options =>
                {
                    // configure Kestrel with our HTTPS certificate
                    options.ConfigureHttpsDefaults(ConfigureHttps);
                })
                .UseStartup<Startup>();
        });

private static void ConfigureHttps(HttpsConnectionAdapterOptions options)
{
    try
    {
        // When we get the certificate from Key Vault as a secret,
        // it provides the entire PFX file but without the password.
        // Since PFX is a binary format and a secret is a string,
        // it is base64 encoded. So we read in the text file and convert
        // it to the bytes to initialize the X509Certificate2.
        var certPath = Environment.GetEnvironmentVariable("HTTPS_CERTIFICATE_PATH");
        if (!string.IsNullOrEmpty(certPath))
        {
            var certString = System.IO.File.ReadAllText(certPath);
            var certBytes = Convert.FromBase64String(certString);
            var httpsCert = new X509Certificate2(certBytes);

            Console.WriteLine($"HTTPS cert Subject:    {httpsCert.Subject}");
            Console.WriteLine($"HTTPS cert Thumbprint: {httpsCert.Thumbprint}");

            // set the Kestrel HTTPS certificate
            options.ServerCertificate = httpsCert;
        }
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"unable to load https cert: {ex}");
    }
}

See the full working solution in the aks-keyvault-certs repo.

With this approach, we can automate managing HTTPS certificates with Azure Key Vault and delivering them to Azure Kubernetes Service pods running ASP.NET Core applications.

Modern cloud computing offers enterprises unprecedented opportunities to manage their IT infrastructure and applications with agility, resiliency, and security, while at the same time realizing significant cost savings. The ability to rapidly scale up and down in the cloud opens countless doors of possibility to use compute and storage resources in innovative ways that were not previously feasible.

But getting to the cloud and managing both cloud and on-premises resources can be a daunting challenge. As a recent Gartner article explains, a Cloud Strategy is a must for organizations. That’s where we at AIS can help – we have years of experience and successes working with enterprises to develop a cloud strategy. We have the resources and expertise to then plan and execute, leveraging the latest technologies and best practices.


Innovations in devices, platforms and applications have advanced many user experiences – and user expectations. Voice activated digital assistants like Siri and Cortana have given users new ways to interact with services and information.

In light of this, interfaces like trusty web forms may seem a bit dated… perhaps it’s time to consider a more natural, conversational interaction with users.

[Image: A sample Pizza Bot interaction (courtesy of Microsoft, from this article).]


Office 365

In earlier posts we went over the Office 365 development platform and proposed an example application to demonstrate how we can leverage its resources and Azure Active Directory using the Graph API.

In the previous post we looked under the hood at securing our web application and API with Azure Active Directory, and using the Graph API to find users, check calendars and send email notifications.

In the final installment of this series, we’ll take a closer look at the Outlook Add-in for this application.

Office 365 Add-ins

As we’ve seen, the Graph API makes it easy to integrate Office 365 resources and functionality into your own applications. Add-ins allow you to pull external resources and services directly into Office applications like Outlook, Word, PowerPoint and Excel.

Office 365 Add-ins are implemented as independently hosted web applications that are surfaced within Office applications (both the web-based versions and native applications).

In my previous post, I proposed an example application that leverages the resources available to us in the Office 365 development platform and Azure Active Directory, as well as the in-application integration of Office 365 Add-ins.

Now we’ll take a deeper look at the Graph API and some of the implementation points.

Build Your Enterprise Graph

The Graph API empowers developers and enterprises to build new relationships and interactions between resources in Azure Active Directory, Office 365, and other applications and data assets.

As Microsoft’s enterprise cloud offerings continue to expand, so will the opportunities to weave these resources together in new and innovative ways. Microsoft’s acquisition of LinkedIn will help it expand its social network graph, so it will be interesting to see how it plays into the Graph API in the future.

Enterprises have a trove of business resources and data that are often under-utilized – users, calendars, contacts, emails, tasks, documents and other files. Often there are redundancies between what users do with Office applications and other enterprise applications, and a painful lack of integration.

In prior posts, I discussed the compelling new Office 365 development platform and introduced Matter Center to demonstrate how integrating web-based add-ins directly into Office applications like Outlook can lead to productivity gains and happy users.

In this post we’ll introduce a sample application to show a practical example of how we can use these technologies to bring enterprise applications together with these valuable resources.
