Kubernetes has become the go-to orchestrator for running containers. At AIS, we are working with clients using Azure Kubernetes Service (AKS), applying our expertise in DevOps. One concern is delivering HTTPS certificates to containerized web applications; let’s use Azure Key Vault to achieve this.

Azure Key Vault is a cloud service that provides hardware security modules (HSMs) for securely storing and delivering secrets, encryption keys, and certificates. The Key Vault FlexVolume driver for Kubernetes offers a means for mounting certificates as files in our containers; this allows us to automate and manage HTTPS certificates for applications without requiring them to access Azure Key Vault directly.

Let’s look at how we can apply this to Kubernetes services that front containers running ASP.NET Core 3.1 applications. We’ll start with a sample application from Microsoft’s .NET Docker repo. You can find the full solution and step-by-step instructions in this aks-keyvault-certs GitHub repo.

We’ll use PFX-encoded certificates in our Azure Key Vault for this demo, as they are readily loadable in .NET Core 3.1 for Kestrel hosting.

There are a few important details to note:

  1. You can retrieve a certificate from Azure Key Vault using the certificate, key or secret object types. To get the full private key certificate, you need to use the “secret” object type.
  2. If you import a PFX-encoded certificate into Azure Key Vault, getting its secret will return the full PFX file; however, since the API return value is a string and a PFX is a binary file format, the result must be base64 decoded (see the sketch after this list).
  3. If you import a certificate using the text encoded PEM format, it is returned as-is and base64 decoding is unnecessary; however, .NET Core does not currently support loading private-key PEM certificates.
  4. When Key Vault returns the certificate as a secret, the PFX it produces is not password protected, so the file mounted in our container can be loaded without supplying a password.
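
To make points 2 and 4 concrete, here is a minimal sketch of what you would see if you fetched the secret yourself with the Azure SDK for .NET (assuming the Azure.Security.KeyVault.Secrets and Azure.Identity packages, with a placeholder vault URI). It is for illustration only; in the actual setup the FlexVolume driver, not the application, talks to Key Vault.

using System;
using System.Security.Cryptography.X509Certificates;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultSecretCheck
{
    static void Main()
    {
        // Placeholder vault URI; in the walkthrough the app never calls Key Vault itself.
        var client = new SecretClient(
            new Uri("https://YOUR-KEY-VAULT-NAME.vault.azure.net/"),
            new DefaultAzureCredential());

        // Retrieving the certificate as a *secret* returns the whole PFX,
        // base64 encoded because the secret value is a string (point 2).
        KeyVaultSecret secret = client.GetSecret("aks-https").Value;
        byte[] pfxBytes = Convert.FromBase64String(secret.Value);

        // The exported PFX carries no password, so none is needed here (point 4).
        var cert = new X509Certificate2(pfxBytes);
        Console.WriteLine($"Subject: {cert.Subject}");
        Console.WriteLine($"Thumbprint: {cert.Thumbprint}");
    }
}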

Knowing this, if we import a PFX certificate into an Azure Key Vault, we can configure our Kubernetes Deployment YAML to create a volume containing a file with the certificate secret’s contents. For example:

volumes:
  - name: aks-keyvault-aspnetcore-httpscert
    flexVolume:
      driver: "azure/kv"
      secretRef:
        name: kvcreds
      options:
        # update with your key vault name
        keyvaultname: "YOUR KEY VAULT NAME"
        # update with your AAD tenant
        tenantid: "YOUR KEY VAULT TENANT"
        # The name of the object in Key Vault
        keyvaultobjectnames: "aks-https"
        # Use "secret" to get the full cert
        keyvaultobjecttypes: secret
        # This becomes the file name on the mount.
        keyvaultobjectaliases: "https.pfx.base64"

Then, in our pod specification we can mount the volume and set an environment variable for its path:

env:
  # Set an environment var to the cert path for
  # the application to use.
  - name: HTTPS_CERTIFICATE_PATH
    value: "/certs/https.pfx.base64"
volumeMounts:
  # Mount the key vault volume to /certs
  - name: aks-keyvault-aspnetcore-httpscert
    mountPath: /certs
    readOnly: true

Finally, in our ASP.NET Core application, we configure Kestrel to use the mounted certificate file for HTTPS endpoints:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseStartup<Startup>()
                .UseKestrel(options =>
                {
                    // configure Kestrel with our HTTPS certificate
                    options.ConfigureHttpsDefaults(ConfigureHttps);
                });
        });

private static void ConfigureHttps(HttpsConnectionAdapterOptions options)
{
    try
    {
        // When we get the certificate from Key Vault as a secret,
        // it provides the entire PFX file but without the password.
        // Since PFX is a binary format and a secret is a string,
        // it is base64 encoded. So we read in the text file and convert
        // it to the bytes to initialize the X509Certificate2.
        var certPath = Environment.GetEnvironmentVariable("HTTPS_CERTIFICATE_PATH");
        if (!string.IsNullOrEmpty(certPath))
        {
            var certString = System.IO.File.ReadAllText(certPath);
            var certBytes = Convert.FromBase64String(certString);
            var httpsCert = new X509Certificate2(certBytes);

            Console.WriteLine($"HTTPS cert Subject:    {httpsCert.Subject}");
            Console.WriteLine($"HTTPS cert Thumbprint: {httpsCert.Thumbprint}");

            // set the Kestrel HTTPS certificate
            options.ServerCertificate = httpsCert;
        }
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"unable to load https cert: {ex}");
        throw;
    }
}

See the full working solution in the aks-keyvault-certs repo.

With this approach, we can automate managing HTTPS certificates with Azure Key Vault and delivering them to Azure Kubernetes Service pods running ASP.NET Core applications.

In this video blog, I’ll walk you through building a continuous integration and continuous delivery (CI/CD) pipeline using the latest tools from Microsoft, including Visual Studio Team Services (VSTS) and Azure. The pipeline is built to support a .NET Core application, and the walkthrough includes the following steps:

  1. Configuring Continuous Integration (CI) with VSTS Build services
  2. Adding unit testing and validation to the CI process
  3. Adding Continuous Deployment (CD) with VSTS Release Management & Azure PaaS
  4. Adding automated performance testing to the pipeline
  5. Promotion of the deployment to production once validated
  6. Sending feedback on completion of the process to Slack

At one point I was coding on a hobby project, using Visual Studio Online for project management and source control. Because of the technologies involved, a large number of temporary files were being generated that I didn’t want checked in. Visual Studio’s TFS integration is pretty good at automatically filtering these kinds of files out and placing them in the Excluded Changes list in the Pending Changes window, but in my case the sheer number made it a pain to scan the Excluded Changes list for valid changes that I actually wanted to commit.

In my case, I didn’t want those temporary files to show up at all – not even in the Excluded Changes list. In order to gain control over which files TFS should ignore completely, I added .tfignore files to my solution. These allow you to specify which files, extensions and directories to ignore (or un-ignore!) from source control. If you’re familiar with the concept of .gitignore files in Git, you should feel right at home.

Read More…

Despite the terms Dependency Inversion Principle (DIP), Inversion of Control (IoC), Dependency Injection (DI) and Service Locator existing for many years now, developers are often still confused about their meaning and use.  When discussing software decoupling, many developers view the terms as interchangeable or synonymous, which is hardly the case.  Aside from misunderstandings when discussing software decoupling with colleagues, this misinterpretation of terms can lead to confusion when tasked with designing and implementing a decoupled software solution.  Hence, it’s important to differentiate between what is principle, pattern and technique.
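
A small C# sketch can make the distinction concrete (the interface and class names here are mine, purely illustrative): constructor injection hands the dependency to a class from the outside, while a Service Locator has the class pull its own dependency from a central registry.

using System;
using System.Collections.Generic;

public interface IMessageSender
{
    void Send(string to, string message);
}

public class SmtpMessageSender : IMessageSender
{
    public void Send(string to, string message)
    {
        Console.WriteLine("SMTP -> " + to + ": " + message);
    }
}

// Dependency Injection (constructor injection): the dependency is handed in
// from the outside, typically by an IoC container at composition time.
public class OrderNotifier
{
    private readonly IMessageSender _sender;

    public OrderNotifier(IMessageSender sender)
    {
        _sender = sender;
    }

    public void NotifyShipped(string customerEmail)
    {
        _sender.Send(customerEmail, "Your order has shipped.");
    }
}

// Service Locator: the class reaches out to a central registry to resolve
// its own dependency, which hides the coupling from callers.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Services = new Dictionary<Type, object>();

    public static void Register<T>(T instance)
    {
        Services[typeof(T)] = instance;
    }

    public static T Resolve<T>()
    {
        return (T)Services[typeof(T)];
    }
}

public class LocatorOrderNotifier
{
    public void NotifyShipped(string customerEmail)
    {
        ServiceLocator.Resolve<IMessageSender>().Send(customerEmail, "Your order has shipped.");
    }
}

Both notifier classes honor the Dependency Inversion Principle by depending on the IMessageSender abstraction; what differs is how that dependency is obtained, which is where the principle-versus-pattern-versus-technique distinction comes into play.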
Read More…
Given the widespread use of the Android operating system on today’s mobile platforms, Android development has become an excellent choice for enhancing a developer’s skill set. Fortunately for the seasoned .NET developer, learning Android development is not a huge stretch. While there are several avenues for .NET developers looking to break into the world of Android application development, currently the most popular options are made possible by utilizing any of the following technologies:

  • Xamarin platform
  • PhoneGap framework
  • Native Android development via Java

The Xamarin platform provides the ability for .NET developers to harness their C# knowledge, create cross-platform (iOS, Android and Windows) applications, reuse existing code and perform development within Visual Studio. The greatest advantage of utilizing the Xamarin platform is a reduced time to market while supporting multiple platforms. However, due to the additional Xamarin runtime contained within the final application, the footprint tends to be larger — this could be an issue, especially for some Android devices.

The PhoneGap framework is another option for writing Android applications.  The PhoneGap framework is a client-side web application comprised of HTML5 pages using CSS and JavaScript. While it’s possible to utilize Visual Studio to code and test the application, ultimately the code will need to be packaged into a real Android application. This will require an IDE such as Eclipse or JetBrains’s IntelliJ IDEA.  The PhoneGap Build service may also be used to accomplish the application packaging. While the PhoneGap approach will provide multiple platform support, the application type should be given consideration because the PhoneGap framework relies on JavaScript, which may have performance limitations compared with native Java Android applications.

While Xamarin and PhoneGap certainly have their merits for creating Android applications, native Android development via Java provides an opportunity to take advantage of a device’s full feature set with fast execution, all wrapped in a smaller package for more rapid downloads. For a complete discussion of the various mobile platforms’ benefits/drawbacks, please read Eric Svendsen’s excellent article where he provides plenty of depth on the issue.  For now, the remainder of this post provides valuable insight for .NET developers looking to expand their language set by using native Java for Android development. Read More…

Welcome to the first article in a series on moving enterprise systems from a mainframe-based platform to something else. That “something else” could be any number of things, but our default assumption (unless I say otherwise) is going to be a transaction processing system based on a platform like Microsoft’s .NET Framework.  While I expect that there will eventually be some technical content, the initial focus is going to be on planning, methodology, defining solutions and project management.  Moving a large system off of a legacy mainframe platform is very different from a development project that starts with a blank slate. Chances are that you’re going to be shifting paradigms for quite a few people in both your technical and business organizations — and for yourself, as well. Read More…
Custom application development is one of AIS’ many strong suits, and we’re constantly expanding our repertoire. Recently we successfully delivered a custom-designed scorecard and dashboard, as well as a SQL Server Analysis Services (SSAS) data cube, which provided self-service Business Intelligence for one of the nation’s leading pain medication monitoring organizations. Ameritox’s senior management required a user-friendly solution that was consumable on a variety of mobile devices like Apple iPads and other tablets. Our experience in Microsoft BI, .NET development and mobile solutions enabled us to deliver a solution that allows the organization to effectively track their specimens on a daily basis.

Click here to find out exactly what we did to ensure success on this project!

Are you working on a REST API and using the new Web API to implement it? You’ve written an ApiController subclass or two? Let’s say you’ve created a new subclass of ApiController called OrderController. Web API provides your OrderController with out-of-the-box support for the following URLs:

HTTP Verb   URL             Description
GET         /api/order      Returns all orders
GET         /api/order/3    Returns details of order #3
POST        /api/order      Creates a new order
PUT         /api/order/3    Updates order #3
DELETE      /api/order/3    Deletes order #3

This is verb-based routing: the URLs contain only the controller name and an optional id, so Web API uses the HTTP verb of the request to determine which action method to execute in your ApiController subclass.
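
As a hedged sketch of what that looks like in code (the Order type and the method bodies are placeholders of mine, not taken from this post), an OrderController that relies purely on verb-based routing only needs methods whose names start with the HTTP verb:

using System.Collections.Generic;
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrderController : ApiController
{
    // GET /api/order
    public IEnumerable<Order> Get()
    {
        return new[] { new Order { Id = 3, Status = "Open" } };
    }

    // GET /api/order/3
    public Order Get(int id)
    {
        return new Order { Id = id, Status = "Open" };
    }

    // POST /api/order
    public void Post(Order order)
    {
        // create the order
    }

    // PUT /api/order/3
    public void Put(int id, Order order)
    {
        // update the order
    }

    // DELETE /api/order/3
    public void Delete(int id)
    {
        // delete the order
    }
}

With the default "api/{controller}/{id}" route in place, Web API selects the action by matching the request’s HTTP verb against these method names.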

Your Goal

Now what if you want to add some custom actions to your ApiController subclass? For example:

HTTP Verb   URL                      Description
GET         /api/order/3/vendors     Returns all vendors involved with providing items to complete order #3
PUT         /api/order/3/expedite    Expedites order #3, but can only be executed by managers in customer service dept.
PUT         /api/order/3/reject      Rejects order #3, but can only be executed by managers in customer service dept.

It turns out that adding those custom actions is hard, very hard. But keep reading. There is an easy way. Read More…

I have a Windows Presentation Foundation (WPF) app currently in testing and this week it came back with some weird issues. The app uses the Client Object Model to consume SharePoint 2010 lists and UX is a factor in the design, so we’ve decided to do some metrics recording to see how people interact with the app. We want to know what features people use and which efforts were wasted.

Strangely, we started noticing that the names of the metrics (which were also being written to a SharePoint list) were coming through incorrectly: names like “ShowAllItemsOpened” were instead “showAllMenuItem_Click.” What’s that you say? Clearly that’s the name of an event handler! Well, you’d be right! But it worked before! What changed in testing? First, some background information…

Read More…