Kubernetes Implementation Steps

To install and set up Kubernetes on Windows, install kubectl and Minikube. Minikube is a tool that enables us to run a single-node cluster in a VM on a machine.

choco install kubernetes-cli minikube

Windows PowerShell

Now, run the following command and verify that kubectl is installed on your machine:

kubectl.exe version --client

Verify Kubectl is Installed
Next, spin up a worker machine — or node — in Kubernetes. It can be a physical or virtual machine. To do this, use the following command to start minicube:

minikube start

Launch VM and set up Kubernetes Cluster
Since we are installing Kubernetes on Windows, use Hyper-V to launch a VM and set up a Kubernetes cluster on the VM.
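
If Minikube does not pick the Hyper-V driver automatically, you can request it explicitly. As a hedged sketch (recent Minikube releases use the --driver flag; older releases used --vm-driver):

minikube start --driver=hyperv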

This will complete the setup.

Running Kubernetes containers

To start containers, use the Kubernetes command below to create a deployment. Provide a name for the deployment and the container image to deploy.

kubectl.exe create deployment helloworld-nginx --image nginx

When a deployment is created, Kubernetes builds pods to host application instances.

Run get pods just after the previous command to catch the ContainerCreating status as the pods are deployed:

kubectl.exe get pods

Kubernetes Deployment

List the deployments first, then use the describe keyword with the deployment name, as shown below, to see more detailed information about a deployment:

kubectl.exe get deployment

kubectl.exe describe deployment helloworld-nginx

Scale Up Applications

In this Kubernetes implementation tutorial, we are running only one container, an Nginx server, but sometimes it is necessary to accommodate increased workload and traffic. In that case, scale up the number of application instances. This can be achieved using kubectl scale deployment with the --replicas parameter:

kubectl.exe scale deployment helloworld-nginx --replicas 4

Use a Replicas Parameter

Now, check the Kubernetes pods; there should be four containers running on the node:

Four Containers Running on Node

Deploy your app

Now the application is running in multiple containers, each with its own IP address. Next, expose them outside the cluster so that the application is accessible:

kubectl.exe expose deployment helloworld-nginx --port=80 --type=NodePort

Verify this via the kubectl get services command. This will return a service of type NodePort, which exposes port 80 on each node of the Kubernetes cluster. The service is an abstraction layer that groups multiple pods in the cluster under a shared IP address and load balances traffic across them.
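
In the same style as the other commands in this walkthrough, that check looks like:

kubectl.exe get services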

NodePort Service

To open this application in a web browser, create a proxy to connect the local port to the cluster port, which we exposed using the NodePort service in the previous step:

kubectl.exe port-forward svc/helloworld-nginx 80:80

Create a Proxy

Go to the web browser and open http://127.0.0.1/ or http://localhost/ to see the homepage of the Nginx web server deployed in Kubernetes containers.

Nginx Homepage

Test the Kubernetes Control Plane

Lastly, test that the Kubernetes control plane, or master server, can maintain the desired state of the pods that run on the node server. To check this, use the following command to forcibly delete one of the pods that run the application instance.

kubectl.exe delete pod helloworld-nginx-67bb76cf46-mks6j

This undesired state will be identified, and Kubernetes will immediately run a new instance of the application in a container:

Running Application on Container
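
To confirm the replacement, run get pods again; the deleted pod disappears and a new pod (with a different generated name) takes its place:

kubectl.exe get pods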


In a previous article, we explored using the CSI Secrets Store Driver and Azure Provider to mount an HTTPS certificate stored in Azure Key Vault on pods deployed to Azure Kubernetes Service (AKS). This was done using Linux containers and an ASP.NET Core application. But what about .NET Framework applications on Windows containers? Let’s see if we can take the same approach.

About Windows Containers

Containers initially grew out of the Linux world based on its natural support for container isolation using cgroups and namespaces. Therefore, Linux containers are usually the preferable choice when possible, such as for apps written on the cross-platform .NET Core 3.1 or .NET 6.0. However, many organizations have a significant investment in .NET Framework applications, which must run on a Windows OS. In this case, you may use Windows containers to deploy these applications to AKS, leveraging an organization’s investments in both existing applications and Kubernetes.

A few important things to note about Windows containers in AKS:

  1. The kubenet networking option is not supported on Windows nodes; therefore, you must use Azure CNI for the cluster. This requires additional planning and a larger range of IP addresses.
  2. The first nodepool in AKS runs system services and must use Linux nodes; therefore, the approach for Windows containers is to create the cluster first and then add a Windows nodepool (see the example after this list).
  3. Your container image's Windows OS version must be compatible with the node OS version. As of writing, AKS nodes are created with Windows Server 2019, so the tag to use for the .NET Framework parent images is 4.8-windowsservercore-ltsc2019.
    See the AKS Windows FAQ for more information.
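
A rough sketch of that sequence with the Azure CLI follows; the resource group, cluster name, node counts, and credentials are placeholders, and the Windows nodepool name must be six characters or fewer:

az aks create --resource-group myResourceGroup --name myAKSCluster --network-plugin azure --node-count 2 --windows-admin-username azureuser --windows-admin-password <password> --generate-ssh-keys
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name npwin --os-type Windows --node-count 1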

Using the CSI Secrets Store Driver for Windows Containers

Fortunately, AKS supports Container Storage Interface (CSI) drivers on Windows containers. Windows containers are also supported by the same CSI Secrets Store Driver and Azure Provider that we used for Linux containers and ASP.NET Core.
However, Windows support is not enabled by default if you install using the Helm charts; you need to set the following configuration overrides to true (see the example install command after this list):

  • windows.enabled
  • secrets-store-csi-driver.windows.enabled
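
For example, using the Azure provider's Helm chart (the repo URL and chart name are current as of writing and may change; the release name and namespace are arbitrary):

helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm repo update
helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --namespace kube-system --set windows.enabled=true --set secrets-store-csi-driver.windows.enabled=true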

Once we have the driver and provider installed on our cluster, we can mount certificates stored in Azure Key Vault as files on our Windows container pods in AKS, just as we did for Linux containers.

The diagram below represents the flow from Key Vault to a pod and a volume mount on the container:

Key Vault to a pod and a volume mount on the container
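
As a minimal sketch (not the exact manifest from the sample repo linked below), a SecretProviderClass for the Azure provider might look like the following; the vault, tenant, and certificate names are placeholders, and identity/access parameters are omitted:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: https-cert-spc                  # hypothetical name
spec:
  provider: azure
  parameters:
    keyvaultName: "my-keyvault"         # placeholder Key Vault name
    tenantId: "<tenant-id>"             # placeholder Azure AD tenant ID
    objects: |
      array:
        - |
          objectName: my-https-cert     # placeholder certificate name in Key Vault
          objectType: secret            # 'secret' retrieves the certificate together with its private key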

Configuring ASP.NET With HTTPS in Windows Containers

ASP.NET Core applications running on Linux containers use the Kestrel web server, and it is easy to configure Kestrel with the HTTPS certificate to use, either through configuration or code. But ASP.NET applications running on Windows use IIS as the web server. How does this work in Windows containers, and how can we configure IIS in the container to use our mounted HTTPS certificate?

Looking at the Dockerfile used to create the .NET Framework ASP.NET image gives us a clue with this line:

ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]

Its entry point uses the IIS Service Monitor app to run the IIS World Wide Web Publishing Service (w3svc).

So in our application’s Dockerfile we could set a new entry point that calls a script that:

  1. Installs the mounted certificate file into the Windows Certificate Store.
  2. Configures IIS to use HTTPS with the imported certificate.
  3. Starts the ServiceMonitor.exe process.

Here is a PowerShell example that expects the HTTPS_CERTIFICATE_PATH environment variable to be set with the certificate path:

# Load the IIS administration module so the IIS: drive and the Web* cmdlets are available
Import-Module WebAdministration

$certFilePath = $env:HTTPS_CERTIFICATE_PATH
Write-Host "Importing HTTPS certificate $certFilePath"
$cert = Import-PfxCertificate -FilePath $certFilePath -CertStoreLocation Cert:\LocalMachine\My

Write-Host "Creating HTTPS Binding"
New-WebBinding -Name "Default Web Site" -IP "*" -Port 443 -Protocol https

Write-Host "Binding Certificate to HTTPS Binding"
Set-Location IIS:\SslBindings
$cert | New-Item 0.0.0.0!443

Write-Host "Starting Service Monitor"
C:\ServiceMonitor.exe w3svc

Then, in our application's Dockerfile, we copy in our startup script and set the ENTRYPOINT to call it, for example:

COPY ./Bootstrap-IIS.ps1 ./
ENTRYPOINT ["powershell.exe", "./Bootstrap-IIS.ps1"]

Finally, we can set the HTTPS_CERTIFICATE_PATH environment variable in our Kubernetes YAML to match the mount point and file name we configure the driver and volume mount to use.
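
For illustration only (container, image, volume, and path names below are hypothetical, not taken from the sample repo), the relevant pieces of the pod spec could look like this; the mounted file is named after the objectName or objectAlias configured in the SecretProviderClass:

    spec:
      containers:
        - name: my-aspnet-app                                  # hypothetical container name
          image: myregistry.azurecr.io/my-aspnet-app:latest    # hypothetical image
          env:
            - name: HTTPS_CERTIFICATE_PATH
              value: "C:\\mnt\\secrets\\my-https-cert"         # mount path plus the file named after the Key Vault object
          volumeMounts:
            - name: https-cert
              mountPath: "C:\\mnt\\secrets"
              readOnly: true
      volumes:
        - name: https-cert
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "https-cert-spc"            # hypothetical SecretProviderClass name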

Sample Repo

For a complete example with setup, configuration, and deployment instructions, see the aks-csi-keyvault-certs-win repo in GitHub.

For those new to Kubernetes, it's a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. This blog will cover a high-level overview of Kubernetes architecture and components. The image below outlines the various components that we will walk through in this guide.

Master Node

The master node controls and manages a set of worker nodes and hosts the control plane of the Kubernetes cluster. We can talk to the master node via CLI, GUI, or API, and more than one master node can be used for fault tolerance. All master nodes are connected to etcd, a distributed key-value store that holds the cluster data.

API Server

The API server performs all administrative tasks on the master node. Users send commands to the API server, which validates each request and then processes and executes it.

API Architecture

Key-Value Store (etcd)

etcd is an open-source, distributed key-value store used to hold and manage the critical information that distributed systems need to keep running. Kubernetes uses it as the backing store for all cluster data; it stores the entire configuration and state of the cluster.

Controller

The role of the Controller is to obtain the desired state from the API Server. It checks the current state of the objects it is tasked to control, determines whether there are any differences from the desired state, and resolves them if there are.

Scheduler

The Scheduler’s main job is to watch for new requests coming from the API Server and assign them to healthy nodes. It ranks the quality of the nodes and deploys pods to the best-suited node.

Worker Node

Worker nodes listen to the API Server for new work assignments; they execute the work assignments and then report the results to the Kubernetes Master node.

Kubernetes Master

Kubelet

The kubelet runs on every node in the cluster. It is the principal Kubernetes agent. By installing kubelet, the node’s CPU, RAM, and storage become part of the broader cluster. It watches for tasks sent from the API Server, executes the task, and reports back to the Master.

Container Runtime

The container runtime pulls images from a container image registry and starts and stops containers. Third-party software or a plugin, such as Docker, usually performs this function.

Kube-proxy

The kube-proxy runs on each node and maintains the local network rules that route and load balance Service traffic to the correct pods.

Pod

A pod is the smallest unit of scheduling in Kubernetes. It wraps one or more containers that share storage and network resources; without a pod, a container cannot be part of a cluster.

Pod in Kubernetes

Kubernetes Services

One of the best features Kubernetes offers is that non-functioning pods get replaced by new ones automatically. However, the new pods have a different set of IP addresses, which can lead to processing issues and IP churn as the addresses no longer match. If left unattended, this property would make pods highly unreliable. Kubernetes Services solve this by giving a group of pods a single, stable IP address and DNS name, so clients do not need to track individual pod IPs.
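
A quick way to see this in practice, reusing the service from the earlier walkthrough (the pod name is a placeholder): pod IPs change when a pod is replaced, but the service's cluster IP stays the same.

kubectl.exe get pods -o wide
kubectl.exe delete pod <pod-name>
kubectl.exe get pods -o wide
kubectl.exe get service helloworld-nginx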


Container Deployment

Traditional Deployment

Initially, developers deployed applications on individual physical servers. This type of deployment posed several challenges. The sharing of physical resources meant that one application could take up most of the processing power, limiting the performance of other applications on the same machine.

Traditional Kubernetes Deployment

It takes a long time to expand hardware capacity, which in turn increases costs. To resolve hardware limitations, organizations began virtualizing physical machines.

Virtualized Deployment

Virtualized deployments allow you to scale quickly, spread the resources of a single physical server, update at will, and keep hardware costs in check. Each VM has its own operating system and can run all critical systems on top of the virtualized hardware.

Virtualized Server

Container Deployment

Container Deployment is the next step in the drive to create a more flexible and efficient model. Much like VMs, containers have individual memory, system files, and processing space. However, strict isolation is no longer a limiting factor. Multiple applications can now share the same underlying operating system. This feature makes containers much more efficient than full-blown VMs. They are portable across clouds, different devices, and almost any OS distribution.

Container Deployment Structure

The container structure also allows for applications to run as smaller, independent parts. These parts can then be deployed and managed dynamically on multiple machines. The elaborate structure and the segmentation of tasks are too complex to manage manually.


Azure Kubernetes Service (AKS) is a Microsoft Azure-hosted offering that simplifies deploying and managing your Kubernetes clusters. There is much to be said about AKS and its capabilities, but here I will discuss another crucial aspect of AKS and containers: security. A secure Kubernetes infrastructure is a must, and it can be challenging to know where to start. I'll break down best practices, including baseline security for clusters and pods and network hardening practices that you can apply to your own AKS environment, as well as how to stay current with updates, laying the foundation for a more secure container environment.

Cluster and Pod Security

Let’s first look at some best practices for securing your cluster and pods using policies and initiatives. To get started, Azure has predefined, AKS-specific policies that help improve the posture of your cluster and pods and allow additional control over things such as root privileges. A best practice Microsoft recommends is limiting the actions that containers can perform and avoiding root/privileged escalation. When the Azure Policy Add-on for AKS is enabled, it installs a managed instance of Gatekeeper, which handles enforcement and validation through an admission controller that inspects each request when a resource is created or updated and validates it against your policies. Features such as these are ever-growing and can make creating a baseline easier.

Azure Policy also includes a feature called initiatives. Initiatives are collections of policies that align with organizational compliance goals. Currently, there are two built-in AKS initiatives: baseline and restricted. Both come with many policies that lock down items such as the host filesystem, networking, and ports. By combining initiatives and policies, you can tighten security and meet compliance goals in a more managed fashion.
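
As a sketch, the Azure Policy Add-on mentioned above can be enabled on an existing cluster with the Azure CLI; the resource group and cluster names are placeholders:

az aks enable-addons --addons azure-policy --resource-group myResourceGroup --name myAKSCluster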

Another way to secure your cluster is to protect access to the Kubernetes API server. This is accomplished by integrating Kubernetes RBAC with Azure Active Directory or other identity providers, which allows for granular access, similar to how you control access to your Azure resources. The Kubernetes API is the single connection point for performing actions on a cluster, so it's imperative to deploy logging/auditing and to enforce least-privileged access. The below diagram depicts this process:

Cluster and Pod Security

Reference: https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-cluster-security#secure-access-to-the-api-server-and-cluster-nodes
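
For example, an existing cluster can be switched to Azure AD-integrated authentication with Azure RBAC using the Azure CLI; the names and the admin group object ID are placeholders:

az aks update --resource-group myResourceGroup --name myAKSCluster --enable-aad --aad-admin-group-object-ids <admin-group-object-id> --enable-azure-rbac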

Network Security

Next, let’s look at network security and how it pertains to securing your environment. A first step is to apply network policies. Much like above, Azure has many built-in policies that assist with network hardening, such as policies that only allow specific network traffic from authorized networks based on IP addresses or namespaces. It's also important to note that this can only be configured when the cluster is first created. You also have the option of ingress controllers that listen on internal IP addresses, ensuring they can only be accessed from the internal network. These small steps narrow the attack surface of your cluster and tighten traffic flows.

The diagram below demonstrates using a Web Application Firewall (WAF) and an egress firewall to manage defined routing into and out of your AKS environment. Even more granular control is possible using network security groups (NSGs), which allow only specific ports and protocols based on source and destination. By default, AKS creates subnet-level NSGs for your cluster, and as you add services such as load balancers, port mappings, and ingress routes, it automatically modifies the NSGs. This ensures the correct traffic flow and makes change easier to manage. Overall, these features and policies allow for a secure network posture.

Network Security Graphic

Reference: Microsoft Documentation
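
Since network policy has to be chosen at cluster creation, here is a hedged sketch of that choice with the Azure CLI; the names are placeholders, and azure or calico are the supported policy engines:

az aks create --resource-group myResourceGroup --name myAKSCluster --network-plugin azure --network-policy azure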

The Final Piece

The final piece of securing your AKS environment is staying current on new AKS features and bug fixes, specifically by upgrading the Kubernetes version in your cluster. These upgrades often include security fixes, which are paramount for staying ahead of vulnerabilities that could leave you exposed. I won't go too deep into best practices for Linux node updates or managing reboots; this link dives deeper into what Kured is and how it can be leveraged to process updates safely. There are many ways to foundationally secure your AKS clusters. I hope this article helps future implementations and the maintainability of your deployment.
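
As a sketch, checking for and applying an available Kubernetes version upgrade with the Azure CLI looks like the following; the names are placeholders, and the target version should be one returned by the first command:

az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <target-version>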