Automating the build and deployment of an Angular application in the Azure DevOps (AzDO) ecosystem is as straightforward as it is with any other code hosting service. AzDO has task types to help deploy directly to Azure subscriptions. In this post, we will look at creating a build pipeline with caching, deploying to Azure App Service, and pushing a container to the Azure Container Registry for use in AKS.

Building the Pipeline

AzDO does have a GUI to help set up a JavaScript-based project, and the YAML it generates is a great starting place for most applications. While running the generated build pipeline, we noticed that the Node Package Manager (npm) installation task was taking a significant amount of time, 15 minutes in some cases. The packages and versions had been decided at this point in development and were not expected to change often. AzDO allows files to be cached during a build and reused in subsequent pipeline runs, so npm can restore packages from a local cache instead of downloading them all again when nothing has changed. The package-lock.json file, with its dependency hashes, is the perfect input for checking whether there have been changes.

The first part of the pipeline, installation and building, is kept from the generated pipeline: install the latest Node version (16.14.0 is the latest stable version at the time of writing); globally install the specific Angular CLI version the application needs (13.2.5); install the packages from the package.json file along with a library from a tar archive located in our repository; and finally, run ng build to create a dist folder of the Angular application that can be deployed to the cloud.

steps:

- task: NodeTool@0
  inputs:
    versionSpec: '16.x'
  displayName: 'Install Node.js'

- task: Npm@1  
  displayName: 'Angular CLI 13.2.5'  
  inputs:  
    command: custom  
    verbose: false  
    customCommand: 'install -g @angular/cli@13.2.5'

- task: Npm@1  
  displayName: 'npm install'  
  inputs:
    command: custom  
    customCommand: 'install file:custom-angular-lib.tgz --legacy-peer-deps'  
  
- task: Npm@1  
  displayName: Build  
  inputs: 
    command: custom  
    verbose: false  
    customCommand: 'run build'  

Next, we wanted to update our job with caching to improve the run time of the pipeline. We added the npm_config_cache variable and used the Cache task type to set up the npm install. Pointing this environment variable at a path under $(Pipeline.Workspace) ensures the cache is accessible from container and non-container jobs. The Cache task attempts to restore the cache; when it succeeds, the subsequent npm ci (clean install) step resolves packages from the local cache instead of downloading them from the registry.

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm

... 

- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
        npm | "$(Agent.OS)"
    path: $(npm_config_cache)
  displayName: Cache npm

- script: npm ci

Lastly, build the Angular project with the ng build command and publish the created dist folder to our AzDO artifacts for use in the next stage: deployment.

- task: Npm@1
  displayName: Build
  inputs:
    command: custom
    customCommand: 'run build'

- task: CopyPublishBuildArtifacts@1
  displayName: 'Copy Publish Artifact: angular-web-app'
  inputs:
    CopyRoot: /dist
    Contents: '**'
    ArtifactName: angular-web-app
    ArtifactType: Container


Deploying the Pipeline

We had two deployment targets in mind: Azure Kubernetes Service (AKS) and Azure App Service. Deploying to App Service means taking the archived build artifact and pushing it to the Azure subscription. Alternatively, to run the application in AKS, it is bundled into a Docker image.

Before either deployment option, a task that signs in to the Azure subscription is required; the AzureCLI task authenticates through the Azure Resource Manager service connection. More information on setting up to deploy to Azure can be found in Microsoft’s documentation.

- task: AzureCLI@2
  displayName: Az Login
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # The service connection signs the task in; verify the active subscription
      echo "Logged in to AZ"
      az account show

Deploy to App Service

The Azure Web App task can be used to deploy the Angular application artifact directly: it takes the published artifact and pushes it to the App Service.

- task: AzureWebApp@1
  displayName: 'Deploy Azure Web App : angular-app'
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    appName: angular-app
    appType: webAppLinux
    package: $(Pipeline.Workspace)/angular-web-app
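Note that if the deployment runs in a separate stage from the build, the published artifact may need to be downloaded onto the agent first. A minimal sketch, using the standard download task:

- task: DownloadBuildArtifacts@0
  displayName: 'Download Artifact: angular-web-app'
  inputs:
    buildType: current
    artifactName: angular-web-app
    downloadPath: $(Pipeline.Workspace)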

Container for ACR and AKS

In order to deploy to an AKS cluster, the Angular application needs to be containerized and pushed to the Azure Container Registry (ACR). A basic Nginx container that hosts the dist folder is a common way to dockerize an Angular application. Using the Dockerfile in the repository, the pipeline builds the image and then pushes it to the ACR in our Azure subscription. Once the container is in the registry, it can be used in any container service, like AKS. See the Microsoft documentation on how to push to an Azure Container Registry.

- task: AzureCLI@2
  displayName: Az Container Registry Login
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      echo "Login to AZ"
      az acr login --name <login name>

- task: AzureCLI@2
  displayName: Docker Build angular-app
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      echo "Build image with docker"
      docker build -t angular-app:latest -f Dockerfile .

- task: AzureCLI@2
  displayName: Docker Tag and Push angular-app
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      docker image tag angular-app:latest <name><endpoint>/<repo>/angular-app:latest
      docker image tag angular-app:latest <name><endpoint>/<repo>/angular-app:<build id>
      docker image push <name><endpoint>/<repo>/angular-app:<build id>
      docker image push <name><endpoint>/<repo>/angular-app:latest
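To illustrate the AKS side, here is a minimal sketch of a Kubernetes Deployment that pulls the image we just pushed; the registry path placeholders match the tagging scheme above, and a real manifest would also define a Service and resource limits:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: angular-app
  template:
    metadata:
      labels:
        app: angular-app
    spec:
      containers:
      - name: angular-app
        # Same registry path used in the docker push steps above
        image: <name><endpoint>/<repo>/angular-app:latest
        ports:
        - containerPort: 80  # Nginx serves the dist folder on port 80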

Azure Artifacts vs. Caching

Caching was chosen over creating an artifact because the goal was to improve build time by reusing files from earlier pipeline runs. The Microsoft documentation recommends pipeline caching specifically for reducing build time by reusing files from previous runs, whereas a pipeline artifact should be used when a job needs the files produced by a previous job in order to succeed. Since the node modules can still be installed from the registry even if the cache restore fails, caching made more sense.

Conclusion

By using AzDO pipelines, we successfully created a build pipeline with caching while keeping the flexibility to deploy to Azure App Service or AKS. Caching the npm packages sped up our installation task from 15 minutes to 3 minutes. Retrieving the cached packages still takes time, but the pipeline is faster overall and possibly more cost-effective; using artifacts would not have benefited us in the same manner. Finally, the built-in Azure CLI tasks let us either deploy the Angular application directly to App Service or build a Docker container and push the image up to our Azure Container Registry for use in AKS or any other container service.

Azure Kubernetes Service is a Microsoft Azure-hosted offering that eases deploying and managing your Kubernetes clusters. There is much to be said about AKS and its abilities, but here I will discuss another crucial aspect of AKS and containers: security. Having a secure Kubernetes infrastructure is a must, and it can be challenging to know where to start. I’ll break down best practices, including baseline security for clusters and pods and network hardening practices, that you can apply to your own AKS environment to lay the foundation for a more secure container environment, including how to stay current with updates.

Cluster and Pod Security

Let’s first look at some best practices for securing your cluster and pods using policies and initiatives. To get started, Azure has pre-defined, AKS-specific policies that help improve the posture of your cluster and pods and allow additional control over things such as root privileges. A best practice Microsoft recommends is limiting the actions that containers can perform and avoiding root/privileged escalation. When the Azure Policy Add-on for AKS is enabled, it installs a managed instance of Gatekeeper. This instance handles enforcement and validation through an admission controller that inspects each request to create or update a resource and validates it against your policies. Features such as these are ever-growing and can make creating a baseline easier. Azure Policy also includes a feature called initiatives: collections of policies that align with organizational compliance goals. Currently, there are two built-in AKS initiatives, baseline and restricted. Both come with many policies that lock down items such as the host filesystem, networking, and ports. By combining initiatives and policies, you can tighten security and meet compliance goals in a more managed fashion. A minimal example of what these policies enforce at the pod level follows.
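As a minimal sketch (the pod name and image are hypothetical), a pod spec that such root/privilege-escalation policies would accept might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: <registry>/app:1.0
    securityContext:
      runAsNonRoot: true               # refuse to start the container as root
      allowPrivilegeEscalation: false  # block setuid-style privilege escalation
      readOnlyRootFilesystem: true     # limit writes to the container filesystem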

Another way to secure your cluster is to protect access to the Kubernetes API server. This is accomplished by integrating Kubernetes RBAC with Azure Active Directory or other identity providers, which allows for granular access, similar to how you control access to your other Azure resources. The Kubernetes API is the single connection point for performing actions on a cluster, so it’s imperative to deploy logging/auditing and to enforce least-privileged access. The below diagram depicts this process:

Cluster and Pod Security

Reference: https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-cluster-security#secure-access-to-the-api-server-and-cluster-nodes
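As a sketch of least-privileged access, assuming AKS-managed Azure AD integration is enabled, a namespaced read-only role can be bound to an AD group (the namespace and group object ID below are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-pod-readers
  namespace: dev
subjects:
- kind: Group
  name: "<Azure AD group object ID>"  # placeholder group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io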

Network Security

Next, let’s look at network security and how it pertains to securing your environment. A first step is to apply network policies. Much like above, Azure has built-in policies that assist with network hardening, such as a policy that only allows traffic from authorized networks based on IP addresses or namespaces. It’s important to note that the network policy option can only be enabled when the cluster is first created. You also have the option of ingress controllers bound to an internal IP address, which ensures they can only be accessed from that internal network. These small steps narrow the attack surface of your cluster and tighten traffic flows. The below diagram demonstrates using a Web Application Firewall (WAF) and an egress firewall to manage defined routing in and out of your AKS environment. Even more granular control is possible using network security groups (NSGs), which allow only specific ports and protocols based on source and destination. By default, AKS creates subnet-level NSGs for your cluster and automatically modifies them as you add services such as load balancers, port mappings, and ingress routes. This ensures the correct traffic flow and makes change easier to manage. Overall, these features and policies allow for a secure network posture; a sketch of a basic network policy follows the diagram.

Network Security Graphic

Reference: Microsoft Documentation
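As a minimal sketch of the traffic-restriction idea (the namespace and labels are hypothetical), a Kubernetes NetworkPolicy that only admits traffic from an authorized namespace might look like:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: backend
spec:
  podSelector: {}  # applies to every pod in the backend namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: frontend  # only namespaces carrying this label may connect
    ports:
    - protocol: TCP
      port: 443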

The Final Piece

The final piece of securing your AKS environment is staying current on new AKS features and bug fixes, specifically by upgrading the Kubernetes version in your cluster. These upgrades often include security fixes, and staying up to date on them is paramount to avoid vulnerabilities that could leave you exposed. I won’t go too deep into best practices for Linux node updates or managing reboots; Microsoft’s documentation on Kured dives deeper into how it can be leveraged to process updates safely. There are many ways to foundationally secure your AKS clusters, and I hope this article helps future implementations and the maintainability of your deployment.
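For reference, a minimal sketch of checking for and applying an upgrade with the Azure CLI, expressed as the same kind of AzureCLI task shown earlier (the resource group, cluster name, and version are placeholders):

- task: AzureCLI@2
  displayName: Upgrade AKS Cluster
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # List the Kubernetes versions this cluster can upgrade to
      az aks get-upgrades --resource-group <resource group> --name <cluster name> --output table
      # Upgrade the control plane and node pools to a supported version
      az aks upgrade --resource-group <resource group> --name <cluster name> --kubernetes-version <version> --yes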

In this blog post, I would like to introduce and explain Azure’s new service offering, the Application Gateway Ingress Controller (AGIC), for Azure Kubernetes Service (AKS). AKS is gaining momentum within the enterprise, and Microsoft has been refining the offering to drive adoption. This feature piques my interest because of its capability to improve latency and allow for better application performance within AKS clusters. I will first discuss the basics of the Application Gateway Ingress Controller, then dive into the underlying architecture and network components that can provide performance benefits. I will also explain the improvements and differences this offers over the existing in-cluster ingress controller for AKS. Finally, I will discuss the new Application Gateway features that Microsoft is developing to refine the service even further.

By definition, the AGIC is a Kubernetes application that makes it possible to use Azure’s L7 Application Gateway as the load balancer for your cluster, leveraging features such as:

  • URL routing
  • Cookie-based affinity
  • SSL termination or end-to-end SSL
  • Support for public, private, hybrid web sites
  • Integrated Web Application Firewall (WAF)

The controller runs in its own pod on your AKS cluster. It monitors the cluster and communicates routing changes through the Azure Resource Manager to the Application Gateway, which allows for uninterrupted service to the AKS cluster no matter the changes. Below is a high-level design of how the AGIC is deployed along with the Application Gateway and AKS, including the Kubernetes API server.

Azure Resource Manager
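To make this concrete, here is a minimal sketch of an Ingress manifest annotated for the Application Gateway, assuming the AGIC is already installed in the cluster (the host and service names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    # Tells the AGIC, rather than an in-cluster controller, to handle this Ingress
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-svc
            port:
              number: 80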

Next, let’s take a closer look at the AGIC and break down how it differs from the existing in-cluster ingress controller. The image below shows some distinct differences between the two. The in-cluster load balancer performs all the data path operations using the AKS cluster’s compute resources, so it competes for the same resources as the application running in the cluster.

In-Cluster Ingress Controller

In terms of networking features, there are two different models you can leverage when configuring your AKS cluster: Kubenet and Azure CNI.

Kubenet

This is the default configuration for AKS cluster creation. Each node receives an IP address from the Azure virtual network subnet, while pods receive an IP address from a logically different address space. Network Address Translation (NAT) then allows the pods to communicate with other resources in the virtual network. If you want to dive deeper into Kubenet, Microsoft’s documentation is very helpful. Overall, these are the high-level features of Kubenet:

  • Conserves IP address space.
  • Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
  • You must manually manage and maintain user-defined routes (UDRs).
  • Maximum of 400 nodes per cluster.

Azure Container Networking Interface (ACNI)

The other networking model utilizes the Azure Container Networking Interface (ACNI), and this is how the AGIC works. Every pod gets an IP address from its own private subnet and can be accessed directly, so these IP addresses need to be unique across your network space and should be planned in advance. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved upfront for that node. This approach requires more planning, as it can otherwise lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.

Azure Virtual Network

The Azure documentation also outlines the trade-offs of using Azure CNI:

  • Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
  • Requires more IP address space.
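As a sketch of choosing this model at creation time (the resource names, subnet ID, and pod count are placeholders), a cluster could be created with the Azure CNI plugin like so:

- task: AzureCLI@2
  displayName: Create AKS Cluster with Azure CNI
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Azure CNI assigns pod IPs directly from the subnet, so plan the address space upfront
      az aks create \
        --resource-group <resource group> \
        --name <cluster name> \
        --network-plugin azure \
        --vnet-subnet-id <subnet resource ID> \
        --max-pods 30  # this many IPs are reserved per node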

Performance

Because the ACNI network model gives the Application Gateway direct access to the AKS pods, Microsoft’s documentation states that the AGIC can achieve up to 50 percent lower network latency than in-cluster ingress controllers. Since Application Gateway is a managed service, it is backed by Azure virtual machine scale sets. This means that, instead of utilizing the AKS compute resources for data processing the way the in-cluster ingress controller does, the Application Gateway leverages the Azure backbone for things like autoscaling at peak times, so it will not add to the compute load of your AKS cluster and impact the performance of your application. Microsoft compared the performance of the two services: they set up a web app running 3 nodes with 22 pods per node for each service and generated traffic against the web apps. In their findings, under heavy load, the in-cluster ingress controller had approximately 48 percent higher network latency per request compared to the AGIC.

Conclusion

Overall, the AGIC offers a lot of benefits for businesses and organizations leveraging AKS, from improving network performance to handling on-the-go changes to your cluster configuration as they are made. Microsoft is also building on this offering by adding more features to the service, such as using certificates stored on Application Gateway, mutual TLS authentication, gRPC, and HTTP/2. I hope you found this brief post on the Application Gateway Ingress Controller helpful and useful for future clients. If you want to read Microsoft’s full article on AGIC, see their documentation. You can also check out the Microsoft Ignite video explaining more on the advancements of Application Gateway.

Sound Familiar?

It’s not a sentiment you would expect from most IT decision-makers. However, it’s something we hear from an increasing number of organizations.

The benefits of a well-thought-out cloud transformation roadmap are not lost on them.

  • They know that, in an ideal world, they ought to start with an in-depth assessment of their application portfolio, in line with the best practice – “migrate your capabilities, not apps or VMs”.
  • They also realize the need to develop a robust cloud governance model upfront.
  • And ultimately, they understand the need to undertake an iterative migration process that takes into account “organizational change management” best practices.

At the same time, these decision-makers face real challenges with their existing IT infrastructure that simply cannot wait months or years for a successful cloud transformation to take shape. They can’t get out of their on-premises data centers soon enough. This notion isn’t limited to organizations with fast-approaching Data Center (DC) lease renewal deadlines or end-of-support products, either.

So, how do we balance the two competing objectives:

  • Immediate need to move out of the DC
  • Carefully crafted long-term cloud transformation

A Two-Step Approach to Your Cloud Transformation Journey

From our experience with a broad range of current situations, goals, and challenges, we recommend a two-step cloud transformation approach that addresses both your immediate challenges and the organization’s long-term vision for cloud transformation.

  1. Tactical “Lift-n-Shift” to the Cloud – As the name suggests, move the current DC footprint as is (VMs, databases, storage, network, etc.) to Azure
  2. Strategic Cloud Transformation – Once operational in the cloud, incrementally and opportunistically move parts of your application portfolio to higher-order Azure PaaS/cloud-native services

Tactical “Lift-n-Shift” to the Cloud

Lift n Shift Approach to Cloud Transformation

On the surface, step #1 above may appear wasteful. After all, we are duplicating your current footprint in Azure. But keep in mind that step #1 is designed for completion in days or weeks, not months or years. As a result, the duplication is minimized. At the same time, step #1 immediately puts you in a position to leverage Azure capabilities, giving you tangible benefits with minimal to no changes to your existing footprint.

Here are a few examples of benefits:

  • Improve the security posture – Once you are in Azure, you tap into security capabilities such as intrusion detection and denial-of-service attack protection simply by being in Azure. Notice that I deliberately did not cite Security Information and Event Management (SIEM) tools like Azure Sentinel, since technically you can take advantage of Azure Sentinel for on-premises workloads as well.
  • Replace aging hardware – Your hardware may be getting old but isn’t old enough for a Capex-powered refresh. Moving your VMs to Azure decouples you from the underlying hardware. “But won’t that be expensive, since you are now paying by usage per minute?” you ask. Not necessarily, and certainly not in the long run. Consider options like Reserved Instance (RI) pricing that can offer up to an 80% discount based on a one- or three-year commitment.

Furthermore, you can combine RI with Azure Hybrid Benefits (AHUB) which provides discounts for licenses already owned. Finally, don’t forget to take into account the savings from decreased needs for power, networks, real estate, and the cost of resources to manage all the on-premises assets. Even if you can’t get out of the DC lease completely, you may be able to negotiate a modular reduction of your DC footprint. Please refer to Gartner research that suggests that over time, the cloud can become cost-effective.

AMP Move out of Data Center

Source – https://blogs.gartner.com/marco-meinardi/2018/11/30/public-cloud-cheaper-than-running-your-data-center/

  • Disaster Recovery (DR) – Few organizations have a DR setup that is conducive to ongoing DR tests, yet having an effective DR plan is one of the most critical responsibilities of IT. Since geo-replication is innate to Azure, your disks are replicated to an Azure region that is at least 400 miles away, by default. Given this, DR is almost out-of-the-box.
  • Extended lease of life on out-of-support software – If you are running out-of-support software, such as Windows Server 2008 or SQL Server 2008, moving to Azure extends the security updates for up to three years from the “end of support” date.
  • Getting out of the business of “baby-sitting” database servers – Azure managed instances offer you the ability to take your existing on-premises SQL Server databases and move them to Azure with minimal downtime. Once your database is an Azure SQL Managed Instance, you don’t have to worry about patching and backup, thereby significantly reducing the cost of ownership.
  • Take baby steps towards automation and self-service – Self-service is one of the key focus areas for most IT organizations. Once again, since every aspect of Azure is API driven, organizations can take baby steps towards automated provisioning.
  • Get closer to a data lake – I am sure you have heard the quote “AI is the new electricity”. We also know that Artificial Intelligence (AI) needs lots and lots of data to train the Machine Learning (ML) algorithms. By moving to Azure, it is that much easier to capture the “data exhaust” coming out of the applications in a service like Azure Data Lake. In turn, Azure Data Lake can help turn this data into intelligence.

Strategic Cloud Transformation

Strategic Cloud Transformation

Once you have completed step #1 by moving your on-premises assets to the cloud, you are now in a position to undertake continuous modernization efforts aligned to your business priorities.

Common approaches include:

  • Revise – Capture application and application tiers “as-is” in containers and run on a managed orchestrator like Azure Kubernetes Service. This approach requires minimal changes to the existing codebase. For more details of this approach, including a demo, read Migrate and Modernize with Kubernetes on Azure Government.
  • Refactor – Modernize by re-architecting to target Platform as a Service (PaaS) and “serverless” technologies. This approach requires more significant recoding to target PaaS services but allows you to take advantage of cloud provider managed services. For more information, check out our “Full PaaS” Approach to Modernizing Legacy Apps.
  • Rebuild – Complete rewrite of the applications using cloud-native technologies like Kubernetes, Envoy, and Istio. Read our blog, What Are Cloud-Native Technologies & How Are They Different from Traditional PaaS Offerings, for more information.
  • Replace – Substitute an existing application, in its entirety, with Software as a Service (SaaS) or an equivalent application developed using a no-code/low-code platform.


The following table summarizes the various approaches for modernization in terms of factors such as code changes, operational costs, and DevOps maturity.

Compare App Modernization Approaches

Azure Migration Program (AMP)

Microsoft squarely aligns with this two-step approach. At the recent Microsoft partner conference #MSInspire, Julia White announced AMP (Azure Migration Program).

AMP brings together the following:

Wrapping Up

A two-step migration offers a programmatic approach to unlock the potential of the cloud quickly. You’ll experience immediate gains from a tactical move to the cloud and long-term benefits from the strategic cloud transformation that follows. Microsoft programs like AMP, combined with more than 200 Azure services, make this approach viable. If you’re interested in learning more about how you can get started with AMP, and which migration approach makes the most sense for your business goals, reach out to AIS today.


Azure Monitor and OMS: A Single Place to Manage, Create, and Consume

The integration of the Operations Management Suite (OMS) into Azure Monitor is complete for both Azure Commercial and Azure Government. This change gives Azure Monitor/OMS users a single place to manage, create, and consume Azure monitoring solutions. No functionality has been removed, and the documentation has been consolidated under the Azure Monitor documentation. With this consolidation of services, there have been some terminology changes that will impact the way one talks about Azure Monitor components. The consolidation of OMS and other Azure services into Azure Monitor simplifies the way you manage the monitoring of your Azure services.

Updated Terminology

Microsoft has updated some of the terminology for the Azure Monitor components to reflect the transition from OMS. I have highlighted some examples:

  • The log data for Azure Monitor is still stored in a Log Analytics Workspace, but the term Log Analytics in the Microsoft documentation is now Azure Monitor Logs.
  • The term Log Analytics now applies to the page in the Azure portal used to write and run queries and analyze log data.
  • What were once known as OMS management solutions have been renamed Monitoring solutions (items like Security & Compliance and Automation & Control).

Azure Monitor — Your 1 Stop “Monitoring & Alerting” Shop

Azure Monitor is now pretty much the one-stop shop for your monitoring and alerting needs (the exception being Azure Security Center, which is still the place to go for most of your security and compliance needs).

Azure Monitor is broken out into four main categories in the Azure Portal:

  1. The main components of Azure Monitor
  2. Insights
  3. Settings
  4. Support + Troubleshooting

The main components include the Activity log, Alerts, Metrics, Logs, Service Health, and Workbooks.

Under Insights, there is Application, Virtual Machines, Containers, Network, and “…More”.

The Settings category includes Diagnostics settings and Autoscale.

And finally, under Support + Troubleshooting, there is Usage & estimated costs, Advisor recommendations, and New support request.

Check out the list below for an overview of the Azure Monitor components and their descriptions:

  • Overview: Overview of Azure Monitor
  • Activity Log: Log data about the operations performed in Azure
  • Alerts: Notifications based on conditions found in monitoring data, both metrics and logs
  • Metrics (Metrics Explorer): Plotting charts, visually correlating trends, and investigating spikes and dips in metrics’ values
  • Logs (Azure Monitor Logs): Useful for performing complex analysis across data from a variety of sources
  • Service Health: Provides a personalized view of the health of the Azure services and regions you’re using
  • Workbooks: Combine text, Analytics queries, Azure Metrics, and parameters into rich interactive reports
  • Applications: Application Performance Management service for web developers
  • Virtual Machines: Analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes
  • Containers: Monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS)
  • Network: Tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network
  • More: Replacement for the OMS Portal Dashboard
  • Diagnostic Settings: Configure the diagnostic settings for Azure resources (formerly known as Diagnostic Logs)
  • Autoscale: Consolidated view of Azure resources that have Autoscale enabled
  • Usage and estimated costs: Consumption and cost estimates of Azure Monitor
  • Advisor Recommendations: Link to Azure Advisor
  • New support requests: Create a support request

I just returned from Microsoft BUILD 2019, where I presented a session on Azure Kubernetes Service (AKS) and Cosmos DB. Thanks to everyone who attended. We had excellent attendance – the room was full! I like to think that the audience was there for the speaker 😊 but I’m sure the audience interest is a clear reflection of how popular AKS and Cosmos DB are becoming.

For those looking for a 2-minute overview, here it is:

In a nutshell, the focus was to discuss combining a cloud-native service (like AKS) with a managed database (like Cosmos DB).

Slide deck: Architecting Cloud-Native Apps with AKS and Cosmos DB

We started with a discussion of Cloud-Native Apps, along with a quick introduction to AKS and Cosmos. We quickly transitioned into stateful app considerations and talked about new stateful capabilities in Kubernetes including PV, PVC, Stateful Sets, CSI, and Operators. While these capabilities represent significant progress, they don’t match up with external services like Cosmos DB.


Slide: Cloud-Native Tooling

One option is to use the Open Service Broker. It allows Kubernetes-hosted services to talk to external services using cloud-native tooling like svcat (Service Catalog).
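As a sketch of the idea, assuming the Open Service Broker for Azure and the Kubernetes Service Catalog are installed, provisioning and binding an external database from inside the cluster looks roughly like this (the class, plan, and parameter names are illustrative and depend on the broker’s catalog):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: cosmos-instance
spec:
  clusterServiceClassExternalName: azure-cosmosdb-sql  # illustrative class name
  clusterServicePlanExternalName: sql-api              # illustrative plan name
  parameters:
    location: eastus
    resourceGroup: <resource group>
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: cosmos-binding
spec:
  instanceRef:
    name: cosmos-instance
  secretName: cosmos-credentials  # connection details are written to this Secret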

Slide: svcat

Slide: SRE

External services like Cosmos DB can go beyond cluster SRE and offer “turn-key” SRE in essence: specifically, geo-replication, API-based scaling, and even multi-master writes (eliminating the need to fail over).

Slide: Multi-Master Support

Slide: Configure Regions

Slide: Portability

Since the Open Service Broker is an open specification, your app remains mostly portable even when you move from one cloud provider to another. The Open Service Broker does not deal with syntactic differences, such as a connection string prefix difference between cloud providers. One way to handle these differences is to use Helm, as sketched below.
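As a hedged illustration of that Helm approach (all names here are hypothetical), the provider-specific prefix can live in per-provider values files and be templated into configuration:

# values-azure.yaml (hypothetical per-provider values file)
db:
  connectionPrefix: "AccountEndpoint="
  host: "<database endpoint>"

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-db-config
data:
  # Swapping values files isolates the syntactic difference between providers
  connectionString: "{{ .Values.db.connectionPrefix }}{{ .Values.db.host }}"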

Learn more about my BUILD session:

Here you can find the complete recording of the session and slide deck: https://mybuild.techcommunity.microsoft.com/sessions/77138?source=sessions#top-anchor

Additionally, you can find the code for the sample I used here: https://github.com/vlele/build2019 
