Preparation

We must prepare our Azure subscription by setting the correct permissions and prepare the on-premises Hyper-V hosts and VMs for server assessment and migration.

We need to create a new migration project in the Assess and migrate servers section of the Azure portal.

Create a new migration project

Next, we need to add the tools for the migration assessment. Click on the add tools button.

Add Migration Tools

After that, we need to create a new Azure Migrate project. Enter the details of the subscription, resource group, and project name. We also need to select the region where the migration project will be deployed.


Adding Migration Tooling

Next, we need to select the assessment tool we want to use. This can be the Microsoft tool or any third-party provider. In this case, we are using the Microsoft assessment tool.

Assessment Tools

Next, we can select the migration tool we want to use. In this case, we selected the Microsoft migration tool. We can also skip this step and select the migration tool after assessing our environment.

Adding Migration Tools

Click on add tool. This will create the Azure migration project with the selected tool.

Create Azure Migration

Now we need to start assessing our Hyper-V VMs.

Now that we have completed the Azure Migrate project setup, we can discover and assess the Hyper-V virtual machines for migration. We need to download the Azure Migrate appliance from the Azure portal. Click on “Discover” and select “Yes, with Hyper-V.” This provides a download link for the appliance VHD file, which we can use to create a new virtual machine on the Hyper-V server.

Download Azure Migrate appliance

Set Up and Register the Appliance

Register a new appliance

We must import a new Hyper-V VM using this existing VHD file and set the VM switch. After the appliance VM is running, you can open a browser on a computer that can connect to the appliance VM and open the URL of the appliance web app: https://appliancename-or-IPAddress:44368. Now you can configure the appliance, such as setting connection details, verifying time sync, and installing updates.
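
If you prefer scripting the import instead of using Hyper-V Manager, here is a minimal PowerShell sketch to run on the Hyper-V host. The VM name, VHD path, switch name, generation, and sizing below are placeholders and assumptions; adjust them to your environment and check the current appliance hardware requirements in the Azure Migrate documentation.

# Create and start the appliance VM from the downloaded VHD.
# Names, paths, generation, and sizes are placeholders -- adjust to your environment.
New-VM -Name "AzureMigrateAppliance" `
    -MemoryStartupBytes 16GB `
    -BootDevice VHD `
    -VHDPath "C:\AzureMigrate\AzureMigrateAppliance.vhd" `
    -Generation 1 `
    -SwitchName "ExternalSwitch"
Set-VMProcessor -VMName "AzureMigrateAppliance" -Count 8
Start-VM -Name "AzureMigrateAppliance"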

Set up discovery for Azure Migrate

In the next step, we need to register the appliance with Azure Migrate.

Register Appliance with Azure Migrate

Begin Continuous Discovery

Specify credentials to connect to Hyper V

We can now set up and configure continuous discovery by connecting the appliance to Hyper-V hosts or clusters and start VM discovery. The appliance will begin collecting and discovering virtual machines. It can take up to 15 minutes until the discovered VMs appear in the Azure portal. If you want to use performance data in your assessment, we recommend that you create a performance-based assessment after 24 hours or more.

Hyper-V Migrate Discovered Servers

Create and Review the Assessment

After the servers have been discovered, you can now create a new assessment for the specific virtual machines you want to assess.

Review Assessment

After the assessment is complete, you will see a report showing you the Azure readiness and monthly cost estimation for compute and storage.

Azure Readiness

The report will provide detailed information about readiness and will show the status of each VM:

  • Ready for Azure: Azure Migrate recommends a VM size and cost estimates for VMs in the assessment.
  • Ready with conditions: The VM can be migrated, but with issues and suggested remediation shown.
  • Not ready for Azure: Shows blocking issues and suggested remediation.
  • Readiness Unknown: Used when Azure Migrate can’t assess readiness due to data availability issues.

Assessment Cost Details

It will also give you more information on the cost of running VMs in Azure:

  • Cost estimates are based on the size recommendations for a machine and its disks and properties. Create the report after 24 hours or more if you want more accurate performance data.
  • Estimated monthly costs for compute and storage are shown.
  • The cost estimation is for running the on-premises VMs as IaaS VMs. Azure Migrate Server Assessment doesn’t consider PaaS or SaaS costs.

Migrate Hyper-V VMs using Azure Migrate

Migrate Hyper-V VMs

After you have done your assessment and you know which Hyper-V virtual machines you want to migrate, you can now start with the migration process.

Prepare Host Servers

The Azure Migrate Server Migration runs a lightweight Hyper-V VM appliance. This is the same VM used to assess the Hyper-V environment (Azure Migrate Server Assessment tool). If you didn’t use the assessment tool and went directly to migration, you will need to download and set up the appliance, like in the assessment process. If you already have that VM from the assessment, you can use the existing virtual machine.

Replicate Hyper-V VMs

To start the replication, select the virtual machines you want to replicate to Azure. You can choose to apply the sizing and settings directly from the Azure Migrate assessment.

Replicating Hyper VMs

We can configure the target settings like resource group and virtual network in the Azure environment.

Target Settings for Migration

In the compute tab, you can change the VM size, OS disk, availability set, disks, and VM name.

Changes to Target VM sizes

We can start replicating the virtual machines once all the configuration is done. Azure Migrate will automatically provision other Azure resources for the migration, such as a Service Bus namespace, gateway storage account, log storage account, and key vault. The initial synchronization will take a while, depending on the size of the virtual machines and the available internet bandwidth.

Test and Migrate Hyper-V VMs

Virtual Machine Replicated

When the virtual machine is replicated, you can now start to migrate it. However, in most cases, you first want to test the virtual machines before you completely migrate them. With Azure Migrate, you can first initiate a test migration to test the VM.

If you click on the virtual machine you are replicating, you get more information about the server details, migration status, and replication settings.

Replicating Servers

We can click on “Replicating servers,” right-click the machine we want to migrate, and click “Migrate,” or we can click “Migrate” on the server overview page. Azure Migrate will not remove or delete the on-premises Hyper-V virtual machine. By default, however, Azure Migrate shuts down the on-premises VM and runs an on-demand replication to synchronize any VM changes since the last replication occurred. This ensures no data loss.

Microsoft Azure Planned Failover

When the job is finished, you can view and manage the VM like any other Azure virtual machine.

Azure Migrate Post-Migration and Cleanup

Complete the Migration

  • After the migration is done, right-click the VM > Stop migration. This stops replication for the on-premises machine and cleans up replication state information for the VM.
  • Install the Azure VM Windows or Linux agent on the migrated machines. Perform post-migration app tweaks, such as updating database connection strings and web server configurations.
  • Perform final application and migration acceptance testing on the migrated application now running in Azure.
  • Cut over traffic to the migrated Azure VM instance.
  • Remove the on-premises VMs from your local VM inventory.
  • Remove the on-premises VMs from local backups.
  • Update any internal documentation to show the new location and IP address of the Azure VMs.

Post Migration Best Practices

  • Keep data secure by backing up Azure VMs using the Azure Backup service.
  • Keep workloads running and continuously available by replicating Azure VMs to a secondary region with Site Recovery.

Kubernetes Implementation Steps

To install and set up Kubernetes on Windows, install kubectl and Minikube. Minikube is a tool that enables us to run a single-node cluster in a VM on a machine.

Choco Install Kubernetes-cli Minikube

Windows PowerShell
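
If you use Chocolatey, the install command shown in the screenshot is essentially the following (run from an elevated PowerShell prompt):

choco install kubernetes-cli minikube -y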

Now, run the following command and verify that kubectl is installed on your machine:

kubectl.exe version --client

Verify Kubectl is Installed
Next, spin up a worker machine, or node, in Kubernetes. It can be a physical or virtual machine. To do this, use the following command to start Minikube:

Minikube Start

Launch VM and set up Kubernetes Cluster
Since we are installing Kubernetes on Windows, use Hyper-V to launch a VM and set up a Kubernetes cluster on the VM.
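
A minimal sketch of the start command, assuming the Hyper-V driver (the driver flag is optional if Hyper-V is already your default driver):

minikube start --driver=hyperv
# Verify the node is up once the cluster has started
kubectl.exe get nodes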

This will complete the setup.

Running Kubernetes containers

To start containers, use the Kubernetes command below to create a deployment. Provide a name for the deployment and the container image to deploy.

kubectl.exe create deployment helloworld-nginx --image nginx

When a deployment is created, Kubernetes builds pods to host application instances.

Run the get pods command just after running the previous command to catch the ContainerCreating status as pods are deployed:

kubectl.exe get pods

Kubernetes Deployment

Use the get deployment command to list your deployments, and the describe keyword with the deployment name, as shown below, to see more detailed information about a deployment:

kubectl.exe get deployment

kubectl.exe describe deployment helloworld-nginx

Scale Up Applications

In this Kubernetes implementation tutorial, we run only one container, an Nginx server, but sometimes it is necessary to accommodate increased workload and traffic. In that case, scale up the number of application instances. This can be achieved using kubectl scale deployment with a --replicas parameter:

kubectl.exe scale deployment helloworld-nginx --replicas 4

Use a Replicas Parameter

Now, check the Kubernetes pods; there should be four containers running on the node:

Four Containers Running on Node

Deploy your app

Now there is an application running in multiple containers with their own IP addresses. Next, expose them outside the cluster so that the application is accessible:

kubectl.exe expose deployment helloworld-nginx --port=80 --type=NodePort

Verify this via the kubectl get services command. This will return a NodePort service that exposes port 80 on each node of the Kubernetes cluster. This service is an abstraction layer that load balances and groups more than one pod in the cluster under a shared IP address.

NodePort Service

To open this application in a web browser, create a proxy to connect the local port to the cluster port, which we exposed using the NodePort service in the previous step:

kubectl.exe port-forward svc/helloworld-nginx 80:80

Create a Proxy

Go to the web browser and open http://127.0.0.1/ or http://localhost/ to see the homepage of the Nginx web server deployed in Kubernetes containers.

Nginx Homepage

Test the Kubernetes Control Plane

Lastly, test that the Kubernetes control plane, or master server, can maintain the desired state of the pods that run on the node server. To check this, use the following command to forcibly delete one of the pods that run the application instance.

kubectl.exe delete pod helloworld-nginx-67bb76cf46-mks6j

This undesired state will be identified, and Kubernetes will immediately run a new instance of the application in a container:

Running Application on Container


Terragrunt Example

Let’s consider a situation where we want to maintain the infrastructure for a system with two major components: an API and a database solution. We must also deploy dev, test, stage, and production environments for this system. The dev and test environments are deployed to one region, while the stage and production environments are deployed to two regions.

We’ve created a preconfigured sample repository to demonstrate how we might handle something like this with Terragrunt. Now, although the requirements and scenario described above may not pertain to you, the preconfigured sample repository should give you a good idea of what you can accomplish with Terragrunt and the benefits it provides in the context of keeping your Terraform code organized. Also, keep in mind that Terragrunt is unopinionated and allows you to configure it in several ways to accomplish similar results; we will only cover a few of the benefits Terragrunt provides, but be sure to check out their documentation site for more information.

To get the most out of the code sample, you should have the following:

Run through the setup steps if you need to. This will involve running a mini terraform project to provision a few resource groups in addition to a storage account to store your terraform state files.
The sample repo contains several top-level directories:

  • /_base_modules
  • /bootstrap
  • /dev
  • /test
  • /stage
  • /prod
  • _base_modules folder – contains the top-level terraform modules that your application will use. There are subfolders for each application type, the API and the storage solution (/api and /sql). For example, there is a subfolder for the API, which contains the terraform code for your API application, and one for SQL, which includes the terraform code for your storage/database solution; take note of the main.tf, variables.tf, and outputs.tf files in each subfolder. Each application type folder also contains a .hcl file with global configuration values for all environments that consume that respective application type.
  • [dev/test/stage/prod] – environment folders that contain subfolders for each application type. Each application type subfolder contains Terragrunt configuration files with variables and inputs specific to that environment.
  • /bootstrap – a small, isolated terraform project that spins up placeholder resource groups in addition to a storage account that can be used to maintain remote terraform state files.

As mentioned above, there are several .hcl files in a few different places within this folder structure. These are Terragrunt configuration files. You will see one within each sub folder inside the _base_modules directory and one in every subfolder within each environment folder. These files are how Terragrunt knows what terraform commands to use, where to store each application’s remote state, and what variable files and input values to use for your terraform modules defined in the _base_modules directory. Read more about how this file is structured on Gruntwork’s website. With this sample repository, global configurations are maintained in the /_base_modules folder and consumed by configurations in the environment folders.

Let’s go over some of the basic features that Terragrunt offers.

Keeping your Remote State Configuration DRY

I immediately noticed when writing my first bits of Terraform code that I couldn’t use variables, expressions, or functions within the terraform configuration block. You can override specific parts of this configuration through the command line, but there was no way to do this from code.

Terragrunt allows you to keep your backend and remote state configuration DRY by letting you share the code for backend configuration across multiple environments. Look at the /_base_modules/global.hcl file in conjunction with the /dev/api/terragrunt.hcl file.

/_base_modules/global.hcl:

remote_state {
  backend = "azurerm"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    resource_group_name  = "shared"
    storage_account_name = "4a16aa0287e60d48tf"
    container_name       = "example"
    key            = "example/${path_relative_to_include()}.tfstate"
  }
}

This file defines the remote state that will be used by all environments and modules that include it. Take special note of the ${path_relative_to_include()} expression – more on this later.

A remote_state Terragrunt block that looks like this:

remote_state {
    backend = "s3"
    config = {
      bucket = "mybucket"
      key    = "path/to/my/key"
      region = "us-east-1"
    }
  }

is equivalent to a terraform block that looks like this:

  terraform {
    backend "s3" {
      bucket = "mybucket"
      key    = "path/to/my/key"
      region = "us-east-1"
    }
  }

To inherit this configuration into a child subfolder or environment folder, you can do this:

/dev/api/terragrunt.hcl

include "global" {
  path = "${get_terragrunt_dir()}/../../_base_modules/global.hcl"
  expose = true
  merge_strategy = "deep"
}

The include statement above tells Terragrunt to merge the configuration file found at _base_modules/global.hcl with its local configuration. The ${path_relative_to_include()} expression in the global.hcl file is a built-in function that returns the relative path of the calling .hcl file, in this case /dev/api/terragrunt.hcl. Therefore, the resulting state file for this module would be in the example container at dev/api.tfstate. For the sql application in the dev environment, the resulting state file would be dev/sql.tfstate; look at the _base_modules/sql/sql.hcl file. For the api application in the test environment, the resulting state file would be test/api.tfstate. Be sure to check out all of the built-in functions Terragrunt offers out of the box.

Using the feature just mentioned, we only define the details of the remote state once, allowing us to cut down on code repetition. Read more about the remote_state and include blocks and how you can configure them by visiting the Terragrunt documentation. Pay special attention to merge strategy options, how you can override includes in child modules, and the specific limitations of configuration inheritance in Terragrunt.

Keeping your Terraform Configuration DRY

Merging of configuration files does not only apply to remote state configurations – you can also apply it to the sources and inputs of your modules.
In Terragrunt, you can define the source of your module (main.tf or top level terraform module) within the terraform block. Let’s consider the api application:

/_base_modules/api/api.hcl

terraform {
  source = "${get_terragrunt_dir()}/../../_base_modules/api"

  extra_arguments "common_vars" {
    commands = get_terraform_commands_that_need_vars()

    required_var_files = [
      
    ]
  }
}

You’ll notice this is referencing a local path; alternatively, you can also set this to use a module from a remote git repo or terraform registry.
The api.hcl configuration is then imported as a configuration into each environment folder for the api application type:

Ex. /dev/api/terragrunt.hcl

include "env" {
  path = "${get_terragrunt_dir()}/../../_base_modules/api/api.hcl"
  expose = true
  merge_strategy = "deep"
}

Include statements with specific merge strategies can also be overwritten by configurations in child modules, allowing you to configure each environment separately if needed.
Merging inputs before they are applied to your terraform module is also extremely helpful if you need to share variables across environments. For example, all the names of your resources in your project might be prefixed with a specific character set. You can define any global inputs in the inputs section of the _base_modules/global.hcl file. Because Terragrunt configuration files are written in the HCL language, you can also utilize all the expressions and functions you use in Terraform to modify or restructure input values before they are applied. Look at how we are defining the identifier input variable found in both sql and api modules:

Here is the terraform variable:

/_base_modules/api/variables.tf and /_base_modules/sql/variables.tf

variable "identifier" {
  type = object({
      primary = string
      secondary = string
      type = string
  })
}

Here is the primary property being assigned from the global env:

/_base_modules/global.hcl

... 
inputs = {
    identifier = {
        primary = "EXAMPLE"
    }
}
...

Here is the secondary property being assigned from the dev/dev.hcl file:

/dev/dev.hcl

inputs = {
 identifier = {
     secondary = "DEV"
 }
}

And here is the type property being applied in the module folders

/_base_modules/sql/sql.hcl

...
inputs = {
    identifier = {
        type = "SQL"
    }
}

/_base_modules/api/api.hcl

...
inputs = {
    identifier = {
        type = "API"
    }
}

All configurations are included in the environment configuration files with:

include "global" {
  path = "${get_terragrunt_dir()}/../../_base_modules/global.hcl"
  expose = true
  merge_strategy = "deep"
}

include "api" {
  path = "${get_terragrunt_dir()}/../../_base_modules/api/api.hcl"
  expose = true
  merge_strategy = "deep"
}

include "dev" {
  path = "../dev.hcl"
  expose = true
  merge_strategy = "deep"
}

would result in something like:

inputs = {
    identifier = {
      primary = "EXAMPLE"
      secondary = "DEV"
      type = "API"
    }
}

We utilize this pattern to share variables across all environments and applications within a specific environment without having to declare them multiple times.
It is also important to note that because Terragrunt configuration files are written in the HCL language, you can access all of Terraform’s functions and expressions. As a result, because you can inherit Terragrunt configuration files into a specific environment, you can restructure, merge, or alter input variables before they are sent to terraform to be processed.

Running Multiple Modules at once

You can also run multiple terraform modules with one command using Terragrunt. For example, if you wanted to provision dev, test, stage, and prod with one command, you could run the following command in the root directory:

terragrunt run-all [init|plan|apply]

If you wanted to provision the infrastructure for a specific tier, you could run the same command inside an environment folder (dev, test, stage etc.).
This allows you to neatly organize your environments instead of maintaining everything in one state file or trying to remember what variable, backend, and provider configurations to pass in your CLI commands when you want to target a specific environment.
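
For example, here is a sketch of targeting only the dev tier, assuming you are starting from the repository root:

cd dev
terragrunt run-all plan
terragrunt run-all apply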

It is important to note that you can maintain dependencies between application types within an environment (between the sql and api application) and pass outputs from one application to another. Look at the dev/api environment configuration file.

/dev/api/terragrunt.hcl

dependency "sql" {
  config_path = "../sql"
  mock_outputs = {
    database_id = "temporary-dummy-id"
  }
}

locals {
}

inputs = {
  database_id = dependency.sql.outputs.database_id
  ...
}

Notice that it references the dev/sql environment as a dependency. The dev/sql environment uses the _base_modules/sql application, so look at that module, specifically the outputs.tf file.

/_base_modules/sql/outputs.tf

output "database_id" {
  value = azurerm_mssql_database.test.id
}

Notice that this output is being referenced in the /dev/api/terragrunt.hcl file as a dependency.

The client requirements described earlier in this post proved to be especially difficult to maintain without the benefit of being able to configure separate modules that depend on one another. With the ability to isolate different components of each environment and share their code and dependencies across environments, we could maintain multiple environments effectively and efficiently with different configurations.

Conclusion

Terraform as an infrastructure as code tool has helped us reliably develop, maintain, and scale our infrastructure demands. However, because our client work involved maintaining multiple environments and projects simultaneously, we needed specific declarative design patterns to organize our infrastructure development. Terragrunt offered us a simple way to develop multiple environments and components of a given application in a way that was repeatable and distributable to other project pipelines.

There are several features of Terragrunt we did not discuss in this post:

  • Before, After, and Error Hooks
  • Maintaining CLI flags

We would like to see some of the functionality Terragrunt offers baked into Terraform by default. However, we do not feel like Terragrunt is a final solution; Terraform is rather unopinionated and less concerned with how you set up your project structure, while Terragrunt is only slightly more opinionated in your setup. Terragrunt claims to be DRY, but there is still a lot of code duplication involved when creating multiple environments or trying to duplicate infrastructure across regions. For example, creating the folder structure for an environment is cumbersome, especially when you want to add another tier.

There are times when you must correlate different log sources within a centralized Azure Log Analytics Workspace to have a single management point for leveraging the robust suite of tooling available within Azure that provides visualizations (Dashboards / Workbooks) and action and mitigation capabilities (Alerts / Automation). If you have resources spanning multiple tenants, Azure Lighthouse can be used to delegate access to those resources to collect these logs.

Azure Lighthouse

However, Azure Lighthouse has its limitations. One that we recently encountered with a customer was the inability to delegate across the Azure Commercial and Azure Government clouds:

 “Delegation of subscriptions across a national cloud and the Azure public cloud, or across two separate national clouds, is not supported.”

Cross-tenant management experiences – Azure Lighthouse | Microsoft Docs

To facilitate log collection, we had to implement a Logic Apps solution to move data from one cloud to another.

Use Case

Application Insight data is hosted in an Azure Commercial tenant and must be transported for storage in a centralized log analytics workspace in an Azure Government tenant.

Solution Overview

The solution is to export the logs to an event hub and use a logic app to pull in data from the event hub to the workspace in the new tenant.

Tenant 1 & 2

Below is a tutorial on how we were able to accomplish this design. Note: We are porting Application Insights data to a Log Analytics Workspace in our scenario, but a workspace-to-workspace transfer will also work by using the “Data Export” feature available in the Log Analytics Workspace blade, sending the logs of your choosing to an event hub, and following the same subsequent steps outlined in this article.

Step One: Create an Event Hub

To pull Application Insights data into a Logic App in another tenant, you will first send the data to an event hub. You will need to create the event hub namespace in the same region as your source of logs. In this example, I used the basic pricing tier.

Event Hub Creation

Once the Azure deployment is finished, you must create an event hub with partition and retention settings. Navigate to your newly created event hub object in Azure, and under the Entities section of the blade, you can create a new event hub. Here I am using the default partition/retention settings.

Basic Event Hub

Once this hub is created, we are ready to send our application insight data.
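
If you prefer the command line over the portal, here is a rough Azure CLI equivalent of the two steps above; the resource group, namespace, and hub names are placeholders:

az eventhubs namespace create --resource-group my-rg --name my-appinsights-ns --location eastus --sku Basic
az eventhubs eventhub create --resource-group my-rg --namespace-name my-appinsights-ns --name appinsights-hub --partition-count 2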

Step Two: Send Application Insight Data to the Event Hub

Once the event hub has been deployed, you can start sending Application Insight data to the hub.

Navigate to your application insight objects, and in the blade, go to “Diagnostic settings.” Here I will create a diagnostic setting – pointing to the event hub. Give the setting a logical name and select the application data you want to send on the left pane. On the right pane, select “Stream to an event hub” and use the event hub namespace we created in the previous step.

App Diagnostics

Step Three: Create a Logic App in the New Tenant

Now that app data is being exported into Event Hubs, a Logic App can be created in the new tenant to pull in the data and send it to the central Log Analytics Workspace. In this example, I used the consumption tier.

Logic App Creation


Step Four: Program the Logic App to Retrieve the Event Hub Data

Within our logic app, we want to build out three components within the designer:

  1. An execution trigger; in our example, a recurring timer.
  2. A condition and action: when events are available in the event hub, parse the data in each message.
  3. For each event message, send the data to the Log Analytics Workspace.

For the first step – we will add a simple timer with a 1-minute recurrence frequency.

Set Timer Reoccurrence

For the second step, I will add an event hub trigger object: “when events are available in the event hub.” For this step, I will need to enter a connection string – this information can be found in the event hub object blade, under “Shared Access Policies.” Selecting the policy object will reveal the authentication keys and connection strings. You can choose the primary or secondary key connection string.

Available Events

Root Manager Shared Access Key

Create Event Hub

Next, I will need to parse out the event hub data. We will use the Parse JSON action under “Data Operations” for this. The content will be the body of the event hub message.

Event Hub Data Parse JSON

The JSON data schema will depend on what telemetry your application is sending. You can upload a sample to generate the schema.
Lastly, I will want to send the logs to the log analytics workspace in the new tenant. For this, I will set up another data operations step where we loop through each event hub message and, within those events, loop through each of the records we previously parsed out and send them to the Log Analytics Workspace using the send data operation. This last step will require a connection to the workspace, which can be found under the “Agents Management” section of the Log Analytics Workspace blade. You will input the workspace ID and one of the two keys – either is fine.

Send logs to log analytics

Once setup is complete, we’re ready to save our Logic App and start sending data to the workspace. After a few minutes, you should be able to query the custom log table you configured in the Logic App.

Log Analytics workspace logs query

Conclusion

To conclude, there are ways to get data from Azure Commercial to a Log Analytics Workspace within Azure Government. With the help of Event Hub and Logic Apps, we could send data from one tenant to the other and work around the Azure Lighthouse limitations. Hopefully, you will find this helpful when implementing your solution.

Recently, I was involved in a project with the following requirements: a report needed to be built based on data from several sources, including Azure Cosmos DB and Azure SQL; the report had to be stored in Azure; and the data was subject to a retention policy under which it could not be deleted. For this use case, we decided to use Azure Data Factory to read the data, transform it, and load it into a CSV file within a blob container in an Azure Storage Account. Then, to prevent the deletion of data, we would create a retention policy on the blob container to enable immutable storage, giving us write-once, read-many capabilities.

Azure Cosmos Data Flow

We found that we needed a Data Flow within Azure Data Factory to perform logic such as joining across our data sources. We created an output sink to a Delimited Text output within an Azure Blob Storage Linked Service within our data flow.

Immutable storage policies, divided into retention policies and legal holds, can be enabled on a storage account to enforce write-once, read-many policies, which allow new documents to be added to the storage account but prevent the modification and deletion of existing documents. In the case of a retention policy, individual blobs cannot be modified or deleted until the retention period has passed for a given blob. In the case of legal holds, no blobs can be modified or deleted until the legal hold is manually removed from the blob container. Legal holds can be used to fulfill various legal requirements, including keeping data under active investigation from being modified. You can read more about immutable storage in the official Microsoft documentation for Immutable storage for blobs.

Our project had legal requirements for retaining these created documents, so we found immutable storage policies to be a good fit. When we enabled Immutable Storage, either through a retention policy or a legal hold on the Blob Container, we encountered one of two errors:

Job failed due to reason: Spark job failed in one of the cluster nodes while writing data in one partition to sink, with following error message: This operation is not permitted as the blob is immutable due to a policy.

OR

Job failed due to reason: Spark job failed in one of the cluster nodes while writing data in one partition to sink, with the following error message: This operation is not permitted as the blob is immutable due to one or more legal holds.

Here, we will examine why these errors occur and how we can resolve them.

Setup

To simplify our example, I will create a simple Azure Data Factory pipeline that uses a data flow to read from an Azure SQL server, performs a Select operation on that data, and then outputs a CSV file into an Azure Storage blob container.

Data Flow pipeline

I created an Azure SQL server to hold our source data. This server has a database named testdb, and a table named MOCK_DATA with a few columns of varying data types. I then inserted some mock data into the table.


I then created an Azure Storage Account to hold our final data, with a blob container named “output.” This is where I will store the final CSV files output by the Azure Data Factory.

Azure Storage Account

To make these blobs immutable, I added a legal hold to the output container:

Make blobs immutable

I then created a new Azure Data Factory:

Create and Open Azure Data Factory

Opening Azure Data Factory Studio, I created the following items. First, a new Linked Service associated with my Azure SQL Server:

Create New Linked Service associated with Azure SQL Server

A second Linked Service, associated with Azure Blob Storage:

Associate Linked Service with Azure Blob Storage

An Azure SQL Database Dataset, with the linked service associated to my SQL Database and the table source associated with my mock data table:

Azure SQL Dataset

An Azure Blob Storage Delimited Text Dataset, associated with my Blob Storage Linked Service, and with two parameters, Container and Blob, that are associated with the File path:

Azure Blob Storage Delimited Text Dataset

A Dataflow, which has the following configuration:

  • A parameter named filename
  • A Source associated with the mock data table dataset, with all default options.
  • A Select activity to simulate processing the data to select specific columns
  • A Sink activity with the following settings:
    • Dataset: The delimited text dataset
    • Settings tab:
      • File name option: Output to a single file
      • File name: $filename
      • Optimize: Single partition

Data Flow Activity and Configuration

A Pipeline with a Data Flow activity, associated with the Data Flow that was just built, with the following parameters:

  • Sink parameters:
    • Container: output
    • Blob: A dynamic content value set to an empty string ('')
  • Other parameters:
    • filename: Data flow expression, “file1.csv”

Main Data Flow Parameters

When I run this pipeline, I get the following error:

Job failed due to reason: Spark job failed in one of the cluster nodes while writing data in one partition to sink, with the following error message: This operation is not permitted as the blob is immutable due to one or more legal holds.

Solution

Why do we get an error when running this pipeline? When the Spark data cluster associated with the Data Flow activity runs, it creates several temporary files within the Blob Container. It then attempts to append to these files and delete them after processing.

Error when running pipeline

This violates the write-once, read-many restrictions on the storage blob container, so the Data Factory job fails.

To resolve this issue, I will need to create a temporary location for these files to be built and then copy that data from one storage account to another. First, I will create a new blob storage container with no retention restrictions, which I will call “temp”:

Temporary Blob Storage Container

Then, on the storage account, to clean up the temporary data, I will enable lifecycle management. I created a new lifecycle management rule on the storage account to delete old files by providing the following settings (an equivalent Azure CLI sketch follows the screenshots below):

  • Rule scope: Limit blobs with filters
  • Blob type: Block blobs
  • Blob subtype: Base blobs
  • Base blobs rules:
    • Add a rule: If base blobs haven’t been modified in 1 day, then delete the blobs
  • Filter set:
    • Blob prefix: temp/*.csv

Base Blobs Subtype

Create the Filter Set
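
Here is that CLI sketch: a rough Azure CLI equivalent of the rule above, assuming a policy.json file that deletes base block blobs under the temp/ prefix one day after modification. The account and resource group names are placeholders, and note that the lifecycle prefix filter matches on a prefix such as temp/ rather than a wildcard pattern.

# policy.json -- deletes base block blobs under the temp/ prefix after 1 day:
# {
#   "rules": [{
#     "enabled": true,
#     "name": "delete-temp-files",
#     "type": "Lifecycle",
#     "definition": {
#       "actions": { "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 1 } } },
#       "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "temp/" ] }
#     }
#   }]
# }
az storage account management-policy create --account-name mystorageaccount --resource-group my-rg --policy @policy.json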

Next, back in Azure Data Factory, in my pipeline, I changed the dataflow sink parameters to use the temp blob container rather than the output blob container:

Change parameters in dataflow pipeline

The Copy Data activity does not work the same way as a Data Flow sink and does not create temporary files in its process. I created a new Copy Data activity in the pipeline and connected the data flow activity to the copy data activity:

Create New Data Activity in Pipeline

For the Source of the Copy Data activity, I selected the Delimited Text dataset and provided the following settings:

  • Container: temp
  • Blob: file1.csv

For the Sink of the Copy Data activity, I selected the same Delimited Text dataset and provided the following settings:

  • Container: output
  • Blob: file1.csv

Rerun the pipeline. The pipeline should run successfully, and your output file will exist in both the temp and output blob containers. After our lifecycle management policies find that the temporary blob is over a day old, the temporary blob will be deleted, while the output blob will remain. The retention policy will prevent the output blob from being deleted.

Run Pipeline with output and temp blob containers

Summary

Because of the way the Data Flow activity in Azure Data Factory uses temporary files to build delimited text outputs, using a delimited text output as a data sink is not possible on an Azure Storage blob container with an immutable storage policy. To work around this restriction, a temporary storage destination must serve as the output for the data flow, and its contents are then copied to the final, immutable destination with a Copy Data activity. The Copy Data activity is not processed the same way as the Data Flow sink and does not create temporary files, so it is suitable for copying data from the temporary container to the output container. The temporary container is then cleaned up regularly by a lifecycle management policy on the storage account.

Most software applications, whether desktop, mobile, or web, require a database at the backend to store data. Current software applications are highly complex and have a high frequency of data transactions, so we need to test the data stored in and retrieved from the database to make sure it maintains proper data integrity. Any database operation performed by the application is always one of four: Create, Retrieve, Update, or Delete.

We were required to do database unit testing in one of our recent client application implementations, as the application is mainly data-centric. The decisive reason to push for database unit testing is to ensure the application has data integrity. We were required to develop database test cases that include checks for data integrity and confirm business rules.

For database unit testing, we have the following frameworks that can be used:

  • Microsoft SQL Server Data Tools (Using Visual Studio)
  • tSQLt Unit Testing Framework

This blog post will give a high-level overview of implementing database unit testing and demonstrate how to create test cases that use tSQLt features against the AdventureWorks sample database. We chose the tSQLt Database Unit Testing framework for Azure SQL Database to implement database unit testing. tSQLt allows us to create isolated test cases defined with the data we need; each test case runs in its own transaction.

A tSQLt unit test case is based on the AAA pattern, which consists of the Arrange, Act, and Assert steps:

Rules for Database Testing

Step One: tSQLt Environment Setup

The first step is to install tSQLt in your database.

  • Setup includes a set of objects (tables, stored procedures, functions, and more) that you add to the database you want to test. Download “tSQLt_V1.0.5873.27393.zip” from the tSQLt site, and unzip the file.
  • Run the “tSQLt.class.sql” script against the database. You can find the script in the zip file that you downloaded from the tSQLt site.
    Once the run is successful, you can find the tSQLt schema. Assigned to that schema will be tables, views, stored procedures, and user-defined functions. These are the components that do all the processing when creating and running test cases against your database. As a part of best practices, we will not be moving any objects created by “tSQLt.class.sql” and unit test cases outside the development database.

Object Explorer

Before you create a test case, you need to create a test class where the test case will be located. A test class is a schema configured with an extended property that tells tSQLt that it is a test class. To create the test class, you use the NewTestClass stored procedure, which is part of the tSQLt schema.

NewTestClass Stored Procedure

The EXEC statement creates a test class named TestPerson. Once we’ve completed this step, we can add one or more test cases to the test class, so let’s get started doing that.
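
For reference, the statement shown in the screenshot boils down to a single EXEC call; here it is wrapped in sqlcmd, with the server and database names as placeholders (add your own authentication options):

sqlcmd -S myserver.database.windows.net -d AdventureWorks -Q "EXEC tSQLt.NewTestClass 'TestPerson';"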

Step Two: Testing [Person].[GetAddressByCity] Stored Procedure

In tSQLt, a test case is a stored procedure that’s part of a test class and uses tSQLt elements to perform the testing. We can develop and test stored procedures and functions in a database.

As part of developing the tSQLt test case, we’ll create a test case for a stored procedure [Person].[GetAddressByCity] which will return address details for a given city.

Creating a Test Procedure by City

Use the CREATE PROCEDURE statement to create a test case. The procedure name must start with the word “test” and be created in an existing test class; otherwise, creating the test case is much like creating any other procedure. The following T-SQL script creates a test case named “TestGetAddressByCitySuccess” in the “TestPerson” test class:

Follow tSQL Script to Create Test Case

As you can see, we are using the CREATE OR ALTER PROCEDURE statement to create the test case. The critical part of the test case is the main body of the procedure definition, between BEGIN and END. First, we use tSQLt’s FakeTable stored procedure, which creates a unique temporary table with the same name as the table referenced within the actual stored procedure we are testing. This ensures that the data in the actual database table is not updated or deleted. Any subsequent references we make in our test case to that table will always point to the test table and not the actual table in the database. Then we populate the table with test data.


In the test case, we use two temporary tables: #Expected, which stores the expected data, and #Actual, which stores the data once the stored procedure has successfully run. We keep the schema of both temp tables identical.

Finally, we use the tSQLt AssertEqualsTable stored procedure to compare the data in the #Actual table to the data in the #Expected table.

Run the test case using the tSQLt.Run stored procedure.
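
A sketch of that call, again via sqlcmd with placeholder connection details; tSQLt.Run accepts either a single test case or an entire test class:

sqlcmd -S myserver.database.windows.net -d AdventureWorks -Q "EXEC tSQLt.Run 'TestPerson.TestGetAddressByCitySuccess';"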

When we run the test case, it should evaluate to true and return the following results:

Final result for unit testing

Now you have the basics, which will allow you to go a long way with unit testing using tSQLt for SQL Server. You can refer to documentation by visiting the tSQLt User Guide.

While recently working on moving an FTP server (using passive FTP) from on-premises to Azure, I needed to expose the FTP server to the internet via an Azure External Load Balancer. A few things were not well documented, so I thought it would be good to put together a step-by-step guide on setting up the load balancer, configuring the FTP server on the backend, and setting the Network Security Group rules associated with the FTP server. In this example, the FTP server runs on IIS, and it is assumed that there is no Azure Firewall\NVA associated with the VNET hosting the FTP server. If your environment does have an Azure Firewall\NVA, you will need an additional Azure resource: an Azure route table with a 0.0.0.0/0 route out to the internet, associated with the subnet where the FTP server is hosted. This is required to route the outbound traffic back out to the internet; if this is not done, the outbound FTP traffic will die on the Azure Firewall\NVA.

Windows IIS FTP Server Configuration

I won’t be going into the details of the FTP server configuration here. I will just be going over the FTP firewall settings and setting the data channel port range (passive FTP ports). Below is how the FTP server is configured to support the external load balancer. On the IIS server settings, FTP Firewall Support is where you define the data channel port range.

FTP Firewall Support

I have defined ports 5000-5002.

Define Ports

At the site level is where you configure the public IP address of the external load balancer.

FTP Test

As you can see in the screenshot below, the public IP of the external load balancer is configured.

FTP Firewall Support

Azure External Load Balancer

Configuring an Azure External Load Balancer for FTP is straightforward. One thing to note with passive FTP is that all the data channel ports for FTP must be defined in the load balancing rules. If you define ports 5000-5100, there will be a load balancing rule for each data channel port: over 100 rules in total. I recommend keeping the number of passive ports to a minimum since Azure External Load Balancers do not support a port range.

Requirements:

  • Azure Public IP – this will be configured as the load balancer’s front-end IP
  • Azure Load Balancer
  • FTP Port requirements for inbound traffic and public IP address(es) of the client(s) that will be accessing the FTP server

Deploy the Load Balancer

Search for Azure Load Balancer in the search bar in the Azure Portal

Search for Azure Load Balancer
Select Create

Creating a Load Balancer in Azure

Define the following parameters then select Next: Frontend IP configuration

  • Resource Group
  • Name
  • Region
  • Leave SKU: Standard
  • Leave Type: Public
  • Tier: Region

Project Details Load Balancing

Select Add a frontend IP Configuration, define the following parameters

  • Frontend IP Name
  • IP version: leave IPv4
  • IP type: leave IP address
  • Public IP address
  • Gateway Load balancer: leave None

Select Add

Name Load Balancer ad Public IP Address

Select Review + create – we will go over the configuration of the remaining items of the Load Balancer after it has been deployed.

Add Backend Pool

Configuration of the Azure External Load Balancer

I will now go over the configuration of the Azure Load Balancer. This will detail how the Backend pool, Health probes, Load balancing rules, and Outbound rules are configured.

Configuration in the Azure External Load Balancer

Frontend IP Configuration

As you see in the screenshot below, the frontend IP is defined by the rules associated with the front end. After additional load balancing rules are added I will review the frontend configuration again.

Frontend IP configuration

Backend Pools

The backend pool configuration is how you associate a virtual machine with the load balancer. In the screenshot below, I use the following configuration

  • Virtual network – the VNET that the target virtual machine is associated with
  • Backend Pool Configuration: NIC
  • IP Version: IPv4
  • Add the virtual machine by selecting the +Add button
  • Select Save

Add virtual machines to backend pools

Add Virtual Network

Health Probes

I have created an FTP Health probe for port 21. I will also be using this health probe for my FTP data channel ports. You can make a health probe for each data channel port.

  • Name: FTP
  • Protocol: TCP
  • Port: 21
  • Interval: 5
  • Unhealthy threshold:2

Use Health Probe for FTP Data

Load Balancing Rules

I have the port 21 load balancing rule already configured but need to add the FTP data channel ports that I have defined in FTP Firewall (5000-5002).

Define the following parameters (an equivalent Azure CLI sketch follows this list):

  • Name: I like to give it the name of the port number, in this example, I will use 5000
  • IP Version: IPv4
  • Frontend IP address: Select your Frontend IP Config – FEIP
  • Backend Pool: Select your backend pool – BEP-01
  • Protocol: TCP
  • Port: 5000
  • Backend Port: 5000
  • Health Probe: ftp (TCP:21)
  • Session persistence: None
  • Idle timeout (minutes): 4
  • TCP reset: Disabled
  • Floating IP: Disabled
  • Outbound source network address translation (SNAT): (Recommended) Use outbound rules to provide backend pool members access to the internet
  • Select Add
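
Here is the Azure CLI sketch referenced above for one of the data channel rules; the resource group and load balancer names are placeholders, while FEIP, BEP-01, and the ftp probe match the names used in this walkthrough. You would repeat this for port 21 and for each remaining data channel port (5001 and 5002 in this example).

az network lb rule create --resource-group my-rg --lb-name my-ftp-lb --name 5000 --protocol Tcp --frontend-port 5000 --backend-port 5000 --frontend-ip-name FEIP --backend-pool-name BEP-01 --probe-name ftp --idle-timeout 4 --disable-outbound-snat true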

Here is the full set of Load Balancing rules:

Load Balancing Rules

How to Add a Load Balancing Rule

Outbound Rules

There is a requirement to create an outbound rule for the back-end resource to be able to reach out to the internet.

Create an outbound rule for back-end resource

Network Security Group Rules

The final step in configuring this solution is configuring the inbound rules for the NSG. The rule should allow TCP port 21 and the data channel ports defined in your FTP firewall (in my case, 5000-5002).

External IP client accessing the FTP server

Conclusion

Passive FTP via an Azure Load Balancer combined with a Network Security Group is a great solution to expose your FTP server on the internet securely. Understanding the total possible number of simultaneous connections will help determine the minimum number of passive data channel ports you need to configure on the Azure External Load Balancer.

This blog will explain how a short-staffed team overcame security issues in a critical legacy application by combining modern authentication with legacy MVC frameworks. This allowed the team to upgrade the application to meet enterprise security standards without disrupting the users.

Background

I managed an in-house application, ERMS, developed a decade ago with MVC and forms authentication. The forms authentication used custom user management with a SQL Server database. A skilled developer created a productive application that served the needs of HR, managers, and employees. ERMS was developed for AIS in-house use and had a lower priority than customer engagements. Over the years, it has been stable, requiring few changes. When changes were required, we had to juggle resources to accomplish them. Figure 1 shows the application architecture before the change.

Upgrading Legacy MVC Forms Authentication to Azure AD

Challenge

The application was developed before current standard security practices were in place, and over time it became non-compliant. ERMS needed to be upgraded from legacy forms authentication to Azure AD authentication. This required sweeping changes to the way users logged in to the application, which would be a significant undertaking. The solution was not technically challenging, but it had to be done with minimal downtime and minimal resources. In addition, ERMS uses custom roles that do not map to Active Directory roles.

Solution

We considered several ways to solve this problem, as outlined below.

Upgrade Authentication and Authorization

The first option was to remove forms authentication and custom role management and use Active Directory, as shown in Figure 2. Equivalent AD roles would have to be created, and the code at various layers would need to be updated to refer to the corresponding AD roles. This was not a viable option, as it was risky and involved many changes.

Removing forms authentication


Upgrade Authentication and Use Legacy Authorization

Figure 3 shows another approach we explored: retain the existing role management and use Azure AD for authentication. This was a sensible fallback in the given context, but it did not work because the User Principal in the HTTP request context was not always set, which would cause authentication to break.

Retain the existing role management

We learned two points from the failed trials. First, we should only attempt to upgrade the authentication but not touch the custom role management. Second, it was a resource-heavy effort to integrate Active Directory with custom role management in ERMS.

Using a Connector for Authentication

The solution that worked was to have a lightweight authenticator app that the ERMS application consumes to validate the users, as shown in the high-level flow in Figure 5. This authenticator service app would validate against Azure AD, and role management would stay the same. Figure 4 shows the solution overview.

Complete Solution Overview

High Level Flow

The Right Approach

An independent authentication connector service and maintaining the existing role management are the key to the solution. This is a dependable approach if you are looking for a quick turnaround using minimal coding or resources.

Challenges with Public Cloud

One of the oldest problems facing cloud infrastructure services has been access control. Ensuring that resources can be accessed by users and services that need to access them and not by anything else has been problematic when those services are delivered through the public internet. Even as late as last year, cybersecurity data firm BinaryEdge.io listed over 35,000 publicly accessible and unsecured databases, most of which were cloud-hosted. That’s over 35,000 data breaches that have likely already happened because of misconfigured cloud resources. In short, the problem with the public cloud is that most resources shouldn’t be public.

Service Endpoints to the Rescue

Azure’s first step in service access control was the service endpoint. A service endpoint allows virtual networks to use private IP addresses to route traffic to an Azure service’s public IP address. But more importantly, it also allows you to deny traffic over the service endpoint unless it comes from its subnet. This effectively denies access to all traffic coming in from the public internet, making your Azure service accessible only to other Azure services and to selectively whitelisted IP addresses.

This approach has some limitations, potentially the biggest being that your service still has a public endpoint. It has a firewall in front of it, but if you only want other Azure services to connect to your private database, why should that database server allow traffic over its public IP address at all? Availability is another issue: some Azure services, such as Azure Kubernetes Service, don’t support service endpoints. The network traffic pattern is also inefficient, since Azure services using service endpoints need to interact through public IP addresses. Finally, you have the issue of scope. Service endpoints are scoped to the subnet, meaning any resource in that subnet has access to the service. This means either a high degree of granularity in subnets, an administrative burden, or simply ignoring the access issue and hoping nothing happens. And while storage accounts have a third option, service endpoint policies, they only apply to storage accounts.


Private Endpoints: The Next Step

It’s not that service endpoints were a bad idea or were poorly implemented. Rather, the needs and usage of the public cloud have evolved rapidly. In response, Microsoft introduced another way to control connections to a service. Private endpoints allow services to be accessible through their private IP addresses via Private Link, making connections between private IP addresses possible. And that doesn’t just mean private IP addresses in Azure. With the correct Domain Name System (DNS) configuration, you can connect to your Azure instance from your on-premises network without leaving your firewall or using the public internet.

Unlike a service endpoint, a private endpoint has no public IP address of its own. An Azure service can still have both a private endpoint and a public endpoint, and creating a private endpoint does not automatically deny traffic over the public endpoint. However, if you take the extra step of denying all publicly routed traffic, you have isolated your Azure service far more effectively than you could with a service endpoint.

We can take this private connectivity a step further. It’s possible to integrate a private endpoint with a private DNS zone in Azure, resulting in private endpoints that accept traffic from on-premises networks without routing through the internet. It’s as if the Azure service were part of the on-premises network.

Finally, private endpoints solve our scoping and availability problems. Private endpoints are available on more services than service endpoints; for a full list, reference the Microsoft documentation on Private Endpoint Availability and Service Endpoint Availability. Private endpoints can also be scoped to a specific resource, such as a storage account, or even a sub-resource, such as the blob or table service of that storage account. And because Azure Active Directory can govern access to the private endpoint, this offers very granular Role-Based Access Control (RBAC) for Azure resources.
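As a rough sketch, creating a private endpoint scoped to the blob sub-resource of a storage account with Az PowerShell might look like the following. All resource names and the region are hypothetical placeholders.

# Look up the storage account and the subnet that will host the private endpoint
$storage = Get-AzStorageAccount -ResourceGroupName "demo-rg" -Name "demostorage"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "demo-rg" -Name "demo-vnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "apps"

# Scope the connection to the blob sub-resource of the storage account
$plsConnection = New-AzPrivateLinkServiceConnection -Name "demostorage-plsc" `
-PrivateLinkServiceId $storage.Id -GroupId "blob"

# Create the private endpoint; it receives a private IP address from the subnet
New-AzPrivateEndpoint -ResourceGroupName "demo-rg" -Name "demostorage-pe" `
-Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $plsConnection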

Considerations

Private endpoints aren’t all sunshine and roses, however. They do come with some significant downsides to consider. First, they aren’t free: private endpoints are charged by usage, while service endpoints are free. Second, private endpoints require more setup than service endpoints because you need to set up and configure DNS. Azure services have Fully Qualified Domain Names (FQDNs) that resolve to public IP addresses, so you must configure DNS so that the service’s FQDN resolves to the private IP address of its private endpoint. In the end, it takes more effort to use private endpoints, but you get more when using them.
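For example, the DNS configuration for a blob storage private endpoint could be sketched with Az PowerShell roughly as follows, using the privatelink zone so the service’s FQDN resolves to the endpoint’s private IP. The names continue the hypothetical example above.

# Create the privatelink DNS zone for blob storage and link it to the VNet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "demo-rg" -Name "demo-vnet"
$zone = New-AzPrivateDnsZone -ResourceGroupName "demo-rg" -Name "privatelink.blob.core.windows.net"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "demo-rg" -ZoneName "privatelink.blob.core.windows.net" `
-Name "demo-vnet-link" -VirtualNetworkId $vnet.Id | Out-Null

# Attach the zone to the private endpoint so its A record is managed automatically
$zoneConfig = New-AzPrivateDnsZoneConfig -Name "privatelink.blob.core.windows.net" -PrivateDnsZoneId $zone.ResourceId
New-AzPrivateDnsZoneGroup -ResourceGroupName "demo-rg" -PrivateEndpointName "demostorage-pe" `
-Name "default" -PrivateDnsZoneConfig $zoneConfig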

Putting It All Together

Private Endpoints involve a fair number of steps and a good bit of supporting infrastructure. Because of that, a practical implementation might help visualize how everything fits together. Fortunately, Microsoft has provided a thorough set of tutorials and quickstarts to help you get your hands dirty. If you try these, and I encourage you to do so, please remember to keep an eye on how services are charged and tear down your work so that you don’t receive a surprise bill.

Conclusion

Private Endpoints are an evolution of Azure infrastructure. The upfront configuration effort and ongoing usage billing mean you should carefully consider whether your organization needs them. But if you need to block all internet traffic to a service while keeping it available to on-premises traffic, or if you need to secure access to specific sub-resources from your virtual network, Azure now offers that capability through private endpoints.

Point-to-Site Virtual Private Network (VPN) connections are helpful when you want to connect to your VNet from a remote location. They let you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. This blog outlines the steps to create and test a Point-to-Site VPN using the Azure certificate authentication method.

Create a VNet

Sign in to the Azure portal.
In Search, type Virtual Network.

Creating virtual network

Select Virtual Network from the Marketplace results.

Virtual Network

Once you select Create, the Create virtual network page will open.
On the Basics tab, configure Project details and Instance details VNet settings.

Virtual Network Creation

SLVNET
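If you prefer PowerShell to the portal, a minimal sketch of the same VNet might look like this. The resource group, region, and address space are placeholder values, and the resource group is assumed to already exist.

# Define a subnet and create the virtual network
$subnet = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.1.0.0/24"
New-AzVirtualNetwork -Name "SLVNET" -ResourceGroupName "demo-rg" `
-Location "eastus" -AddressPrefix "10.1.0.0/16" -Subnet $subnet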

Create the VPN Gateway

A VPN gateway is a specific type of virtual network gateway used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. Each virtual network can have only one VPN gateway. The virtual network gateway uses a specific subnet called the gateway subnet. The gateway subnet is part of the virtual network IP address range you specify when configuring your virtual network. It contains the IP addresses that the virtual network gateway resources and services use.

Virtual network gateway

On the Basics tab, fill in the values for Project details and Instance details.

Completing VPN basics

Public IP Address input

Note: Deployment of the virtual network gateway may take up to 45 minutes.

VirtualNGateway
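The same gateway can be scripted with Az PowerShell. The sketch below assumes the VNet created earlier and uses placeholder names, a placeholder gateway subnet range, and the VpnGw1 SKU; adjust these to your environment.

# Add the required GatewaySubnet to the VNet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "demo-rg" -Name "SLVNET"
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.1.255.0/27" -VirtualNetwork $vnet | Out-Null
$vnet = $vnet | Set-AzVirtualNetwork

# Request a public IP and build the gateway IP configuration
$pip = New-AzPublicIpAddress -Name "SLVNET-gw-pip" -ResourceGroupName "demo-rg" `
-Location "eastus" -AllocationMethod Static -Sku Standard
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "GatewaySubnet"
$ipConfig = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id

# Create the VPN gateway (this can take up to 45 minutes)
New-AzVirtualNetworkGateway -Name "SLVNET-gw" -ResourceGroupName "demo-rg" -Location "eastus" `
-IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1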


Generating Certificates

Azure uses certificates to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you upload its public key information to Azure. The root certificate is then considered ‘trusted’ by Azure for Point-to-Site connections to the virtual network. You also generate client certificates from the trusted root certificate and install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.

Generate a Root Certificate

Use either a root certificate generated with an enterprise solution (recommended) or generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the private key) as a Base64-encoded X.509 .cer file. Then, upload the public certificate data to Azure.

Open PowerShell as an Administrator and run the following script.

$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=SLP2SRootCert" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

Admin Windows PowerShell

This will create a root cert and install it under the current user cert store.

Generating Client Certificates from Root Certificate

Open PowerShell as an Administrator and run the following command:

Get-ChildItem -Path "Cert:\CurrentUser\My"

This should provide a thumbprint:

PowerShell thumbprint

Next, run the following command. The thumbprint should match your certificate.

$cert = Get-ChildItem -Path "Cert:\CurrentUser\My\B1C79D177D465E76FF74243F7553EA4837FD137B"

Thumbprint to match certificate

Finally, you’ll need to run this to generate your client certificate.

New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=SLP2SClientCert" -KeyExportPolicy Exportable -NotAfter (Get-Date).AddYears(1) `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" `
-Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

Run and generate certificate

We now have the certificates in place, but we need to export the root certificate so it can be uploaded to Azure.
First, export the root certificate's public key (.cer).

Hit the Windows Key + “R”, to bring up the Run dialog box and type in “certmgr.msc”. When the management console opens, you should see your newly created certificate in “Current User\Personal\Certificates”. Right-click on your newly created cert and go to All Tasks > Export.

Export certificates

In the Wizard, click Next.

Export Wizard

Select No, do not export the private key, and then click Next.

Do not export private key

On the Export File Format page, select Base-64 encoded X.509 (.CER), and then click Next.

Pick file format

For File to Export, browse to the location to which you want to export the certificate, specify your file name, and then click Next.

Name File to export

Click Finish to export the certificate. Your certificate is successfully exported!
The exported certificate looks similar to this:

Exported Certificate

If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to Azure. If you open your certificate with Notepad and it does not look similar to this, typically, this means you did not export it using the Base-64 encoded X.509(.CER) format. Additionally, if you want to use a different text editor, some editors can introduce unintended formatting in the background. This can create problems when uploading the text from this certificate to Azure.

Open Certificate in notepad
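
If you would rather skip Notepad, you can also pull the Base64 public certificate data straight from the certificate store with PowerShell. This sketch assumes the root certificate subject used earlier.

# Find the root certificate created earlier and emit its public data as Base64,
# which is the same text Azure expects as the public certificate data
$root = Get-ChildItem -Path "Cert:\CurrentUser\My" |
Where-Object { $_.Subject -eq "CN=SLP2SRootCert" } | Select-Object -First 1
[System.Convert]::ToBase64String($root.RawData)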

Configure Point to Site Connection

  • The next step of this configuration is to configure the point-to-site connection. Here we will also define the client IP address pool for VPN clients.
  • Click on the newly created VPN gateway connection.
  • Then, in the new window, click on Point-to-site configuration.
  • Click on Configure Now.
  • In the new window, type the IP address range for the VPN address pool. We will be using 20.20.20.0/24. For tunnel type, use both SSTP and IKEv2. Linux and other mobile clients connect over IKEv2 by default, while Windows tries IKEv2 first and then falls back to SSTP. For authentication type, use Azure Certificates.
  • In the same window, there is a place to define a root certificate. Under root certificate name, type the certificate name, and under public certificate data, paste the root certificate data (you can open the exported cert in Notepad to get the data).
  • Then click on Save to complete the process. A PowerShell sketch of the same configuration follows the screenshot below.
  • Note: when you paste the certificate data, do not copy the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.

Point to Site configuration
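
For reference, here is a rough Az PowerShell equivalent of these point-to-site settings. The gateway and resource group names are hypothetical placeholders, and it assumes the root certificate created earlier is still in the current user's certificate store.

# Set the VPN client address pool and tunnel protocols on the gateway
$gateway = Get-AzVirtualNetworkGateway -ResourceGroupName "demo-rg" -Name "SLVNET-gw"
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gateway `
-VpnClientAddressPool "20.20.20.0/24" -VpnClientProtocol @("SSTP", "IkeV2") | Out-Null

# Upload the root certificate's public data (the Base64 text without the BEGIN/END lines)
$root = Get-ChildItem -Path "Cert:\CurrentUser\My" |
Where-Object { $_.Subject -eq "CN=SLP2SRootCert" } | Select-Object -First 1
Add-AzVpnClientRootCertificate -ResourceGroupName "demo-rg" -VirtualNetworkGatewayName "SLVNET-gw" `
-VpnClientRootCertificateName "SLP2SRootCert" -PublicCertData ([System.Convert]::ToBase64String($root.RawData))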

Testing VPN Connection

Log in to the Azure portal from the client machine and go to the VPN gateway configuration page.
Click on Point-to-site configuration.
Next, click on Download VPN client.

Download VPN client

We can now see a new connection under the Windows 10 VPN settings page.

New VPN connection

Click Connect on the new VPN entry. A new window will open; click Connect.

Connect new VPN

Adding a VPN Connection

Run ipconfig to verify that an IP address was allocated from the VPN address pool.

Run ipconfig to verify IP location

Congratulations! You’ve successfully configured a Point-to-Site VPN connection using Azure certificate authentication.

AIS has been working with Azure since 2008. Interested in learning more? Reach out to AIS today.