Symantec’s Veritas Volume Manager (VxVM) is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. This blog will outline and dive deeper into the components of Veritas Volume Manager.

Why VxVM?

Linux comes with a volume manager called “LVM” (Logical Volume Manager), which is installed with the operating system by default.
Logical volumes are an alternate method of partitioning hard drive space; the capability has been built into the Linux kernel since 1999.

Linux and Partitions

Windows and OS X assume that the hard drive is a single monolithic partition. Linux assumes that a hard drive will be partitioned as part of the basic operating system installation, with specific partitions called “/var,” “/usr,” “/tmp,” “/home” and “/boot.” The “/boot” partition holds the kernel and boot loader files, while the other partitions hold applications, spool files and error logs, temporary data, and user data. While Linux can run on a single partition, additional partitions improve system performance.
In the following sections, I will point out the features of the LVM.

Dynamic Volume Resizing

Logical volumes allow you to leave unpartitioned space on the hard disk and add it to a specific partition as needed without having to back up your data and reformat the hard drive. This allocation of unpartitioned space can be done dynamically through both the command line and graphical user interfaces without rebooting the computer.
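
For example, growing a mounted logical volume looks roughly like this (the volume group and logical volume names vg_data and lv_data are hypothetical):

# Check free space in the volume group
vgs vg_data
# Grow the logical volume by 5 GB and resize its file system in the same step
lvextend -r -L +5G /dev/vg_data/lv_data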

Spanning Volumes on Multiple Disks

When you use logical volumes, you can assign multiple physical disks to the same logical volume, which means that all those disks are seen as one partition from the user’s perspective.
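
As an illustration (device and volume group names are hypothetical), adding a second disk to an existing volume group so the same logical volume can span both disks might look like this:

# Label the new disk for LVM and add it to the existing volume group
pvcreate /dev/sdc
vgextend vg_data /dev/sdc
# Logical volumes in vg_data can now grow onto the new disk (e.g., with lvextend)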

Shrinking Volume Sizes

While logical volumes are great for adding unpartitioned disk space to a specific volume, the reverse is not true. Shrinking a logical volume to reallocate its disk space somewhere else is risky and can result in data loss. You should back up your data and migrate to a new, larger disk rather than attempt to shrink a volume.

Disaster Recovery

While spanning a logical volume across several disks is one of the “killer features” on LVM, the loss of a single disk in a logical volume can render the entire volume unusable. Therefore, if you’re going to use LVM, make extensive and regular backups of the entire volume.


Drawbacks of LVM

The main disadvantage of LVM is that it adds another layer to the storage system. While the overhead of LVM is usually small, any decrease in performance can be critical on busy systems. Many users have reported significant performance issues when creating snapshots, limiting their production system use.

Veritas Volume Terminology

LVM and VxVM

So, to avoid the drawbacks above, we go for VxVM. VxVM allows a system administrator to configure various volume layouts, providing redundancy and higher performance than LVM. Several Volume Manager objects must be understood before you can use the Volume Manager to perform disk management tasks:

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

  • Physical objects — A physical disk or LUN presented from storage: physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
  • Virtual objects — When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
  • VM disks — A VM disk is a contiguous area of disk space from which the Volume Manager allocates storage. It’s nothing but a public region of the disk.
  • Disk groups — A disk group is a collection of VM disks that share a common configuration.
  • Subdisks — A subdisk is a set of contiguous disk blocks. A VM disk can be divided into one or more subdisks.
  • Plexes — The Volume Manager uses subdisks to build virtual entities called plexes. A plex consists of one or more subdisks located on one or more disks.
  • Volumes — A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk partition. Still, it does not have the physical limitations of a physical disk partition. Each volume records and retrieves data from one or more physical disks. Volumes are accessed by file systems, databases, or other applications in the same way physical disks are accessed. Volumes are also composed of other virtual objects (plexes and subdisks) used to change the volume configuration. Volumes and their virtual components are called virtual objects or VxVM objects. There are many other features you can easily add to VxVM: database snapshotting, dirty region logging for fast resync, remote-site replication for DR, clustering, and more.
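
To make the object hierarchy concrete, the sketch below builds a small volume object by object with vxmake and then lists the result. The disk group and object names (testdg, testdg01, vol1) are hypothetical, the length is in sectors, and the exact syntax can vary by VxVM release; the top-down vxassist command used later in this post is the more common approach.

# Subdisk carved from VM disk testdg01, a plex built from that subdisk,
# and a volume built from that plex (bottom-up)
vxmake -g testdg sd testdg01-01 disk=testdg01 offset=0 len=204800
vxmake -g testdg plex vol1-01 sd=testdg01-01
vxmake -g testdg -U fsgen vol vol1 plex=vol1-01
vxvol -g testdg start vol1

# Display the disk group's disks, subdisks, plexes, and volumes as one hierarchy
vxprint -g testdg -ht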

Installation Process on a Linux Server

1. Download the .sh file: https://sort.veritas.com/data_collectors/download

2. Extract the installer package and run the installer:

Extract Installer Package

NOTE: If you get an error such as “Cannot find perl to execute”, move the “bin” and “lib” folders from perl/RHEL6x8664/ up to the perl directory. The installer looks for the Perl files directly under the perl folder, but they are placed under perl/RHEL6x8664.

Move bin lib folder

3. Install the Veritas Volume Manager on Linux. You will see the prompt with many options, as shown in the image.

Install a Product

i) Press “I” and “Enter” to select “Install a Product”

Symantec SF/HA

ii) Press “Y” to accept the Terms and Conditions

Basic Foundation 6.2

iii) Press “2” to install the recommended RPMs

Enter the system name (the hostname). Once you enter the system name, the installer will run its checks and show the status in the last column. Sometimes it may report a failure because of missing RPMs, in which case it provides another option to install them via the YUM utility.

NOTE: If you have a YUM server, select “1” to install the missing required RPMs with yum. You may still get errors if RPMs are missing and have failed to install with yum.

Try the solution below. Sometimes the install fails because of repository keys, and the example below may help correct it.

Again, start the installer and check. If that solution didn’t help, try to download the RPMs manually and rerun the installer. Once complete, press “Enter” to continue.

Download RPMS

After a few minutes, you should see the message that the Symantec Storage Foundation Basic installation is complete, as shown in the image below.

Create Volumes and File Systems on RHEL7

1. Identify the correct disks using vxdisk – Ensure that the disk is detected and not mounted, using the fdisk and df -h commands. Then, execute the command below to list the disks available to Veritas Volume Manager.

Identify the correct disks

If you see the status as “online invalid,” it indicates these disks have not yet been added to Veritas Volume Manager. Be careful: even mounted disks will show as invalid under Veritas Volume Manager, because VxVM has not initialized them.
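
As a hedged illustration (device names and columns trimmed), the listing can look something like this before the disks are initialized:

vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sdb          auto:none       -            -            online invalid
sdc          auto:none       -            -            online invalid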

NOTE: Sometimes, the device column output may appear in “Enclosure Based Names,” for example:

Enclosure Based Names

So, change the naming convention to Operating System Based Names to help identify the correct disk.

2. Change the “Enclosure Based Names” to “Operating System Based Names”

Change Enclosure Based Names

To revert to “Enclosure Based Names,” use the below command:

Revert back to Enclosure Bases Names
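
To the best of my knowledge, the commands behind these two screenshots are the vxddladm naming-scheme switches shown below (sketch only; check your release’s documentation):

# Switch to Operating System Based Names (sdb, sdc, ...)
vxddladm set namingscheme=osn
# Revert to Enclosure Based Names
vxddladm set namingscheme=ebn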

Once we have identified the correct disks, go ahead and initialize them using the vxdisksetup command.

For example, let’s take the sdb and sdc disks.

Vxdisk setup command

A disk whose status shows as “online” has been initialized and belongs to VxVM.
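
A minimal sketch of initializing the two example disks and confirming their status (device names assumed; vxdisksetup typically lives in /etc/vx/bin):

# Initialize sdb and sdc so VxVM can use them
/etc/vx/bin/vxdisksetup -i sdb
/etc/vx/bin/vxdisksetup -i sdc
# The STATUS column should now show "online" instead of "online invalid"
vxdisk list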

3. Create a disk group and add the new disks. Disk groups are similar to volume groups in LVM, so create a disk group called “testdg” and add the identified disks to the “testdg” disk group.

Testdg Disk Group

Check the disk group properties.

Add a new Disk

Let’s assume that we have a new disk “sdd,” which needs to be added to the existing disk group “testdg”. So, we will see how to add a new disk into the existing Disk group. Initialize and add the disk as shown below.

Initialize and add the disk
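
The commands in these screenshots would look roughly like the following (the disk media names testdg01, testdg02, and testdg03 are illustrative):

# Create the disk group with the two initialized disks
vxdg init testdg testdg01=sdb testdg02=sdc
# Check the disk group and its member disks
vxdg list testdg
vxdisk -g testdg list
# Initialize the new disk sdd and add it to the existing disk group
/etc/vx/bin/vxdisksetup -i sdd
vxdg -g testdg adddisk testdg03=sdd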

4. Create a volume on the Disk group: Let’s create a volume of 100MB within the disk group.

List the volume details using vxlist command.

List volume details
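
A hedged sketch of the volume creation and listing steps (volume and disk group names from the example):

# Create a 100 MB volume named vol1 in the testdg disk group
vxassist -g testdg make vol1 100m
# List the volume (vxprint can be used if vxlist is unavailable)
vxlist volume
vxprint -g testdg -vt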

Note: If you get an error when you use the “vxlist” command, as shown below:

Use vxlist command

Start the below script:

Below Script

5. Create a File system on the volume.

Test the Volume

Here, /dev/vx/rdsk/testdg/vol1 is the raw device file for volume vol1.

6. Mount the file system using the mount command.

Mount the file system

Verify the mounted file system using the mount and df commands.
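
Putting steps 5 and 6 together, the commands look roughly like this (the mount point /data is assumed):

# Create a VxFS file system on the volume's raw device
mkfs -t vxfs /dev/vx/rdsk/testdg/vol1
# Mount the block device and verify
mkdir -p /data
mount -t vxfs /dev/vx/dsk/testdg/vol1 /data
df -h /data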

Online Resizing:

To achieve this, we can use the “vxassist” and “vxresize” commands to resize a volume, that is, increase or reduce its size. Two parameters can be used with the “vxassist” command to find the space available in a disk group and how far a volume can be grown or extended (see the command sketch after point 3 below):

  • Maxsize – Use this option to find the largest volume that can be created from the free space in a disk group.
  • Maxgrow – Use this option to find the size to which an existing volume can be grown.

1. How to find the total Disk group size in Veritas Volume Manager:

Find total disk group size

The above output shows that the mytestdg disk group was initialized with the disk “sdb,” that the disk size is 0.91 GB, and that the volume “testvol1” is 0.48 GB.

2. How to find the maximum free size available in the Disk group to extend or increase.

Find maximum free size available

Here, mytestdg is our disk group name. The above output shows that we have 441 MB of free space, which can be used to create a new volume or extend an existing volume.

3. How to find the maximum size of an existing volume

Test the size of existing volume

The current size of the volume is 0.48 GB. As we found in point 1, the volume can be increased to a maximum of 941 MB.
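
The three queries above can be reproduced with commands along these lines (disk group and volume names from the example):

# 1. Disk group contents and free space
vxprint -g mytestdg -dt
vxdg -g mytestdg free
# 2. Largest volume that could still be created from the group's free space
vxassist -g mytestdg maxsize
# 3. Maximum size the existing volume can grow to
vxassist -g mytestdg maxgrow testvol1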

4. How to increase the volume size or extend the volume size in Veritas Volume Manager

Option 1: To extend or increase the volume to a specific size (500 MB), use the growto option; your total volume size will then be 500 MB.

Extend to increase volume size

Option 2: To extend or increase the volume by a specific size (800 MB), use the growby option; the specified size is added to your total volume size.

Add total volume size to specific size

You can also resize the mounted volume on the fly, without unmounting it.

Resize the mounted volume size

5. How to decrease the volume size or reduce the volume size.

Option 1: To reduce or decrease the volume to a specific size (500 MB), use the shrinkto option. Your total volume size will then be 500 MB.

Reduce or decrease the volume

Option 2: To reduce or decrease the volume by a specific size (800 MB), use the shrinkby option; the specified size is subtracted from the total volume size.

Use the shrink option

To resize the mounted volume on the fly without unmounting, use the command shown in the image below:

resize mounted volume size
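
In command form, the grow and shrink operations described above map to vxassist and vxresize roughly as follows (sizes taken from the examples; shrink the file system safely before shrinking its volume):

# Grow to an absolute size of 500 MB / grow by an additional 800 MB
vxassist -g mytestdg growto testvol1 500m
vxassist -g mytestdg growby testvol1 800m
# Shrink to 500 MB / shrink by 800 MB
vxassist -g mytestdg shrinkto testvol1 500m
vxassist -g mytestdg shrinkby testvol1 800m
# vxresize grows or shrinks the volume and its VxFS file system together, even while mounted
vxresize -g mytestdg testvol1 +100m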

Key Points:

  • It is available for Windows, AIX, Solaris, Linux, and HP-UX. A modified version is bundled with HP-UX as its built-in volume manager.
  • The latest version is Veritas Volume Manager 7.4.1; release date (Windows): February 2019.
  • VxVM supports cluster file systems with CFS and Oracle RAC.
  • Base VxVM allows you to mirror your boot disk (root dg only) without any additional license.
  • VxVM allows you to stripe and mirror volumes and convert between layered and non-layered layouts while preserving the data.

Disadvantages:

  • The main disadvantage of VxVM is its cost.
  • You must pay for additional licenses for every little feature.

Veritas Volume Manager provides manageability, availability, and performance enhancements for enterprise computing environments. It offers benefits such as disk spanning, load balancing, complex multi-disk configurations, online administration, and high availability. I hope that this blog has been helpful!

Challenges with Public Cloud

One of the oldest problems facing cloud infrastructure services has been access control. Ensuring that resources can be accessed by users and services that need to access them and not by anything else has been problematic when those services are delivered through the public internet. Even as late as last year, cybersecurity data firm BinaryEdge.io listed over 35,000 publicly accessible and unsecured databases, most of which were cloud-hosted. That’s over 35,000 data breaches that have likely already happened because of misconfigured cloud resources. In short, the problem with the public cloud is that most resources shouldn’t be public.

Service Endpoints to the Rescue

Azure’s first step in service access control was the service endpoint. A service endpoint allows virtual networks to use private IP addresses to route traffic to an Azure service’s public IP address. But more importantly, it also allows you to deny traffic over the service endpoint unless it comes from its subnet. This effectively denies access to all traffic coming in from the public internet, making your Azure service accessible only to other Azure services and to selectively whitelisted IP addresses.
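
As a rough Azure CLI illustration (all resource names are placeholders), enabling a storage service endpoint on a subnet and then locking the storage account down to that subnet looks something like this:

# Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update --resource-group MyRG --vnet-name MyVnet \
    --name MySubnet --service-endpoints Microsoft.Storage

# Deny traffic by default, then allow only the subnet (plus any whitelisted IPs)
az storage account update --resource-group MyRG --name mystorageacct --default-action Deny
az storage account network-rule add --resource-group MyRG --account-name mystorageacct \
    --vnet-name MyVnet --subnet MySubnet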

This approach has some limitations, potentially the biggest being that your service still has a public endpoint. It has a firewall in front of it, but if you only want other Azure services to connect to your private database, why should that database server allow traffic over its public IP address? Availability is another issue. Some Azure services, such as Azure Kubernetes Service, don’t support service endpoints. The network traffic pattern is also inefficient, since Azure services using service endpoints still interact through public IP addresses. Finally, you have the issue of scope. Service endpoints are scoped to the subnet, meaning any resource in that subnet has access to the service. This means either a high degree of granularity in subnets (an administrative burden) or simply ignoring the access issue and hoping nothing happens. And while storage accounts have a third option, service endpoint policies, they only apply to storage accounts.


Private Endpoints: The Next Step

It’s not that Service Endpoints were a bad idea or were poorly implemented. Rather, the needs and usage of the public cloud have evolved rapidly. In response to evolving needs, Microsoft introduced another way to control connections to a service. Private Endpoints allow services to be accessible through their private IP addresses and Private Link, making connections between private IP addresses possible. And that doesn’t just mean private IP addresses in Azure. With the correct Domain Name System (DNS) configuration, you can connect to your Azure instance from your on-prem network without leaving your firewall or using the public internet.

To that end, there is no public IP address for a private endpoint. An Azure service can have both a private endpoint and a public endpoint. Creating a private endpoint does not automatically deny traffic over the public endpoint. However, if you take the extra step of denying all publicly routed traffic, you have isolated your Azure service far more effectively than if you had used a service endpoint. We can take this private connectivity a step further. It’s possible to integrate a private endpoint with a private DNS zone in Azure, resulting in private endpoints that can accept traffic from on-premises networks without routing through the internet. It’s as if the Azure service was part of the on-premises network. Finally, private endpoints solve our scoping and availability problems. Private endpoints are available on more services than service endpoints. For a full list, reference Microsoft documentation on Private Endpoint Availability and Service Endpoint Availability. Private endpoints can be scoped to a specific resource, such as a storage account, or even a sub-resource, such as specific blobs or tables in the storage account. Because Azure Active Directory can govern access to the private endpoint, this offers a very granular Role-Based Access Control (RBAC) for Azure resources.
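
For comparison, here is a hedged Azure CLI sketch of creating a private endpoint for a storage account’s blob sub-resource and the matching private DNS zone (names are placeholders, and parameter spellings can vary slightly between CLI versions):

# Create the private endpoint against the storage account's blob sub-resource
storageId=$(az storage account show --resource-group MyRG --name mystorageacct --query id -o tsv)
az network private-endpoint create --resource-group MyRG --name MyPrivateEndpoint \
    --vnet-name MyVnet --subnet MySubnet \
    --private-connection-resource-id "$storageId" \
    --group-id blob --connection-name MyPeConnection

# Private DNS zone and VNet link so the account's FQDN resolves to the private IP
az network private-dns zone create --resource-group MyRG --name "privatelink.blob.core.windows.net"
az network private-dns link vnet create --resource-group MyRG \
    --zone-name "privatelink.blob.core.windows.net" --name MyDnsLink \
    --virtual-network MyVnet --registration-enabled false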

Considerations

Private endpoints aren’t all sunshine and roses, however. They do come with some significant downsides to consider. First, they aren’t free. Private endpoints are charged by usage, while service endpoints are free. Secondly, private endpoints require more setup than service endpoints because you need to set up and configure DNS. Azure services have Fully Qualified Domain Names (FQDN) that resolve to public IP addresses, so you must configure DNS so that the service’s FQDN resolves to the private IP of its private endpoint. In the end, it takes more to use private endpoints, but you get more when using them.

Putting It All Together

Private Endpoints involve a fair number of steps and a good bit of supporting infrastructure. Because of that, a practical implementation might help visualize how everything fits together. Fortunately, Microsoft has provided a thorough set of tutorials and quickstarts to help you get your hands dirty. If you try these, and I encourage you to do so, please remember to keep an eye on how services are charged and tear down your work so that you don’t receive a surprise bill.

Conclusion

Private Endpoints are an evolution of Azure infrastructure. The upfront configuration effort and ongoing service billing for use means that you should carefully consider whether your organization needs them. For example, if you need to block all internet traffic to a service while making services available to on-premises traffic, or if you need to secure specific sub-resources in your virtual network, Azure now offers that capability through private endpoints.

What is Puppet?

Puppet is a configuration management and automation solution for server deployment, configuration, and management. It performs the following functions:

  • For every host, Puppet defines a separate configuration file. It continuously monitors and checks whether the configuration of the file has changed; if it finds any changes, it reverts the file to its original configuration on the host.
  • Dynamic scaling-up of machines.
  • It is a centralized management system that controls all the configured machines and automatically pushes changes from the master server or repository to all the configured servers.

Puppet uses a master-slave concept where the master communicates with the slave (agent) through a secure, encrypted channel using SSL.

How Do We Use It?

Puppet is used in many ways to automate tasks, such as:

  • Base system configuration that includes registry, NTP, firewalls, and services.
  • The management of web servers: Apache, Nginx, Tomcat, and IIS.
  • The management of database systems: PostgreSQL, MySQL, Microsoft SQL Server, and Oracle.
  • Management of application/middleware systems such as Redis, ActiveMQ, RabbitMQ, IBM IIB, IBM MQ, Fusion/WebLogic, and Java.
  • Source Control such as Gitlab and Github.
  • Monitoring tools like SNMP, Splunk, NewRelic, Prometheus, Sensu, Zabbix, and Nagios.
  • Patch Management. Puppet is used for OS patching of Windows and Linux servers.
  • Package management: Puppet can also be used to install software directly on Windows servers.
  • The containerization and cloud-native: Openshift, Terraform, Kubernetes, and Docker.
  • Networking such as Barracuda, Palo Alto, F5, Cisco Nexus, and  Cisco Catalyst.

Companies Adopting Puppet

There are many companies adopting Puppet to simplify their tasks. A few of them are below:

  • Spotify – Global music streaming network.
  • Google – An internet search engine.
  • Staples – An American office retail company.
  • AT&T – Carrier of both fixed and mobile networks.
  • AON – Global professional services firm.
  • US Air Force

Puppet Architecture

You can configure systems with Puppet either in a client-server architecture, using the Puppet agent and Puppet master applications, or in a stand-alone architecture, using the Puppet apply application.

Components of Puppet

  • Puppet Master – The Puppet Master holds all the configuration data. It runs on a designated server and manages tasks such as deployment and configuration.
  • Puppet Agent – The Puppet agent runs on the client machines, which the Puppet Master manages and maintains.
  • Config repository – The place where all the information related to configuration and nodes is stored. Data is pulled from it whenever required.
  • Facts – Global variables containing important machine-level information, used to analyze the current state of a node.
  • Catalog – All configurations written in Puppet are converted to a compiled format called a catalog.
  • Manifests – Manifests are the actual code for configuring the clients.
  • Class – Like other programming languages, Puppet has classes to organize its code better.
  • Resources – In Puppet code, the coding blocks are defined by declaring resources that represent packages, files, users, commands, and so on.
  • Nodes – All servers or clients that need to be managed are called nodes.
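
To tie several of these terms together, here is a small, purely illustrative manifest: a node definition that includes a class, and a class made of package, file, and service resources (all names and file content are hypothetical):

# site.pp -- illustrative only
node 'web01.example.com' {
  include profile_ntp
}

class profile_ntp {
  # Resource: install the NTP package
  package { 'ntp':
    ensure => installed,
  }

  # Resource: manage the configuration file; requires the package first
  file { '/etc/ntp.conf':
    ensure  => file,
    content => "server 0.pool.ntp.org\n",
    require => Package['ntp'],
  }

  # Resource: keep the service running and restart it when the file changes
  service { 'ntpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}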

Working with Puppet

The Puppet architecture is an agent-server (master-slave) model, where the master or primary server controls the configuration data of the client nodes that it manages.

  1. The client agent sends a certificate with its ID to the server.
  2. The server signs the certificates and sends them back.
  3. This will authenticate further communication between the client and the master.
  4. Facter collects the state (facts) of the client and sends it to the master.
  5. Based on the facts sent, the master compiles the manifests into catalogs.
  6. Catalogs are sent to the clients.
  7. The agents apply these catalogs on their machines.
  8. A report is generated by the client that describes the changes made and is sent to the master.

This process repeats at regular intervals, ensuring all client systems stay up to date through this communication between the Puppet master and the agents.
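
You can exercise this exchange manually from the command line; the sketch below assumes Puppet 6 or later (the certificate subcommands differ on older versions) and a hypothetical agent certname:

# On the agent: run once in the foreground to request a certificate, upload facts,
# fetch the compiled catalog, apply it, and send the report back to the master
puppet agent --test

# On the master: list and sign pending certificate requests
puppetserver ca list
puppetserver ca sign --certname web01.example.com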

How Puppet Works

Server-agent communication follows this pattern:

  • The Puppet slave requests the master certificate.
  • The Puppet master sends the master certificate.
  • The Puppet master requests the slave certificate.
  • The Puppet slave sends the slave certificate.
  • Puppet Master requests.

Common Issues Faced While Installing the Agent

  • Mismatch of the configuration file.
  • Master server not reachable due to port block.
  • DNS host not reachable.
  • Servers not reachable during remote installation.

Installation of Puppet on Windows Servers

There are three methods to install Puppet on the servers:

Remote Installation on the Group of Servers

Requirements for remote installation:

  • An input file that has the list of servers.
  • An installer script that contains code to loop through the remote installations, pop up the credentials prompt required for authentication, and execute the script remotely on the servers.
  • Jump server where you can execute the remote script.

The Puppet agent can be installed on a group of servers simultaneously by running an installer script (installer.ps1) remotely. The installer script fetches and downloads the required files from the repository through the Puppet master (https://puppetmaster:8140/packages/current/install.ps1) and copies them to the required system directory on each server. A rough sketch of such a loop appears after the steps below.

Steps for installation:

  • Open Powershell as admin
  • Redirect the prompt to the directory where the remote installation script is located.
  • Run the remote installation script in .ps1 format.
  • Enter the credentials that authenticate the domain.
  • The script then remotely installs the agent on the list of servers given in the input file.
  • The script outputs the installation status, whether success or failure.
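
To make the description concrete, here is a rough PowerShell sketch of such a remote-install loop. It is not the actual installer script; the file names, parameters, and use of Invoke-Command are assumptions for illustration.

$servers = Get-Content .\servers.txt        # input file with the list of servers (assumed name)
$cred    = Get-Credential                   # pops up the credential prompt for domain authentication
foreach ($server in $servers) {
    Invoke-Command -ComputerName $server -Credential $cred -ScriptBlock {
        # Download install.ps1 from the Puppet master and run it (illustrative only)
        [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
        Invoke-WebRequest -Uri 'https://puppetmaster:8140/packages/current/install.ps1' `
            -OutFile "$env:TEMP\install.ps1" -UseBasicParsing
        & "$env:TEMP\install.ps1"
    }
    Write-Host "Triggered Puppet agent install on $server"
}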

Local Installation

Puppet agents can also be installed locally by copying the installer.ps1 file to the system directory. Once the file is copied to the local directory, we can install the Puppet agent with the command below.

.\Installer.ps1 -server puppetmasterserver.example.com -ge_environment <environment> -ge_location <serverlocation> -doinstall $true

Steps for local installation:

  • RDP to the server where the agent needs to be installed.
  • Open Powershell as admin in the server.
  • Redirect to the location where the script is present.
  • Use the above command to install the puppet agent on the server locally.
  • Once the installation is completed, you will see the installation status on the PowerShell window.

PowerShell window

Workgroup Installation

Steps for installation:

  • Installing the Puppet agent on workgroup servers is the same as the local installation. However, if the Puppet server’s DNS name is not resolvable from the Windows host and there is no Host A DNS record for the Puppet server, do the following:
  • Edit the host file that is in the path – C:\Windows\system32\drivers\etc\hosts
  • Add a line to the file with the IP address and the Puppet master server name, in the format 192.168.xx.xx puppetserver.example.com

Once the file is saved, follow the procedure of local installation.

Conclusion

The Puppet automation tool is robust, offers a user-friendly interface, and is very declarative. Installation was very easy for me; there was nothing to worry about regarding dependencies during installation.


About the Podcast

I had the pleasure to once again be a guest on the .NET Rocks! podcast last month. This year marked my 11th time on the show!

Carl, Richard, and I talked about how the cloud has changed data analytics. We discussed the latest data work at AIS and bringing a developer’s view to the approach. The cloud has changed how disparate data sources are brought together for analytics. With the cloud’s compute-on-demand, you don’t need to do many transformations of data as it’s loaded, but you can test it! This conversation dives into how CI/CD techniques can be applied to data to make for accurate data analytics and intelligent ingestion pipelines. Check out the full episode to learn more.

Listen to the full podcast here

Related Content

Accelerate your journey to become a data-driven organization. Many companies struggle to extract value from their data at the pace that today’s world moves and data grows. Our Data Platform Modernization Proof of Value engagement provides you a path to enable safe and secure access to data sets, empowering business users to unlock the next big opportunities for the company. Learn more and reach out to AIS today.

As your organization continues the digital transformation journey, Microsoft offers a highly beneficial service for protecting and containerizing corporate data and assets for the remote workforce, such as employees, consultants, or contractors: Desktop-as-a-Service. In brief, Desktop-as-a-Service provides a virtual desktop infrastructure, eliminating the need to manage the actual infrastructure! Specifically, the customer is responsible for app deployments, custom images, virtual machine sizing and deployment, directory services integration, and data center network connectivity (e.g., site-to-site VPN, SD-WAN, ExpressRoute, etc.). Today, Microsoft offers two solutions for Desktop-as-a-Service: Azure Virtual Desktop (formerly Windows Virtual Desktop) and Windows 365. Now, comes the business decision: which one?

Azure Virtual Desktop

Azure Virtual Desktop allows your organization to deploy persistent and non-persistent virtual desktops, whether direct or automatic assignment, along with complete compute elasticity. Also, Azure Virtual Desktop enables your organization to deploy multi-session hosts and publish RemoteApps, depending on organizational requirements.
Consider several configuration steps:

  • Host pool settings (e.g., allow USB redirecting, RDP settings)
  • Out-of-the-box or custom images
  • Application groups
  • User profile storage
  • Load-balancing between non-persistent virtual desktops
  • Device management

Also, there are key decisions to consider when utilizing Azure Virtual Desktop, such as disaster recovery and business continuity. Finally, while your organization must understand the consumed compute operational costs in Azure, keep in mind the licensing costs for the Windows desktop OS (e.g., perpetual or subscription-based). Overall, the proper planning and execution make Azure Virtual Desktop a beneficial and flexible solution for your organization.


Windows 365

Windows 365 offers an end-to-end solution for persistent virtual desktops, deployed and managed via Microsoft Endpoint Manager (formerly Microsoft Intune). In brief, some of the prerequisites include network connectivity to Active Directory on-premises (Azure AD Join coming soon!), identity and device synchronization via Azure AD Connect, Azure subscription, Azure virtual network, and DNS resolution to Active Directory on-premises. In addition, there are some configuration steps to consider, such as a custom or out-of-the-box images, provisioning policies, and user settings. Finally, a pivotal decision to consider and understand is the licensing types for Windows 365, dependent upon the compute resource size requirement (e.g., vCPU, RAM, and storage). Overall, while there may be a lack of computing elasticity and disaster recovery flexibility, Windows 365 is a perfect solution to quickly deploy virtual desktops to the remote workforce at a fixed cost, regardless of actual compute resource usage.

How Do You Decide?

Azure Virtual Desktop and Windows 365 provide various options to meet specific organizational needs.

Ultimately, deciding on Azure Virtual Desktop and Windows 365 is dependent upon several factors:

  • Operational versus fixed costs
  • Disaster recovery and business continuity expectations
  • Compute elasticity and auto-scaling
  • Device management roadmap
  • IT administration functions

Below are common scenarios and possible solutions between Azure Virtual Desktop, Windows 365, or both!

Scenario and Solution table

Conclusion

I hope this blog has been helpful when choosing between Azure Virtual Desktop and Windows 365 for Desktop-as-a-Service.

I recently had the opportunity to perform a lift-and-shift migration of a SharePoint 2016 environment to cloud Infrastructure as a Service (IaaS) in Amazon Web Services (AWS). To support the long-term goals of the client, Okta would be implemented for authentication. Additionally, the client had several product integrations, including SQL Server Reporting Services (SSRS) (Integrated Mode), Office Online Server (OOS), Gimmal, and Nintex.

One of the first problems that I ran into was that there was very little knowledge or lessons learned available. Okta does provide an Integration Guide; however, it simply walks you through setting up Okta as a Trusted Claims Provider for SharePoint. The guide does not cover or even mention potential architectural, migration, or integration concerns. I found a useful article at SharePointDoctors.com that does a great job filling in some of the gaps left in the Okta documentation, and I highly recommend reviewing it. One of the most critical points made by SharePointDoctors.com was to “Test, Test, and Test Again,” which is exactly what we did to discover and solve migration issues. In this post, I will share some of the issues we encountered migrating to Okta and what we did to mitigate them.

Lesson 1: Authentication Providers and the People Picker

When configuring Okta, there is no way to switch to Okta entirely; Windows Authentication is still required for service accounts and search crawls. Watching an Okta product presentation, around the 8:20 mark, the presenter glosses over this fact and claims that when they are ready for the final cutover, they disable Windows Authentication.

Claims Authentication Types

Initially, we had both Okta and Windows Authentication enabled for the Default Zone. If you configure SharePoint this way, users will be asked to select which credentials to use to log on when they navigate to the site.

Windows Authentication Sign In

If you do not want users to be prompted with this, follow these steps:

  1. Open Central Administration
  2. Select Manage Web Applications
  3. Select your web application and then Authentication Providers from the ribbon
  4. Select the Zone (usually Default)
  5. Scroll down to the Sign In Page URL and select Custom Sign In Page
  6. Enter /_trust/

Custom Sign In Page

This will force the user to use Okta when navigating to the site. You can find additional information about the login sequence here.

When we configured both authentication modes in the Default Zone, we found that whenever the People Picker was used, it would return two users: The Windows User and the Okta User. We knew that this would be very confusing and decided to extend the Web Application with a different Zone. Thus, we had a web application (ex. internal.contoso.com) in the Default Zone with Windows Authentication and a second web application (ex. contoso.com) in the Internet Zone with Okta Authentication. Using this arrangement, admins, third-party integrations, and search crawls could operate in the Default Zone, and all users would have access within the Internet Zone. You will understand why we chose this as you see the issues we encountered later in this article.
If the issues that we encountered are not applicable and you decide to use both authentication types in the Default Zone, you can hide AD from the People Picker using the following script:

Add-PSSnapin Microsoft.SharePoint.Powershell
# Get the claim provider manager and the Active Directory claim provider
$cpm = Get-SPClaimProviderManager
$ad = Get-SPClaimProvider -Identity "AD"
# Hide AD from the People Picker and persist the change
$ad.IsVisible = $false
$cpm.Update()


Lesson 2: Migrating Claims

Using Move-SPUser

When I started scripting out the migration of users, I initially started with the script provided by SharePointDoctors.com, iterating over each site collection and calling Move-SPUser for each user.

However, SharePointDoctors.com warns that migrating a user twice is bad:
“SharePoint deletes all instances of the original user and replaces it with a new blank one that owns nothing. As a result, you lose all permissions information about the user.”

This concerned me greatly since we had four web applications and numerous site collections with the same user in several places. I was concerned that if Move-SPUser was called more than once for the same user, the user would be overwritten; after meeting with Microsoft, I found that this is not the case. The overwrite concern arises if a user logged into the system pre-migration, creating a scenario where an Okta user (ex. c:0-.t|okta|chettinger) and a Windows user (ex. i:0#.w|ais\chettinger) both exist in the system. Once Move-SPUser migrated the Windows user, the original Okta user would be overwritten with a new Okta user. In other words, there is no problem with calling Move-SPUser more than once if you happen to do so over multiple site collections.

Planning Time for User Migration

Another interesting issue that we encountered was the time it took to migrate many users (20k+). After working with Microsoft and researching the logs, we found that it took longer to migrate a user on servers where 3rd party integrations were installed. For example, when we ran the script on the SSRS or Search servers, it would only take 1 second to migrate a user. If we ran it on one of the application servers, it would take 1.5 minutes per user. In our case, we had Nintex and Gimmal installed. After working with Microsoft and testing thoroughly, we determined that it was perfectly safe to run the migration on the faster servers and that there was no negative impact on the migration.

Using SPFarm.MigrateUser()

While working on the script to migrate groups using SPFarm.MigrateGroup() I found that there was also a SPFarm.MigrateUser() function. It seemed more efficient to build a list of users and iterate over it, calling SPFarm.MigrateUser() for each one. Once again, we met with Microsoft, and they assured us that the SPFarm.MigrateUser() function would behave just like the Move-SPUser command, only at the farm level. Ultimately, we used this as it allowed us to batch up the user migration into PowerShell Jobs easily. This is the script that we ended up using across multiple servers.

Add-PSSnapin Microsoft.SharePoint.Powershell
$domain = "ais"
$dryrun = $true

$muliplier = 0    #The number of server instance migrating users (0,1,2,3,etc.)
$jobCount = 20    #The number of parallel jobs
$chuckSize = 100  #The number of users to process at one time
$maxUsers = 2500  #The number of users to process on this server

$usersString = 
"<ActiveUsers>
    <ActiveUser>i:0#.w|ais\user1</ActiveUser>
    <ActiveUser>i:0#.w|ais\user2</ActiveUser>
    <ActiveUser>i:0#.w|ais\user3</ActiveUser>
    <ActiveUser>i:0#.w|ais\user4</ActiveUser>
</ActiveUsers>"

#Using a string in this example, but a separate file would be more appropriate
$ScriptRoot = Split-Path $MyInvocation.MyCommand.Path
#$UsersXml = (Get-Content "$($ScriptRoot)\ActiveUsers.xml") 

$UsersXml = [xml]$usersString  # cast the string to XML so the ActiveUser nodes can be enumerated
$users = $UsersXml.ActiveUsers.ActiveUser

#Use Dry Run to test
$dryrunText = "[DRYRUN]"
if($dryrun -eq $false){
    $dryrunText = ""
}

if($maxUsers -ne $null){
    $users = $users| select -Skip ($maxUsers*$muliplier) | select -first $maxUsers
}

$oktaClaimChar = Get-SPClaimTypeEncoding |  Where-Object { $_.ClaimType -like '*Okta*' }

Write-Host "$($dryrunText)Start: $(Get-Date)"

#Build Chunks
$chunks = [System.Collections.ArrayList]::new()
for ($i = 0; $i -lt $users.Count; $i += $chuckSize) {    
    
    if (($users.Count - $i) -gt ($chuckSize-1)  ) {
        $chunks.add($users[$i..($i + ($chuckSize-1))]) | Out-Null
    }
    else {
        $chunks.add($users[$i..($users.Count - 1)]) | Out-Null
    }
}


for ($i = 0; $i -lt $chunks.Count; $i++) {  
    $chunk = $chunks[$i]
    Write-Progress -Id 0 -Activity Updating -Status 'Progress->' -PercentComplete ($i/$chunks.Count * 100) -CurrentOperation Chunks
    $running = @(Get-Job | Where-Object { $_.State -eq 'Running' })
    if ($running.Count -ge $jobCount) {
        $running | Wait-Job -Any | Out-Null
    }
    $jobName = "Chunk$i"
    $job = Start-Job -Name $jobName -OutVariable $job {
        Add-PSSnapin Microsoft.SharePoint.Powershell
        $chunk = $using:chunk
        $dryrun = $using:dryrun
        $dryrunText = $using:dryrunText
        $i = $using:i
        
        $oktaClaimChar = $using:oktaClaimChar        
        $farm = Get-SPFarm

        for ($j = 0; $j -lt $chunk.Count; $j++) {
            $user = $chunk[$j] 
            if($user -ne $null)
            {
                $oldUserName = $user.ToLower()
                $newUserName =  $user.Replace("i:0#.w|", "i:0$($oktaClaimChar.EncodingCharacter).t|okta|")
                $newUserName =  $newUserName.Replace("$domain\", "")               
                if($oldUserName -ne $newUserName)
                {
                    Write-Host "  $($dryrunText) Moving User $oldUserName  to $newUserName"
                    if($dryrun -eq $false)
                    {     
                        try{    
                            $farm.MigrateUserAccount($oldUserName,$newUserName,$false)
                        }catch{
                            Write-Host $_
                        }
                    }              
                }     
            }                       
        }      
    }
}
Wait-Job * | Out-Null 

# Process the results
foreach($job in Get-Job)
{
    $result = Receive-Job $job
    Write-Host $job.Name
    Write-Host $result
}
Remove-Job -State Completed
Write-Host "$($dryrunText)End: $(Get-Date)" 

Lesson 3: Integration with SQL Server Reporting Services (SSRS)

As mentioned earlier, our environment was running SQL Server Reporting Services (SSRS) – Integrated Mode. There were no changes necessary for reports to work for the end-user. However, for report authors to create and edit reports, they needed to use Windows authentication. How you decide to handle this is tightly coupled with what I covered in Lessons 1 and 2. If you choose to use both Okta and Windows Authentication in a single zone, you will face issues when editing a report with the Report Builder while logged in as an Okta user.

This was the second reason why we went with two authentication zones. To edit the report, the authors would connect to the Default Zone URL (ex. https://internal.contoso.com); however, if the data source is a SharePoint list, the Internet Zone URL is used (ex. https://contoso.com). SharePoint will respect the permissions of the SharePoint user (in this case, Okta).

Input Data Source Type

For all of this to work together, we migrated content, then migrated users and groups to Okta claims, and then added new Windows groups so that certain users could log in with Windows credentials and edit reports.
CAUTION: This creates the scenario that I warned about; so, make sure your user migration was successful before adding these groups and letting report authors access the system. If you migrate the Windows user to Okta, and then the user logs in with windows credentials, there will be two claims in the system (ex. c:0-.t|okta|chettinger and i:0#.w|ais\chettinger). If you were to migrate the new Windows user a second time, it would likely overwrite the Okta user and its permissions.

Lesson 4: Integration with Microsoft Office Products

Microsoft Word

Okta did not seem to consider Microsoft Office when developing its SharePoint integration solution. Editing items in Word, Excel, and PowerPoint is an important feature, and our users wanted it to work. When the Open in Word option is used on a file in SharePoint, the Word application will open on the user’s computer and attempt to authenticate with Okta.

Open in Word

Under the hood, Office products use an outdated browser control based on Internet Explorer version 9. The Okta login page would not render correctly in the browser control due to compatibility issues; instead, it would throw a script error, and controls would not render.

Script Error

We had to work with Okta to get them to change it for our login page. Microsoft loosely explains how to fix it in this article; however, Okta had to detect the browser version and add the meta tag accordingly. Ultimately, if you plan on using Office products with SharePoint, you will need to work with Okta to get your login page fixed.

Microsoft Outlook

We also ran into a unique issue with Microsoft Outlook and adding Calendars. Outlook 365 users had to go to File > Options > Trust Center > Form-based Sign-in and choose Ask me what to do for each host.

Trust Center

For Outlook clients before 365 (Outlook 2019 in our case), the Form-based Sign-In option was unavailable. We had to work with our Group Policy Object (GPO) Administrators and create a GPO to set this and add the hostname (ex. contoso.com). Unfortunately, this only partially fixed the problem; once users added the calendar, they started getting prompted for Windows credentials. After working with Microsoft, we found out that when the calendar is added to Outlook, it stores the URL, and it gets that URL from SharePoint based on the first Zone in which it finds one.

Outlook checks the Zones in the following order: Intranet, Default, Extranet, Internet, Custom. If you remember, we had the following:
  • Intranet – empty
  • Default – https://internal.contoso.com (Windows Auth)
  • Internet – https://contoso.com (Okta Auth)
  • Extranet – empty
  • Custom – empty

Outlook was storing the URL from the Default Zone, which was Windows authentication, and prompted the user. So, what was the fix? First, we had to move https://contoso.com to the Intranet Zone so that Outlook would store it instead.

  • Intranet – https://contoso.com (Okta Auth)
  • Default – https://internal.contoso.com (Windows Auth)
  • Internet – empty
  • Extranet – empty
  • Custom – empty
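
For reference, the zone change itself can be made with the SharePoint alternate access mapping cmdlets. This is a hedged sketch using the example URLs above, not the exact commands we ran:

Add-PSSnapin Microsoft.SharePoint.Powershell
# Move the Okta-authenticated URL into the Intranet zone so Outlook stores it first
Set-SPAlternateURL -Identity "https://contoso.com" -Zone Intranet
# Confirm the zone assignments for the web application
Get-SPAlternateURL -WebApplication "https://internal.contoso.com"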

Lesson 5: Integration with Nintex Forms and Workflows

When we started testing Nintex Forms and Workflows, we quickly found that users had not been migrated as we had hoped. So the first thing we did was add the Nintex MigrateUser operation right after the farm-migrate user command in the script above:

$farm.MigrateUserAccount($oldUserName,$newUserName,$false)
NWAdmin.exe -o MigrateUser -oldUser $oldUserName -newUser $newUserName 

According to Nintex, this command only updates user settings and history, not the workflow definitions or running workflows. So, to fix the workflows, I wrote a script that recursively goes through all of SharePoint and looks for the hidden NintexWorkflows libraries. For each of these libraries, it exports each workflow, replaces the Windows claim with an Okta claim, and then redeploys the workflow. It does all of this using NWAdmin.exe operations and was approved by Nintex. Here is the script that we used:

$domain = "ais"
$dryrun = $true
$dryrunText = "DRYRUN"


$oktaClaimChar = Get-SPClaimTypeEncoding | Where-Object { $_.ClaimType -like '*Okta*' }
$encodedValue = [int]([char]($oktaClaimChar.EncodingCharacter))
$encodingCharacter = "&#$($encodedValue);"
if($dryrun -eq $false){
    $dryrunText = ""
}

function CheckWorkflow($asset)
{ 
    $text = [System.Text.Encoding]::ASCII.GetString($asset.OpenBinary())

    if($text.Contains("i:0#.w|$domain\"))
    {        

        try {
            $assetConfig = $asset.ParentFolder.Files | Where-Object Name -eq $($asset.Title +".xoml.wfconfig.xml")
            $configText = [System.Text.Encoding]::ASCII.GetString($assetConfig.OpenBinary())
            $configXml = [xml]$configText  # cast to XML so the ListID attribute can be read
            $listId = $configXml.WorkflowConfig.Association.ListID

            $path = $asset.Web.Url.Replace('https://','')
            $pattern = '[\\/]'
            $path = $path -replace $pattern, '-'
            $nwfFile = "C:\Temp\NintexMigration\$path\$($asset.title).nwf"


            if((Test-Path "C:\Temp\NintexMigration\$path") -eq $false){
                New-Item -ItemType Directory -Path "C:\Temp\NintexMigration\$path"
            }

            if($null -ne $listId)
            {
                $list = $asset.Web.Lists | Where-Object Id -eq $listId
                $listName = $list.Title

                $output = & NWAdmin.exe -o ExportWorkflow -siteUrl $($asset.Web.Url) -list "$($listName)" -workflowName "$($asset.title)" -fileName "$($nwfFile)" -workflowtype list           
                if ($output  -eq "Exporting complete.")
                {
                    $nwfText = Get-Content -Path "$($nwfFile)"
                    $newNwfText = $nwfText
                    $newNwfText = $newNwfText.Replace("i:0#.w|$domain\","i:0$($encodingCharacter).t|okta|")    
                    Set-Content -Path "$($nwfFile)" -Value $newNwfText

                    Write-Host "$dryrun TextChange Type=""List"" SiteUrl=""$($asset.Web.Url)"" TargetList=""$($listName)"" WorkflowName=""$($asset.title)"" NWFFile=""$($nwfFile)"" Web=""$($asset.Web.Url)"" File=""$($asset.Url)"" DateTime=""$(get-date -f MM-dd-yyyy_HH_mm_ss)"""
                    if($dryrun -eq $false) {  
                        & NWAdmin.exe -o DeployWorkflow -siteUrl $($asset.Web.Url) -targetlist "$($listName)" -workflowName "$($asset.title)" -nwffile "$($nwfFile)" -overwrite
                    }
                } else{
                    Write-Host "$dryrunText $output"
                }
            }
            else
            {
                $output = & NWAdmin.exe -o ExportWorkflow -siteUrl $($asset.Web.Url) -workflowName "$($asset.title)" -fileName "$($nwfFile)" -workflowtype site            
                if ($output  -eq "Exporting complete.")
                {
                    $nwfText = Get-Content -Path "$($nwfFile)"
                    $newNwfText = $nwfText
                    $newNwfText = $newNwfText.Replace("i:0#.w|$domain\","i:0$($encodingCharacter).t|okta|")   
                    Set-Content -Path "$($nwfFile)" -Value $newNwfText

                    Write-Host "$dryrun TextChange Type=""Site"" SiteUrl=""$($asset.Web.Url)"" WorkflowName=""$($asset.title)"" NWFFile=""$($nwfFile)"" Web=""$($asset.Web.Url)"" File=""$($asset.Url)"" DateTime=""$(get-date -f MM-dd-yyyy_HH_mm_ss)"""
                    if($dryrun -eq $false) {  
                        & NWAdmin.exe -o DeployWorkflow -siteUrl $($asset.Web.Url) -workflowName "$($asset.title)" -nwffile "$($nwfFile)" -overwrite
                    }
                } else{
                    Write-Host "$dryrunText $output"
                }
            }

        } catch {
            Write-Host $_
        }
    }   
}

function CheckWorkflows($w)
{
    foreach ($list in $w.Lists)
    {
        if ( $list.title.tolower().contains( "nintexworkflows" ) )
        {
            foreach ($item in $list.items)
            {
                $asset = $item.file
                CheckWorkflow($asset)
            }
        }
    }
    foreach($sub in $w.Webs)
    {
        CheckWorkflows($sub)    
    }
}

$spWebApps = Get-SPWebApplication
foreach ($spWebApp in $spWebApps)
{      
    foreach ($spSite in $spWebApp.Sites)
    {
        if ($null -ne $spSite)
        {
            CheckWorkflows($spSite.RootWeb)
            $spSite.Dispose()
        }
    } 
}

Conclusion

There is much to consider if you want to use Okta as your authentication provider for SharePoint On-Premises. If you are using integrations such as Nintex, SSRS, and Microsoft Office, there will be a lot of work ahead of you. Hopefully, this blog post will save you some time with planning and risk mitigation. Either way, the most important takeaway is to be sure to test thoroughly.


In early October of 2021, our team was honored to be present as our own Brigadier General Stephen Iacovelli stepped down as Commanding General of the 94th Training Division after admirably leading the division for 38 months.

Stephen enlisted in the Army as a Private in 1984 and was later named a distinguished graduate from his AIT class at Fort Lee, Virginia. He was later commissioned as an Engineer Officer through the Army ROTC Program and later became Psychological Operations qualified at Ft. Bragg, NC. He holds a Master’s in Information Systems and a Bachelor’s in Computer Science from Roosevelt University in Chicago, IL, and a Master of Strategic Studies from the Army War College, where he was a Distinguished Graduate. He is also a National Security Fellow at Syracuse University’s Maxwell School of Citizenship and Public Affairs.

We asked Stephen to share a bit about his experience while in service and his experience with us at AIS, and the recent ceremony.

Can you tell us a little about the Change of Command ceremony in Ft. Lee?

The change of command ceremony at Ft. Lee was amazing! The event started the night prior, with a military dining out, and concluded with the actual change of command ceremony Sunday morning. My Command Sergeant Major planned everything for months and wouldn’t let me get involved as he wanted the details to be a surprise. He even brought in some drill sergeants from our battalion at Ft. Leonard Wood to be the color guard. He knew I had previously commanded both a drill sergeant battalion and brigade and that I had also been enlisted and went through basic training. In addition, we had support from an artillery battery, who provided an 11-gun salute in my honor, and members from the TRADOC band out of Ft. Eustis.

In addition to the command teams from the division’s five brigades and 25 battalions, I was blessed to have friends and family present, including AIS President and CEO, Larry Katzman (and son Alex), and David Skiavo, VP, Workforce Automation & Analytics. We even had representation from the 94th Infantry Division Historical Society, who, two years ago, gave me the honor of being a guest speaker at a monument dedication at the National Infantry Museum, commemorating the 100th anniversary of the division. The whole weekend was very emotional, and it was difficult leaving the 94th after having been with it for 38 months. It’s an extreme honor and privilege to have been the longest-serving commander in the 103-year history of the division.

You’ve been with AIS for nearly nine years. What’s the experience at AIS been like throughout your service, and what keeps you with the company?

My time at AIS has been great. I’ve been fortunate to have worked on a variety of projects and assignments over the past 8-1/2 years. As I said in my remarks at my change of command, AIS has bent over backward to accommodate me in my day-to-day responsibilities, especially over the past 3-years, which has allowed me to meet my extensive military commitments – I’m very grateful for that support. I really enjoy working with everyone here at AIS because everyone is passionate about technology and wants to provide the highest quality solutions for our clients. Also, everybody here really goes out of their way to help each other – that’s something you don’t often find in the civilian workplace. This company truly is something special!

If you could have given yourself one piece of advice before beginning your career, what would it be?

Focus on what you can affect, and don’t get upset about those things that are out of your control. Your time and effort are much better spent working on things you can impact than dwelling or lamenting on those you can’t, because that’s just a waste of time and energy. Too often, I’ve seen people get so spun up about things that are out of their control that they lose sight of what they can and should be doing.

Supporting Members of the Military Community

AIS is proud to support our nation and its service members. We continue to hire through the Microsoft Software and Systems Academy (MSSA) program, an industry-leading 17-week training for high-demand careers in the cloud. MSSA helps to transition service members with skills required for today’s fast-growing industry.

David Skiavo quote

Grow with Us

We are proud to have a quickly growing team, some of whom have been with AIS for upwards of 35 years. We feel that speaks to our commitment to our employees over the years they have been with us. Our people are the soul of AIS and make us who we are, with continued growth year after year. We’re a thriving group that works together to drive the mission of our clients forward. We search for individuals that are willing to learn and grow alongside us in an ever-evolving industry. If you’re looking for an opportunity to take your career to the next level, AIS is the place for you!


Point-to-Site Virtual Private Network (VPN) connections are helpful when you want to connect to your VNet from a remote location. They let us securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. This blog will outline the steps to create and test a Point-to-Site VPN using the Azure certificate authentication method.

Create a VNet

Sign in to the Azure portal.
In Search, type Virtual Network.

Creating virtual network

Select Virtual Network from the Marketplace results.

Virtual Network

Once you select Create, the Create virtual network page will open.
On the Basics tab, configure Project details and Instance details VNet settings.

Virtual Network Creation

SLVNET

Create the VPN Gateway

A VPN gateway is a specific type of virtual network gateway used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. Each virtual network can have only one VPN gateway. The virtual network gateway uses a specific subnet called the gateway subnet. The gateway subnet is part of the virtual network IP address range you specify when configuring your virtual network. It contains the IP addresses that the virtual network gateway resources and services use.
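
If you prefer to script this part, a hedged Azure CLI sketch of adding the gateway subnet to the VNet created earlier might look like the following (the resource group name and address prefix are placeholders; the subnet must be named GatewaySubnet):

# Add the dedicated gateway subnet to the virtual network
az network vnet subnet create --resource-group MyRG --vnet-name SLVNET \
    --name GatewaySubnet --address-prefixes 10.1.255.0/27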

Virtual network gateway

On the Basics tab, fill in the values for Project details and Instance details.

Completing VPN basics

Public IP Address input

Note: Deployment of the virtual network gateway may take up to 45 minutes.

VirtualNGateway


Generating Certificates

Azure uses certificates to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then considered ‘trusted’ by Azure to connect P2S to the virtual network. You also generate client certificates from the trusted root certificate and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.

Generate a Root Certificate

Use either a root certificate generated with an enterprise solution (recommended) or generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the private key) as a Base64 encoded X.509 .cer file. Then, upload the public certificate data to the Azure server.

Open PowerShell as an Administrator and run the following script.

$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=SLP2SRootCert" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

Admin Windows PowerShell

This will create a root cert and install it under the current user cert store.

Generating Client Certificates from Root Certificate

Open PowerShell as an Administrator and run the following command:

Get-ChildItem -Path "Cert:\CurrentUser\My"

This should provide a thumbprint:

PowerShell thumbprint

Next, run the following command. The thumbprint should match your certificate.

$cert = Get-ChildItem -Path "Cert:\CurrentUser\My\B1C79D177D465E76FF74243F7553EA4837FD137B"

Thumbprint to match certificate

Finally, you’ll need to run this to generate your client certificate.

New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=SLP2SClientCert" -KeyExportPolicy Exportable -NotAfter (Get-Date).AddYears(1) `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" `
-Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

Run and generate certificate

We now have the certs in place, but we need to export the root certificate to upload it to Azure.
First, export the root certificate’s public key (.cer):

Hit the Windows Key + “R” to bring up the Run dialog box and type in “certmgr.msc”. When the management console opens, you should see your newly created certificate in “Current User\Personal\Certificates”. Right-click on your newly created cert and go to All Tasks > Export.

Export certificates

In the Wizard, click Next.

Export Wizard

Select No, do not export the private key, and then click Next.

Do not export private key

On the Export File Format page, select Base-64 encoded X.509 (.CER), and then click Next.

Pick file format

For File to Export, browse to the location to which you want to export the certificate. Specify your file name. Then, click Next.

Name File to export

Click Finish to export the certificate. Your certificate is successfully exported!
The exported certificate looks similar to this:

Exported Certificate

If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to Azure. If you open your certificate with Notepad and it does not look similar to this, typically, this means you did not export it using the Base-64 encoded X.509(.CER) format. Additionally, if you want to use a different text editor, some editors can introduce unintended formatting in the background. This can create problems when uploading the text from this certificate to Azure.

Open Certificate in notepad

Configure Point to Site Connection

  • The next step of this configuration is to configure the point-to-site connection. Here, we will also define the client IP address pool used by VPN clients.
  • Click on the newly created VPN gateway connection.
  • Then in a new window, click on Point-to-site configuration
  • Click on Configure Now
  • In a new window, type the IP address range for the VPN address pool. We will be using 20.20.20.0/24. For tunnel type, use both SSTP & IKEv2. Linux and other mobile clients use IKEv2 by default to connect. Windows also tries IKEv2 first and then falls back to SSTP. For authentication type, use Azure Certificates.
  • In the same window, there is a place to define a root certificate. Under root certificate name, type the cert name, and under public certificate data, paste the root certificate data (you can open the cert in Notepad to get the data).
  • Then click on Save to complete the process.
  • Note: when you paste certificate data, do not copy the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- text.

Point to Site configuration

Testing VPN Connection

Log in to the Azure portal from the client machine and go to the VPN gateway configuration page.
Click on Point-to-site configuration.
Next, click on Download VPN client.

Download VPN client

We can see a new connection under the Windows 10 VPN page.

New VPN connection

Click on connect to VPN. Then it will open this new window. Click on Connect.

Connect new VPN

Adding a VPN Connection

Run ipconfig to verify IP allocation from the VPN address pool.

Run ipconfig to verify IP location

Congratulations! You’ve successfully configured a Point to Site VPN Connection using Azure Certificate Authentication.

AIS has been working with Azure since 2008. Interested in learning more? Reach out to AIS today.