Introduction

PowerShell DSC is one of the most potent configuration managers for the Windows operating system. Even with the rising popularity of Ansible and SaltStack, PowerShell DSC remains supreme for enforcing desired state on a Windows VM; Ansible itself includes the win_dsc module, which allows Ansible to run PowerShell DSC. In this blog post, we will dive deeper into one of PowerShell DSC’s most powerful aspects, 3rd Party Resources, and how they interact with Azure Automation.

3rd Party Resources are PowerShell DSC modules created by the community. Any member of the PowerShell community can publish a module, and there are tons of modules out there to choose from. Modules are kept in repositories; the best-known and default PowerShell repository is the PowerShell Gallery (PSGallery), which is run by Microsoft and hosts modules published by the community. Modules in the PSGallery can be discovered, downloaded, and installed with the PowerShellGet module.
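
For example, a minimal PowerShellGet session for finding and installing a community DSC module might look like the sketch below (the module name is just an illustration):

# Search the PowerShell Gallery for the module
Find-Module -Name ActiveDirectoryDsc -Repository PSGallery

# Install it for all users (requires an elevated session)
Install-Module -Name ActiveDirectoryDsc -Repository PSGallery -Scope AllUsers

# List the DSC resources the module exposes
Get-DscResource -Module ActiveDirectoryDsc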

As developers and infrastructure engineers, we have many reasons to script the services we build. Often, instead of developing behavior or scripts from scratch, it is much easier to leverage the work others have done to expedite a task’s completion. 3rd Party Modules provide easily repeatable code that can become production-ready through collaboration.

Often, DSC configuration can become complicated. Engineers can be asked to do many things, from creating an Azure AD domain to configuring OMS solutions associated with a VM, or even integrating with non-native Azure products, such as Splunk.

These may all seem very daunting, but don’t fret! Members of the PowerShell community have dealt with these problems and many others, and often you will find third party modules to help do the work for you.

Here is an example of a Third Party Resource named ActiveDirectoryDsc, which helps with the promotion, configuration, and management of Active Directory.
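
A minimal configuration sketch using ActiveDirectoryDsc is shown below. The domain name is a placeholder, credential handling (certificates or configuration data) is omitted for brevity, and parameter names should be verified against the documentation for the module version you install:

Configuration NewDomainSketch
{
    param
    (
        [Parameter(Mandatory = $true)]
        [System.Management.Automation.PSCredential]
        $Credential
    )

    # Built-in resources plus the community ActiveDirectoryDsc module
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName ActiveDirectoryDsc

    Node 'localhost'
    {
        # Install the AD DS role before promoting the node
        WindowsFeature 'ADDS'
        {
            Name   = 'AD-Domain-Services'
            Ensure = 'Present'
        }

        # Promote this node to the first domain controller of a new forest
        ADDomain 'contoso.com'
        {
            DomainName                    = 'contoso.com'
            Credential                    = $Credential
            SafemodeAdministratorPassword = $Credential
            DependsOn                     = '[WindowsFeature]ADDS'
        }
    }
}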

Azure Automation is a robust PaaS offering from Azure that allows for a cloud-based DSC pull server. Within Azure Automation, it is possible to add both custom modules that the user develops and third-party modules available in any hosted source.
⚠ Note that organizations in locked-down environments can maintain their own repository of PowerShell modules that have been vetted by their InfoSec team. It is possible to host such an artifact feed using Azure DevOps Artifacts; an internal team can publish its own versions of packages there, and you can use those feed locations as your URI references.
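
As a rough sketch, an internal feed can be registered with PowerShellGet so that modules resolve from it instead of the public gallery (the feed URL below is a placeholder for whatever your organization exposes):

# Register an internal feed as a PowerShell repository.
# The source location is a placeholder; use your organization's feed address.
Register-PSRepository -Name 'InternalModules' `
    -SourceLocation 'https://pkgs.dev.azure.com/contoso/_packaging/ps-modules/nuget/v2' `
    -InstallationPolicy Trusted

# Install a vetted module from the internal feed
Install-Module -Name ActiveDirectoryDsc -Repository 'InternalModules'
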
There are a few native ways to upload modules to an Azure Automation account. You can upload them manually through the Azure Portal, as shown in this picture:

Uploading modules to the Azure Portal

However, being DevOps Engineers, we want to automate this process as much as possible. One way to do this is via ARM Templates, like the ones we used in the previous module.
Below is an example of how to add a 3rd party module to your Azure Automation Account via ARM Templates:

{
  "name": "[concat(parameters('automationAccountName'), '/', parameters('name'))]",
  "type": "Microsoft.Automation/automationAccounts/modules",
  "apiVersion": "2015-10-31",
  "location": "[resourceGroup().location]",
  "properties": {
    "isGlobal": false,
    "sizeInBytes": 0,
    "contentLink": {
      "uri": "uri to package"
    }
  }
}

If you are deploying from the PowerShell Gallery, your URI would look something like this:

"uri": "[concat('https://www.powershellgallery.com/api/v2/package/', parameters('name'), '/', parameters('version'))]"

Alternatively, you can script the import of modules using the New-AzAutomationModule cmdlet in a PowerShell script.
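
A minimal sketch of that approach might look like the following; the resource group, Automation account, module name, and version are placeholders:

# Import a module from the PowerShell Gallery into an Azure Automation account.
# The resource group, account, module name, and version below are placeholders.
$params = @{
    ResourceGroupName     = 'rg-automation'
    AutomationAccountName = 'aa-dsc'
    Name                  = 'ActiveDirectoryDsc'
    ContentLinkUri        = 'https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/6.2.0'
}
New-AzAutomationModule @params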

Oddly enough, there is sometimes some difficulty determining the correct ContentUri to use in both the ARM and PowerShell cases. You can find the correct one by navigating to the module’s page in the PowerShell Gallery, adding /api/v2 to the URL, and replacing packages (plural) with package (singular).
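
As a small illustration (the module name and version are arbitrary), the transformation can even be done with a simple string replace:

# Gallery page URL as seen in the browser (placeholder module and version)
$galleryUrl = 'https://www.powershellgallery.com/packages/ActiveDirectoryDsc/6.2.0'

# Insert /api/v2 and switch 'packages' to 'package' to get the ContentUri
$contentUri = $galleryUrl -replace '/packages/', '/api/v2/package/'
# Result: https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/6.2.0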

Add the /api/v2 to a URL

Conclusion

3rd Party Modules are a great way for developers to speed up development and productivity. If you are inclined to help in the development of these modules, head over to GitHub and contribute!

In January, AIS’ Steve Suing posted a great article titled “What to Know When Purchasing a COTS Product and Moving to the Cloud.” In his post, Steve discusses some things to consider such as security, automation, and integration. Springboarding off of those topics for this blog, I am going to discuss another major consideration: legacy data.

Many COTS implementations are done to modernize a business function that is currently being performed in a legacy application. There are some obvious considerations here. For example, when modernizing a legacy app, you might ask:

  • How many of the business needs are met purely by out-of-the-box functionality?
  • How much effort will be necessary to meet the rest of your requirements in the form of configuration and customization of the product or through changing your business process?

However, legacy data is a critical consideration that should not be overlooked during this process.

Dealing with Data When Retiring a Legacy System

If one of the motivators for moving to a new system is to realize cost savings by shutting down legacy systems, that means the legacy data likely needs to be migrated. If there is a considerable amount of data and a long legacy history, the effort involved in bringing that data into the new product should be carefully considered. This might seem trivial, with most products at least proclaiming to have a multitude of ways to connect to other systems, but recent experience has shown me that the cost of not properly verifying the level of effort can far exceed the savings provided by a COTS product. These types of questions can help avoid a nightmare scenario:

  1. How will the legacy data be brought into the product?
  2. How much transformation of data is necessary, and can it be done with tools that already exist?
  3. Does the product team have significant experience moving similar legacy systems (in complexity, amount of data, industry space, etc.) to the product?

Answering the first and second questions usually starts by sitting down with the product team and having them outline the common processes used to bring legacy data into the product. During these conversations, be sure both sides have technical folks in the room. This will ensure the conversations have the necessary depth to uncover how the process works.

Once your team has a good understanding of the migration methods recommended by the product team, start exploring some actual use cases related to your data. After exploring some of the common scenarios to get a handle on the process, jump quickly into some of the toughest use cases. Sure, the product might be able to bring in 80% of the data with a simple extract, transform, load (ETL), but what about that other 20%? Often legacy systems were created in a time long, long ago, and sometimes functionality will have been squished into them in ways that make the data messy. Also, consider how the business processes have changed over time and how the migration will address legacy data created as rules changed over the history of the system. Don’t be afraid to look at that messy data and ask the tough questions about it. This is key.

CONVERTING LEGACY DATA TO BIG DATA PLATFORMS
Accelerate your data migration project with our free white paper download.

Your Stakeholders During Legacy Data Migration

To decrease confirmation bias (the tendency to search for, interpret, and focus on information that confirms this is your silver bullet), be sure those involved are not too invested in the product; that way they will ask the hard questions. Bring in a good mix of technical and business thought leaders. For instance, challenge the DBAs and SMEs to work together to come up with things that simply will not work. Be sure to set up an environment in which people are rewarded for coming up with blockers and not demotivated by being seen as difficult. Remember, every blocker you come up with in this early phase could be the key to making better decisions and will likely have a major downstream impact.

The product of these sessions should be a list of tough use cases. Now it’s time to bring in the product team again. Throw the use cases up on a whiteboard and see how the product team works through them. On your side of the table, be sure to include people who will be responsible for the effort of bringing the new system to life. With skin in the game, these people are much less likely to allow problems to be glossed over, and they will drive a truly realistic conversation. Because this involves the tough questions, the exercise will likely take multiple sessions. Resist any pressure to get this done quickly, keeping in mind that a poor decision now can have ripple effects that last for years.

After some of these sessions, both sides should have a good understanding of the legacy system, the target product, and some of the complexities of meshing the two. With that in place, ask the product team for examples of similar legacy systems they have tackled. This question should be asked up front, but it really cannot be answered intelligently until at least some of the edge cases of the legacy system are well understood. While the product team might not be able to share details due to non-disclosure agreements, they should be able to speak in specific enough terms to demonstrate experience. Including those involved in the edge case discovery sessions is a must. With the familiarity gained during those sessions, those are the people that are going to be most likely to make a good call on whether the experience being presented by the product team relates closely enough to your needs.

Using Lessons Learned to Inform Future Data Migration Projects

Have I seen this process work? Honestly, the answer is no. My hope is that the information I have presented, based on lessons learned from past projects, helps others avoid some of the pitfalls I have faced and the ripple effects that could lead to huge cost overruns. It’s easy to lose sight of the importance of asking hard questions up front and continuing to do so in light of the pressure to get started. If you feel you are giving in to such pressure, just reach out to future you and ask them for some input.

And a final thought. As everyone knows, for complex business processes, COTS products don’t always perfectly align to your legacy system. There are two ways to find alignment. Customizing the system is the obvious route, although it often makes more sense to change business processes to align with a product you are investing in rather than trying to force a product to do what it was not designed for. If you find that this is not possible, perhaps you should be looking at another product. Keep in mind that if a product cannot be found that fits your business processes closely enough, it might make more financial sense to consider creating your own system from the ground up.

Sound Familiar?

It’s not a sentiment you would expect from most IT decision-makers. However, it’s something we hear from an increasing number of organizations.

The benefits of a well-thought-out cloud transformation roadmap are not lost on them.

  • They know that, in an ideal world, they ought to start with an in-depth assessment of their application portfolio, in line with the best practice – “migrate your capabilities, not apps or VMs”.
  • They also realize the need to develop a robust cloud governance model upfront.
  • And ultimately, they understand the need to undertake an iterative migration process that takes into account “organizational change management” best practices.

At the same time, these decision-makers face real challenges with their existing IT infrastructure that simply cannot wait months and years for a successful cloud transformation to take shape. They can’t get out of their on-premises data centers soon enough. This notion isn’t limited to organizations with fast-approaching Data Center (DC) lease renewal deadlines or end of support products, either.

So, how do we balance the two competing objectives:

  • Immediate need to move out of the DC
  • Carefully crafted long-term cloud transformation

A Two-Step Approach to Your Cloud Transformation Journey

From our experience with a broad range of current situations, goals, and challenges, we recommend a two-step cloud transformation approach that addresses both your immediate challenges and the organization’s long-term vision for cloud transformation.

  1. Tactical “Lift-n-Shift” to the Cloud – As the name suggests, move the current DC footprint as is (VMs, databases, storage, network, etc.) to Azure
  2. Strategic Cloud Transformation – Once operational in the cloud, incrementally and opportunistically move parts of your application portfolio to higher-order Azure PaaS/cloud-native services

Tactical “Lift-n-Shift” to the Cloud

Lift n Shift Approach to Cloud Transformation

On the surface, step #1 above may appear wasteful. After all, we are duplicating your current footprint in Azure. But keep in mind that step #1 is designed for completion in days or weeks, not months or years. As a result, the duplication is minimized. At the same time, step #1 immediately puts you in a position to leverage Azure capabilities, giving you tangible benefits with minimal to no changes to your existing footprint.

Here are a few examples of benefits:

  • Improve the security posture – Once you are in Azure, you tap into security capabilities such as intrusion detection and denial-of-service attack protection simply by being in Azure. Notice that I deliberately did not cite Security Information and Event Management (SIEM) tools like Azure Sentinel, since technically you can take advantage of Azure Sentinel for on-premises workloads as well.
  • Replace aging hardware – Your hardware may be getting old but isn’t old enough for a Capex-powered refresh. Moving your VMs to Azure decouples you from the underlying hardware. “But won’t that be expensive, since you are now paying by usage per minute?” you ask. Not necessarily, and certainly not in the long run. Consider options like Reserved Instance (RI) pricing, which can offer up to an 80% discount based on a one- or three-year commitment.

Furthermore, you can combine RI with Azure Hybrid Benefit (AHUB), which provides discounts for licenses you already own. Finally, don’t forget to take into account the savings from decreased needs for power, networking, real estate, and the cost of resources to manage all the on-premises assets. Even if you can’t get out of the DC lease completely, you may be able to negotiate a modular reduction of your DC footprint. Please refer to Gartner research suggesting that, over time, the cloud can become cost-effective.

AMP Move out of Data Center

Source – https://blogs.gartner.com/marco-meinardi/2018/11/30/public-cloud-cheaper-than-running-your-data-center/

  • Disaster Recovery (DR) – Few organizations have a DR setup that is conducive to ongoing DR tests. Having an effective DR plan is one of the most critical responsibilities of IT. Once again, since geo-replication is innate to Azure, your disks are replicated to an Azure region that is at least 400 miles away, by default. Given this, DR is almost out-of-the-box.
  • Extended lease of life on out-of-support software – If you are running out-of-support software such as Windows Server 2008 or SQL Server 2008, moving to Azure extends the security updates for up to three years from the “end of support” date.
  • Getting out of the business of “baby-sitting” database servers – Azure SQL Managed Instance offers you the ability to take your existing on-premises SQL Server databases and move them to Azure with minimal downtime. Once your database is an Azure SQL Managed Instance, you don’t have to worry about patching and backups, thereby significantly reducing the cost of ownership.
  • Take baby steps towards automation and self-service – Self-service is one of the key focus areas for most IT organizations. Once again, since every aspect of Azure is API driven, organizations can take baby steps towards automated provisioning (see the short sketch after this list).
  • Get closer to a data lake – I am sure you have heard the quote “AI is the new electricity”. We also know that Artificial Intelligence (AI) needs lots and lots of data to train the Machine Learning (ML) algorithms. By moving to Azure, it is that much easier to capture the “data exhaust” coming out of the applications in a service like Azure Data Lake. In turn, Azure Data Lake can help turn this data into intelligence.
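
As a minimal illustration of that API-driven provisioning (the names, location, image, and size below are placeholders, not recommendations), a resource group and a VM can be created with a few lines of Az PowerShell:

# The names, location, image, and size are placeholders for illustration only.
New-AzResourceGroup -Name 'rg-selfservice-demo' -Location 'eastus'

New-AzVM -ResourceGroupName 'rg-selfservice-demo' `
         -Name 'vm-demo01' `
         -Location 'eastus' `
         -Image 'Win2019Datacenter' `
         -Size 'Standard_D2s_v3' `
         -Credential (Get-Credential)   # prompts for the local admin credential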

Strategic Cloud Transformation

Strategic Cloud Transformation

Once you have completed step #1 by moving your on-premises assets to the cloud, you are now in a position to undertake continuous modernization efforts aligned to your business priorities.

Common approaches include:

  • Revise – Capture application and application tiers “as-is” in containers and run on a managed orchestrator like Azure Kubernetes Service. This approach requires minimal changes to the existing codebase. For more details of this approach, including a demo, read Migrate and Modernize with Kubernetes on Azure Government.
  • Refactor – Modernize by re-architecting to target Platform as a Service (PaaS) and “serverless” technologies. This approach requires more significant recoding to target PaaS services but allows you to take advantage of cloud provider managed services. For more information, check out our “Full PaaS” Approach to Modernizing Legacy Apps.
  • Rebuild – Complete rewrite of the applications using cloud-native technologies like Kubernetes, Envoy, and Istio. Read our blog, What Are Cloud-Native Technologies & How Are They Different from Traditional PaaS Offerings, for more information.
  • Replace – Substitute an existing application, in its entirety, with Software as a Service (SaaS) or an equivalent application developed using a no-code/low-code platform.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

The following table summarizes the various approaches for modernization in terms of factors such as code changes, operational costs, and DevOps maturity.

Compare App Modernization Approaches

Azure Migration Program (AMP)

Microsoft squarely aligns with this two-step approach. At the recent Microsoft partner conference #MSInspire, Julia White announced AMP (Azure Migration Program).

AMP brings together the following:

Wrapping Up

A two-step migration offers a programmatic approach to unlock the potential of the cloud quickly. You’ll experience immediate gains from a tactical move to the cloud and long-term benefits from a strategic cloud transformation that follows. Microsoft programs like AMP, combined with more than 200 Azure services, make this approach viable. If you’re interested in learning more about how you can get started with AMP, and which migration approach makes the most sense for your business goals, reach out to AIS today.

GET YOUR ORGANIZATION ON THE RIGHT TRACK TO TRANSFORMATION. CONTACT AIS TODAY TO DISCUSS YOUR OPTIONS.

PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it’s not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct – Rather than speaking in terms of VMs, storage, firewall rules, etc., cloud-native is about application-specific constructs, whether it is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability – Applications can run on any CNCF-certified cloud, as well as on-premises and on edge devices. The API surface is exactly the same.
  3. Cost efficiency – By densely packing the application components (or containers) onto the underlying cluster, the cost of running an application drops significantly.
  4. Extensibility model – A standards-based extensibility model allows you to tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic – Cloud-native architecture can support a wide variety of languages and frameworks including .NET, Java, Node, etc.
  6. Scale your ops teams – Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure. This allows your ops team to scale much more efficiently.
  7. Consistent and “decoupled” – In addition to greater consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments. Furthermore, these constructs also help decouple developers from the underlying layers (cluster, kernel, and hardware) shown in the diagram below.
  8. Declarative model – Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing. This means that any deviation from the “desired state” is automatically “healed” by the underlying system. Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum – As stated earlier, the community momentum behind CNCF is unprecedented. Kubernetes is the #1 open-source project in terms of contributions. In addition to Kubernetes and Prometheus, there are close to 500 projects that have collectively attracted over $5B of venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production has gone up by 200% since December 2017.
  10. Ticket to DevOps 2.0 – Cloud-native brings the well-recognized benefits of what is being termed “DevOps 2.0”: hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.

Now that we understand the key benefits of cloud-native technologies, let us compare them to a traditional PaaS offering:

Attribute | Traditional PaaS | Cloud-Native as a Service
Portability | Limited | Advanced
Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs including Helm, network and security policies
Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention)
Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (however, cloud-native applications can be integrated with PaaS database offerings through the service catalog)
Extensibility | Limited | Advanced (extensibility includes Container Network Interface, Container Runtime Interface)

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of CNCF, having joined it back in 2017 as a platinum member. Since then, they have made significant investments in a CNCF-compliant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering – think of AKS as a PaaS solution that is also CNCF compliant.
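
As a rough sketch of how little it takes to stand up such a managed cluster (the resource group, cluster name, and node count are placeholders, and exact parameters may vary with the Az.Aks module version):

# Resource group, cluster name, and node count are placeholders.
New-AzResourceGroup -Name 'rg-aks-demo' -Location 'eastus'

New-AzAksCluster -ResourceGroupName 'rg-aks-demo' `
                 -Name 'aks-demo' `
                 -NodeCount 3 `
                 -GenerateSshKey

# Merge the cluster credentials into the local kubeconfig for kubectl
Import-AzAksCredential -ResourceGroupName 'rg-aks-demo' -Name 'aks-demo'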

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, Azure Files, etc. Finally, offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience of working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also thinking around managing IoT Edge deployments right from within the Kubernetes environment. Finally, “lift and shift to containers” is a use case getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, “A Modernize-by-Shifting App Modernization Approach,” for more details!

Cloud-Native Scenarios

FREE HALF DAY SESSION: APP MODERNIZATION APPROACHES & BEST PRACTICES
Transform your business into a modern enterprise that engages customers, supports innovation, and has a competitive advantage, all while cutting costs with cloud-based app modernization.

Since PaaS and Windows Azure have both been in the news recently, thanks to a favorable report from Gartner, I thought it might be good timing to revisit a whitepaper I wrote on PaaS. 

There’s been a lot of talk about the different cloud-based services available today, including Infrastructure as a Service (IaaS), Software as a Service (SaaS) and Platform as a Service (PaaS). While each of these services is unique, PaaS stands out from the mix. This is not to suggest that PaaS is somehow better than IaaS; that would be an improper comparison. In fact, as shown in the diagram below, PaaS simply builds on the capabilities offered by IaaS.

But if you are a developer, IT shop or an ISV responsible for building, deploying and maintaining solutions, leveraging PaaS is where you reap the maximum cloud-computing benefits. Read More…

AIS will be presenting a new series of Azure ‘n’ Action Café webinars in early 2014. We will have expert speakers offering new and exciting content in a convenient “lunch and learn” online format, one Wednesday a month, from 12 p.m. to 1 p.m. EST.

Our first presentation date is Wednesday, January 29. You won’t want to miss this session, as we’ll cover both updates to Azure and new ways to extend your data center.

In this session we will share an example Platform as a Service (PaaS) Web Role workload in Azure.  This workload will help you understand how Azure constructs can be used in support of different scenarios.

The workload we’ll discuss makes use of the following Constructs:

  • A private data center extended into multiple Azure Regional Data Centers (East Coast / West Coast) with IPSEC tunnels
  • The on-premises AD / DNS infrastructure extended into Azure IaaS
  • An on-premises SQL Server 2012 database supporting an AlwaysOn Availability Group (synchronous) with a local primary and a remote secondary located in Azure IaaS
  • The PaaS Web Role Load Balanced across data centers using Azure Traffic Manager

Click here to register!

Yesterday, Microsoft announced the general availability of its offering of Infrastructure as a Service (IaaS). They join an already-crowded market of IaaS providers, but this offering gives all companies the ability to offload workloads that have traditionally run in a company data center to the cloud. Welcome, Microsoft — the water is fine.

This announcement also represents a major chunk of Microsoft’s family of Azure offerings…and in my opinion, a stepping stone many companies simply must take in moving out of the traditional data center and into the cloud.  The following diagram shows the stepping stones out of the traditional data center:

Read More…

In a previous blog post I discussed Windows Azure PaaS / IaaS hybrid scenarios. Together with my colleague Jack O’Connell (Infrastructure Specialist extraordinaire), we set up each of the four scenarios outlined in the previous post including:

  • Using Windows Azure Virtual Network to provision a VPN connecting our on-premises infrastructure with a Windows Azure datacenter.
  • Setting up front-end and back-end subnets.
  • Provisioning a set of Azure IaaS Virtual Machines and Azure Web Roles.
  • Installing the System Center Monitoring Pack for Windows Azure Applications on Azure-based machines.
  • Installing System Center Operations Manager on-premises in order to manage Azure-based resources.

Watch the following video for a quick walkthrough of the scenarios in action:

In the past, I have written about the benefits of the Platform as a Service (PaaS) style of applications. While I continue to believe that PaaS offers the best ROI for hosting custom applications in the cloud, there are a number of scenarios where adding elements of Infrastructure as a Service (IaaS) to a PaaS solution can help alleviate some of the limitations that have prevented the adoption of PaaS. In this blog post we will look at a few compelling scenarios that are enabled by combining PaaS with the recently announced IaaS features within a Windows Azure Cloud Service. Read More…