Introduction

PowerShell DSC is possibly one of the most potent configuration managers for Windows operating systems. Even with the increase in popularity of Ansible and SaltStack, PowerShell DSC remains supreme for enforcing desired state on a Windows VM; Ansible itself includes the win_dsc module, which allows Ansible to run PowerShell DSC. In this blog post, we will dive deeper into one of PowerShell DSC's most powerful aspects, 3rd Party Resources, and how they interact with Azure Automation.

3rd Party Resources are PowerShell DSC modules created by the community. Any PowerShell community member can publish a module, and there are tons of modules out there to choose from. Modules are kept in repositories; the best-known (and default) repository is the PowerShell Gallery, run by Microsoft, where the community publishes its modules. Modules in the PSGallery can be downloaded and installed with the PowerShellGet module.
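For example, pulling a community DSC module down from the PSGallery takes only a couple of PowerShellGet commands (ActiveDirectoryDsc and the CurrentUser scope are used here purely as an illustration):

# Search the PowerShell Gallery for a community DSC module
Find-Module -Name ActiveDirectoryDsc -Repository PSGallery

# Install it for the current user
Install-Module -Name ActiveDirectoryDsc -Scope CurrentUser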

As developers and infrastructure engineers, we have many different reasons to script the services we are creating. Often, instead of developing behavior or scripts from scratch, it is much easier to leverage the work others have done to expedite a task's completion. 3rd Party Modules allow for easily repeatable code that can become production-ready through collaboration.

Often, DSC configurations can become complicated. Engineers can be asked to do many things, from creating an Azure AD domain, to configuring OMS solutions associated with a VM, to interacting with non-native Azure products such as Splunk.

These tasks may all seem daunting, but don't fret! Members of the PowerShell community have dealt with these problems and many others, and often you will find a third-party module that does the work for you.

Here is an example of a 3rd Party Resource, ActiveDirectoryDsc, which helps with the promotion, configuration, and management of Active Directory.
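As a rough sketch of what consuming that resource looks like (the domain name, credential handling, and node name below are placeholders, not taken from the original post), a minimal configuration might be:

# Minimal sketch: promote a server to a domain controller with ActiveDirectoryDsc
Configuration NewDomain
{
    param
    (
        [Parameter(Mandatory)]
        [pscredential]
        $DomainCredential   # placeholder credential parameter
    )

    Import-DscResource -ModuleName ActiveDirectoryDsc

    Node 'localhost'
    {
        ADDomain 'contoso.com'
        {
            DomainName                    = 'contoso.com'
            Credential                    = $DomainCredential
            SafemodeAdministratorPassword = $DomainCredential
        }
    }
}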

Azure Automation is a robust PaaS offering from Azure that allows for a cloud-based DSC pull server. Within Azure Automation, it is possible to add both custom modules that the user develops and third-party modules available in any hosted source.
Note: organizations in locked-down environments can maintain their own repository of PowerShell modules that has been vetted by their InfoSec team. It is possible to deploy your own artifact repository using the Azure DevOps offering shown here; it allows an internal team to publish its own versions of packages, which you can then use as your URI references.
There are a few ways to add modules to an Automation Account natively. You can upload them manually through the Azure Portal, as shown in this picture:

Uploading modules to the Azure Portal

However, being DevOps Engineers, we want to automate this process as much as possible. One way to do this is via ARM Templates, like the ones we used in the previous module.
Below is an example of how to add a 3rd party module to your Azure Automation Account via ARM Templates:

{
  "name": "[concat(parameters('automationAccountName'), '/', parameters('name'))]",
  "type": "Microsoft.Automation/automationAccounts/modules",
  "apiVersion": "2015-10-31",
  "location": "[resourceGroup().location]",
  "properties": {
    "isGlobal": false,
    "sizeInBytes": 0,
    "contentLink": {
      "uri": "uri to package"
    }
  }
}

If you are deploying from the PowerShell Gallery, your URI would look something like this:

"uri": "[concat('https://www.powershellgallery.com/api/v2/package/', parameters('name'), '/', parameters('version'))]"

Alternatively, you can script the import of modules using the New-AzAutomationModule cmdlet in a PowerShell script.
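A minimal sketch of that approach (the resource group, Automation Account, module, and version below are placeholders):

# Import a 3rd party module from the PowerShell Gallery into an Automation Account
$moduleName    = 'ActiveDirectoryDsc'   # placeholder module
$moduleVersion = '6.2.0'                # placeholder version

New-AzAutomationModule `
    -ResourceGroupName 'rg-automation' `
    -AutomationAccountName 'aa-dsc-pull' `
    -Name $moduleName `
    -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$moduleName/$moduleVersion"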

Oddly enough, there is sometimes some difficulty working out the correct content link URI to use in both the ARM and PowerShell cases. You can find it by navigating to the module's page in the PowerShell Gallery, adding /api/v2 to the URL, and replacing packages (plural) with package (singular).
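For instance (ActiveDirectoryDsc and the version shown are purely illustrative), the gallery page URL and the corresponding package URI look like this:

# Gallery page (what your browser shows):
https://www.powershellgallery.com/packages/ActiveDirectoryDsc/6.2.0

# Package URI (what Azure Automation needs):
https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/6.2.0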

Add the /api/v2 to a URL

Conclusion

3rd Party Modules are a great way for developers to speed up development and boost productivity. If you are inclined to help in the development of these modules, head over to GitHub and contribute!

The Dockerfile development workflow is fantastic once you are familiar with the documentation and have spent a lot of time troubleshooting. However, if you are writing your first Dockerfile, it may be challenging to identify and debug certain issues.

During my first foray into the world of containers, I was containerizing a React application. As I got comfortable copying the correct files into the image and making sure I was running things correctly, I ran the "docker build ." command multiple times. Suddenly, I noticed my computer warning me that it was running out of storage! In this article, I will be discussing the dangling images you create while iterating on your Dockerfile, so we can finally find out what is taking up so much space on your machine!

What is a Dockerfile?

To quickly recap, a Dockerfile allows you to customize any image to fit your needs. You can add packages, run servers; the possibilities are endless! You start with a base image, e.g. Microsoft Container Registry's official .NET Core 3.1 image, then describe a series of steps in a text document that is parsed by the Docker Engine. The Docker daemon reads the Dockerfile and performs whatever tasks you have described in it, and eventually you end up with a container image that runs your application. Each step in the Dockerfile is executed in an intermediate container, and the layers those intermediate containers produce make up your final image; this is what allows Docker to cache so efficiently.

Here is the Dockerfile I will be running; the application I am using is the simple example found on the React site.

Dockerfile example

Inside my console, I can run “docker build .” and everything will run completely fine, as shown here:

removing intermediate containers

You will notice that the image I have successfully built is identified only by a GUID-like ID, because I did not give my build a tag (supplied with the -t flag on the docker build command). I ran "docker build ." one more time, and now I have two images of 1.15GB each.
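Tagging the build avoids this; a quick sketch of the same command with a tag (the image name is just a placeholder):

# Build the image and give it a repository name and tag
docker build -t my-react-app:dev .

# Rebuilding with the same tag re-points the tag at the new image
# instead of leaving an untagged (dangling) image behind
docker build -t my-react-app:dev .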

GUID

However, this isn't the full story – the intermediate images are also kept along with the images themselves. You can see them with the "docker images -a" command.

Docker images -a

As you can see here, I have consumed roughly 8 GB of storage building these images over approximately fifteen minutes of development. During a long project (a microservices architecture running on containers, for example) it is easy to accumulate a significant amount of dangling images eating up storage. The dangling images themselves can be listed with the command "docker images -f dangling=true".

Docker images prune

In this case, since I omitted the -t flag initially, my images are essentially dangling images; they are not associated with any repository.

Finally, to clean up my dangling images, all I need to run is "docker image prune".
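Putting the two commands together, a quick cleanup pass looks roughly like this:

# List only the dangling (untagged) images
docker images -f dangling=true

# Remove them; add -a to remove all unused images, not just dangling ones
docker image prune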

Docker images

Developing with Dockerfiles can be difficult, and it can be daunting to step into the world of containers. Remember to prune often, and make sure you tag your images!

During a recent design session at a client site, our team got to participate in something cool: creating a custom DSL (Domain Specific Language) in YAML for an automation framework we wanted to build. This blog provides a high-level overview. The client wants to use Azure Automation to create a self-healing framework for their infrastructure. The environment is a mixture of both PaaS and IaaS, with certain VMs being used to host app and database servers. Unfortunately, because of potential performance issues, it is important to regularly perform healing actions on the environment, which so far has been done manually.

Now you may be asking, "Why don't they use something like Chef or Puppet?" These technologies were either still being on-boarded or were not available. You may also ask, "Well, if you're going to use PowerShell, just use PowerShell DSC!" While I agree that PowerShell DSC is powerful and could potentially power the engine for this application (explained below), DSC itself is not very user-friendly. One of the major goals for this session was to allow any user to read or edit the definitions, and YAML is a far more approachable option for that.

The design session itself was fascinating. I had never spent time creating my own configuration language, complete with definitions and structure, and it was a great learning experience. I had used YAML briefly when demoing things like Ansible for personal projects, but never directly to solve a problem; YAML had always been just another markup language to me. This exercise, however, showed me the power of the language itself. The problem we ended up solving with the snippet below was creating an abstraction layer for Azure Monitor Alerts.

After the session completed, we were left with something like this:
Create an Abstraction Layer

Now what?

We must take this YAML file and convert it into an Azure Monitor Alert. The first problem we ran into is that the file is YAML: how can I take these configuration values and use them in PowerShell? Here is where Cloudbase's powershell-yaml module comes into play. PowerShell was written on top of, and created as an extension for, the .NET Framework, so the folks at Cloudbase created the wonderful powershell-yaml, a module that wraps the popular .NET library YamlDotNet.

With powershell-yaml you can create something like this:
Converting the yaml File

In this example, I am not only converting the YAML file into a JSON file, but also returning the YAML as a PowerShell object, which I find much easier to work with because of dot notation.
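The original post shows this as a screenshot; a minimal sketch of the same idea, assuming a file named alerts.yaml, might look like:

# Requires the community powershell-yaml module
Install-Module -Name powershell-yaml -Scope CurrentUser
Import-Module powershell-yaml

# Read the raw YAML and convert it to a PowerShell object
$yamlRaw = Get-Content -Path .\alerts.yaml -Raw
$config  = ConvertFrom-Yaml $yamlRaw

# Optionally round-trip it to JSON for inspection
$config | ConvertTo-Json -Depth 10 | Set-Content -Path .\alerts.json

# Return the object so callers can use dot notation
return $config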

In this case, you can now reference variables such as this:
Corresponding Powershell cmdlets

I was able to follow up with the corresponding PowerShell cmdlets available in the Az Modules and programmatically create Azure Monitor Alerts using YAML.
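As a rough sketch only (the property names on $config are invented for illustration, since the actual DSL belongs to the client), the flow from the YAML object to an Azure Monitor alert could look something like this:

# Invented example properties; the real field names came out of the design session
$alert = $config.alerts[0]

$criteria = New-AzMetricAlertRuleV2Criteria `
    -MetricName $alert.metricName `
    -Operator GreaterThan `
    -Threshold $alert.threshold `
    -TimeAggregation Average

Add-AzMetricAlertRuleV2 `
    -Name $alert.name `
    -ResourceGroupName $alert.resourceGroup `
    -TargetResourceId $alert.targetResourceId `
    -Condition $criteria `
    -WindowSize (New-TimeSpan -Minutes 5) `
    -Frequency (New-TimeSpan -Minutes 1) `
    -Severity 3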

The eventual implementation will look roughly like this:
Azure Alerts

Each Azure alert will trigger a webhook that is received by the engine running in Azure Automation. The engine then applies the business logic to determine whether the piece of infrastructure in question needs healing, and if all the previously defined conditions are met, the automation runbook performs the recovery action.
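A skeleton of such a runbook (the healing check and recovery action are hypothetical placeholders, not the client's actual logic) might look like this:

# Azure Automation runbook skeleton triggered by an alert webhook
param
(
    [Parameter(Mandatory = $false)]
    [object] $WebhookData   # populated automatically when the runbook is started by a webhook
)

if ($null -eq $WebhookData) { throw 'This runbook is meant to be started by a webhook.' }

# The alert payload arrives as JSON in the request body
$payload    = $WebhookData.RequestBody | ConvertFrom-Json
$resourceId = $payload.data.essentials.alertTargetIDs[0]   # common alert schema field

# Hypothetical helpers standing in for the engine's business logic
if (Test-NeedsHealing -ResourceId $resourceId) {
    Invoke-RecoveryAction -ResourceId $resourceId
}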

The exercise itself was eye-opening. It made me much more comfortable with designing a solution for our specific scenario rather than waiting around for a product or framework that would solve the problem for us. This PoC also showed me the power of YAML and how it can turn something incredibly monotonous, like configuration values, into part of a robust solution.

As developers, we spend a lot of time developing APIs, sometimes to expose data we've transformed and sometimes to ingest data from other sources. At the same time, more and more companies are jumping into the realm of API Management: Microsoft, Google, MuleSoft, and Kong all have products that provide this functionality. With this much investment from the big players in the tech industry, API management is clearly a priority. So why would anyone want to use an API Management tool?

The answer is simple: It allows you to create an API Gateway that you can load all your APIs into, providing a single source to query and curate. API Management makes life as an admin, a developer, and a consumer easier by providing everything for you in one package.

Azure API Management

What does Azure API Management provide? Azure API Management (APIM) is a cloud-based PaaS offering available in both commercial Azure and Azure Government. APIM provides a one-stop shop for API governance, with the ability to create products, enforce policies, and utilize a robust developer portal.

Not only can API Management integrate seamlessly with your existing Azure infrastructure, but it can also manage APIs that exist on-prem and in other clouds. APIM is also available in both the IL4 and IL5 environments in Azure Government, which allows for extensibility and management for those working in the public sector.

APIM leverages a few key concepts to provide its functionality to you as a developer, including:

  • Products
  • Policies
  • Developer Portal

From providing security to leveraging rate-limiting and abstraction, Azure API Management does it all for API consolidation and governance in Azure. Any API can be ingested, and it gets even easier when APIs follow the OpenAPI Format.
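For instance, ingesting an API from its OpenAPI definition can be scripted with the Az.ApiManagement cmdlets; the resource group, service name, URL, and path below are placeholders:

# Get a context for an existing APIM instance
$apimContext = New-AzApiManagementContext -ResourceGroupName 'rg-apim' -ServiceName 'contoso-apim'

# Import an API from its OpenAPI definition and expose it under the /orders path
Import-AzApiManagementApi -Context $apimContext `
    -SpecificationFormat OpenApi `
    -SpecificationUrl 'https://example.com/openapi/orders.json' `
    -ApiId 'orders-api' `
    -Path 'orders'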

What Are Products?

Products are a layer of abstraction provided inside APIM. Products allow you to create subsets of the APIs already ingested into the solution, so the same API can be included in several products while access to each collection is restricted independently. This level of compartmentalization allows you to not only separate your APIs into logical buckets but also enforce rules on each product separately, providing one more layer of control.

Product access is very similar to Azure RBAC, with different groups created inside the APIM instance. These groups are yet another way for APIM admins to encapsulate and protect their APIs, allowing them to place users already associated with the APIM instance into separate subsets. Users can also be members of multiple groups, so admins can make sure the right people have access to the right APIs stored in their APIM instance.
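A rough sketch of wiring an API into a product with those same cmdlets (product details and API ID are placeholders) might look like:

# Create a product that requires a subscription key
$product = New-AzApiManagementProduct -Context $apimContext `
    -Title 'Billing APIs' `
    -Description 'APIs used by the billing team' `
    -SubscriptionRequired $true `
    -State Published

# Add a previously imported API to the product
Add-AzApiManagementApiToProduct -Context $apimContext `
    -ProductId $product.ProductId `
    -ApiId 'orders-api'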

What Are Policies?

Policies are APIM's way of enforcing restrictions and providing a more granular level of control. There is an entire breadth of policies available in APIM, ranging from simply disallowing use of an API after it has been called five times, to authentication, logging, caching, and transforming requests or responses between JSON and XML. Policies are perhaps the most powerful feature of APIM and drive the control that everyone wants and needs. Policies are written in XML and can be easily edited within the APIM XML editor. Policies can also leverage C# 7 syntax, which brings the power of the .NET Framework to your APIM governance.
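As an illustration (the limit values are arbitrary, and the API ID is a placeholder), a simple inbound rate-limit policy can be applied from PowerShell like this:

# A policy that allows 5 calls per 60 seconds per subscription (illustrative values)
$policyXml = @"
<policies>
  <inbound>
    <base />
    <rate-limit calls="5" renewal-period="60" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
"@

Set-AzApiManagementPolicy -Context $apimContext -ApiId 'orders-api' -Policy $policyXml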

What Is the Developer Portal?

The Azure API Management Developer Portal is an improved version of the Swagger documentation that’s generated when you use the OpenAPI spec. The Developer Portal provides an area for developers to readily see APIs, products, and associated applications. The Portal also provides sample request bodies (no more guessing API request structures!) and responses, along with code samples in many different languages.

Finally, the portal also allows you to try API calls with customized request bodies and headers, so you have the ability to see exactly what kind of call you want to make. Along with all that functionality, you can also download your own copy of the OpenAPI Spec for your API after it’s been ingested into your instance.

Why Should I Use APIM?

Every business should be using some form of API Management. You will gain a level of control that was previously unavailable: deploying an API gateway adds an extra layer of abstraction that allows much tighter control of your APIs. Once an API has been ingested, APIM provides many additional capabilities.

First, you can match APIs to products, providing a greater level of compartmentalization. Second, you can add different groups to each product, with groups being subsets of users (e.g. back-end devs, billing devs, etc.). Third, you automatically generate a robust developer portal, which provides all of the functionality of the Swagger portal with added features, such as code snippets. Finally, APIM integrates fully with Application Insights in commercial Azure, providing access to a world-class logging and visualization tool.

Azure API Management brings power to the user, and no API should be left out.