Have you been in a situation where an Azure Resource Manager (ARM) template with Azure Blueprints is the only option available for Infrastructure as Code (IaC) deployment? Have you realized that certain operations during deployment are not possible with an ARM template? Have you been in a situation where PowerShell or the Azure CLI could get the job done, but there was no way to inject that script into the ARM template? Maybe you have felt ARM-disabled at times like these. If you answered yes to at least one of these questions, this blog is for you. Let’s dive into a cool feature, ‘Deployment Scripts in ARM Templates,’ which helped me overcome these hurdles!

Azure managed disks have Server-Side Encryption (SSE) with a Platform-Managed Key (PMK) by default, identified as SSE + PMK. I had a requirement to encrypt VMs (Windows or Linux) with either SSE or Azure Disk Encryption (ADE) using a Customer-Managed Key (CMK). CMK provides an additional security layer because the customer manages the keys and can rotate them periodically. Both encryption types require an RSA-based Key Vault key. While SSE and ADE with CMK can be implemented using PowerShell or the Azure CLI, I only had the ARM template deployment option. An ARM template can create key vault secrets but cannot create key vault keys. For SSE encryption, a ‘disk encryption set‘ also needs to be created. In an automated deployment, the key vault key and disk encryption set must already exist so that the virtual machine deployment can consume the key vault key to encrypt the VM and its OS/data disks.

The following screenshot shows the default encryption on a VM managed disk – SSE with PMK:

VM Disk

Choosing an encryption type is based on customer requirements. Azure offers SSE + PMK as a default feature, which provides encryption at rest. Either SSE + CMK or ADE + CMK can be applied on top of the default encryption. For more information on SSE and ADE, please read this great blog post. It explains the fundamental differences between these encryption types, when to choose one over the other, and the caveats to watch out for after the encryption is applied.

Microsoft explains the difference between SSE and ADE by stating that Azure Disk Encryption leverages the BitLocker feature of Windows to encrypt managed disks with Customer-Managed Keys within the guest VM. Server-Side Encryption with Customer-Managed Keys improves on ADE by enabling you to use any OS type and images for your VMs by encrypting data in the Storage service.

The following screenshot shows Managed disk SSE + CMK encryption implemented via ARM template at the time of VM creation using Disk Encryption Set:

SSE + CMK encryption

The following screenshot shows Managed disk ADE + CMK encryption implemented via ARM Custom Extension:

ADE + CMK

The following screenshot shows ARM Custom Extension for ADE + CMK encryption:

ARM Custom Extension

The following screenshot shows how a Key Vault secret is used by writing the ‘Encrypted BEK’ for ADE + CMK encryption:

Key Vault secret

While I learned more about disk encryption, I was still contemplating disk encryption options for the ARM template when the Microsoft ARM template team announced a cool new feature: deployment scripts! It was announced at the Microsoft Build event, which you can view here for more information. Deployment scripts are a new resource in ARM that can run PowerShell and Azure CLI scripts within the ARM template! The feature arrived just in time to resolve my ARM deployment impediments. The implementation creates a storage account to copy the script from the ARM template into a file share and runs an Azure container instance to execute the script. A user-assigned managed identity is created, given permission on the resource group, and added to the key vault access policies; the script executes as this identity. At this time, system-assigned identities are not supported. The deployment scripts resource is available in both Azure public and Azure Government clouds, wherever Azure Container Instances are available. Both the storage account and the container instance are deleted after the deployment scripts resource completes successfully.

The following screenshot is the deployment scripts ARM template resource:

Deployment Scripts ARM template

The following screenshot shows the container instance and storage account resources used by deployment scripts; they are deleted after the deployment script executes successfully. The user-assigned managed identity is created in the ARM template to execute the deployment script. The deployment script creates the key vault, the keys in the key vault, and the disk encryption set.

Container Instance
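To make the moving parts more concrete, here is a rough PowerShell sketch of the kind of work the deployment script performs in this scenario – creating the RSA key and the disk encryption set that the VM template later consumes. The resource names are placeholders, not the names used in the downloadable code:

# Create the RSA key that SSE/ADE with CMK requires (an ARM template cannot do this natively)
$key = Add-AzKeyVaultKey -VaultName "my-keyvault" -Name "disk-encryption-key" -Destination "Software"

# Create the disk encryption set that the VM deployment will reference
$kv        = Get-AzKeyVault -VaultName "my-keyvault"
$desConfig = New-AzDiskEncryptionSetConfig -Location "eastus2" -SourceVaultId $kv.ResourceId -KeyUrl $key.Key.Kid -IdentityType SystemAssigned
$des       = New-AzDiskEncryptionSet -ResourceGroupName "deploy-script-rg" -Name "vm-disk-encryption-set" -InputObject $desConfig

# Allow the disk encryption set's system-assigned identity to use the key
Set-AzKeyVaultAccessPolicy -VaultName "my-keyvault" -ObjectId $des.Identity.PrincipalId -PermissionsToKeys wrapkey, unwrapkey, get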

Additional features in deployment scripts resource:

  • The scripts can be either embedded inline or referenced from a location accessible from deployment.
  • The output from the deployment scripts resource can be consumed/referenced by other resources in the deployment.
  • An existing storage account can be referenced, and that storage account will be used instead of creating a temporary storage account.
  • Any task that can be done via PowerShell or the Azure CLI can be done in a deployment script. For example, beyond disk encryption, a storage account can be encrypted with a CMK via PowerShell in a deployment script (see the sketch after this list).
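As an illustration of that last bullet, here is a minimal PowerShell sketch of encrypting an existing storage account with a CMK from inside a deployment script. The vault, key, and account names are hypothetical:

# Give the storage account a system-assigned identity
$sa = Set-AzStorageAccount -ResourceGroupName "deploy-script-rg" -Name "mystorageacct" -AssignIdentity

# Let that identity use the key, then point the storage account at the key vault key
Set-AzKeyVaultAccessPolicy -VaultName "my-keyvault" -ObjectId $sa.Identity.PrincipalId -PermissionsToKeys wrapkey, unwrapkey, get
$kv  = Get-AzKeyVault -VaultName "my-keyvault"
$key = Get-AzKeyVaultKey -VaultName "my-keyvault" -Name "storage-cmk"
Set-AzStorageAccount -ResourceGroupName "deploy-script-rg" -Name "mystorageacct" -KeyvaultEncryption -KeyName $key.Name -KeyVersion $key.Version -KeyVaultUri $kv.VaultUri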

Security considerations:

  • Deployment scripts create a storage account with the Standard_LRS SKU. If a policy requiring geo-redundant storage is in place, it might flag the account and report it. Because the storage account is deleted once the deployment scripts finish, the resource will not remain non-compliant.
  • The storage account access should be public when using an existing storage account for the deployment script. This allows the script to be copied to the file share and gives the container instance permission to execute the script.
  • The Key Vault firewall needs to be turned off for the deployment scripts to be able to perform operations on key vault keys and certificates. However, the firewall can be re-enabled from PowerShell after all the work is done.

The following screenshot shows the ARM template Key Vault resource with no firewall restriction:

ARM Template

In the following screenshot, I re-enable the key vault firewall in the PowerShell script after the tasks are done:

Enable Firewall access
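If you are scripting this yourself, a minimal sketch of re-enabling the firewall after the deployment script finishes looks like the following (the vault name and IP range are placeholders):

# Deny public access again, keeping the trusted Azure services bypass
Update-AzKeyVaultNetworkRuleSet -VaultName "my-keyvault" -DefaultAction Deny -Bypass AzureServices

# Optionally allow a known address range back in
Add-AzKeyVaultNetworkRule -VaultName "my-keyvault" -IpAddressRange "203.0.113.0/24"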

ARM template for SSE + CMK disk encryption:

SSE + CMK is applied when the disk-encryption-type parameter is set to ‘SSE’. If it is set to ‘ADE’, then no SSE is applied.

SSE + CMK is applied

ARM Template for ADE + CMK Disk Encryption using VM Extension

ARM Template for ADE

In the code example, the newly created Azure Key Vault key is saved as a secret so that it can be accessed from an ARM template. ARM templates cannot access key vault keys directly, but they can access secrets using a template reference parameter, like below.

Template Reference parameter

Instead of using secrets, another approach is to write the key vault key (KeK ID) as deployment scripts output and use that output in the VM ARM template.

The following screenshot shows how to write output from deployment scripts resource in keyvault.json:

Write up output

The following screenshot shows how to consume the output written in deployment scripts resource in vm.json for KeK URL:

Consume the output

For full code access to try using ‘deployment scripts’ in an ARM template to create key vault keys and a disk encryption set, and to encrypt VM disks with SSE or ADE, please follow the link to get the code.

Once downloaded, follow the steps below to test in your environment (Azure Cloud or Azure Government). Update the parameters before deploying:

Connect-AzAccount -Tenant "000000-000-0000-0000" -SubscriptionId "000000-000-0000-0000" -EnvironmentName AzureCloud

# Create Resource Group
$rg = New-AzResourceGroup -Name "deploy-script-rg" -Location "eastus2"

# Create Key Vault with ‘deployment scripts’
New-AzResourceGroupDeployment -ResourceGroupName $rg.ResourceGroupName -TemplateFile "C:\deploymentScriptsARM\keyvault.json" -TemplateParameterFile "C:\deploymentScriptsARM\keyvault.parameters.json"

# Create Virtual Machine with disk encryption
New-AzResourceGroupDeployment -ResourceGroupName $rg.ResourceGroupName -TemplateFile "C:\deploymentScriptsARM\vm.json" -TemplateParameterFile "C:\deploymentScriptsARM\vm.parameters.json"

Additional ARM Templates references:

Happy ARM Templating for IaC!!!

Last week, Microsoft introduced its new Dataflex technology for data services inside of Microsoft Teams, in addition to the rebranding of its former Common Data Service (CDS) to be called Dataflex Pro.

Below I have outlined five real-world scenarios where Microsoft Dataflex is a game-changer, particularly in larger enterprise organizations. These five scenarios are a reflection of actual situations I see playing out with AIS clients today.

Requirement-driven Application Development

These are common scenarios in which the organization has a specific set of requirements or use cases for which it is seeking to build a new, modern application. For some time now, we’ve lacked a true data service for building solutions of moderate data complexity that don’t require premium Power Platform licensing. Power Apps built on top of Dataflex now fit nicely between SharePoint (lower complexity) and Power Platform or custom Azure solutions (higher complexity). Use Dataflex when you need a relational data source that isn’t complex enough to require Dataflex Pro or SQL, and use Teams and your existing M365 licensing instead. Now we’ve got a great new arrow in our app development quiver.

Application Modernization

Here we’re talking about cases when you have a legacy application (or portfolio of applications) that you’re looking to modernize through a Rebuild approach (reference Gartner’s 5R Model). Many legacy applications are built in very complex ways, creating immense technical debt. In past times, we’ve lacked the tools to build solutions more simply and rapidly using low-code tools. For rebuilds that don’t require the complexity of Dataflex Pro or Azure data services, Dataflex inside of Teams introduces an extraordinary new path to rapid rebuild (maybe time for Gartner to add a sixth “R”).

Business group-based app development

We often encounter workloads that are specific to individual business groups, and thus not deployed enterprise-wide. Many business groups—be they functional, geographic, product-aligned, or even smaller teams and departments—already use a corresponding Microsoft Teams team. Dataflex’s 1:1 relationship with each team provides an excellent place to digitize business processes localized to individual business groups. For example, a training team that needs to manage their upcoming events and student attendees, or a department head who needs to take stock of employees’ outside skills as she re-jiggers post-pandemic “return to work” assignments.

InfoPath Migration

As Microsoft InfoPath faces end-of-life, we’re seeing more and more organizations needing to migrate thousands (or more) forms to some combination of new technologies (usually Power Platform). Like the previous example, though, many of these InfoPath forms are localized to activities inside specific business groups. Dataflex inside of Teams provides a pathway for rapid modernization of these forms, allowing more significant efforts to be directed to the enterprise-wide InfoPath “workloads” whose impact, complexity, and scope will necessarily make them heavier lifts.

Enterprise-wide Power Platform adoption (low-code cloud transformation)

Gartner predicts that 65% of all app development will take place atop low-code platforms such as Power Platform by 2024 – what I often refer to as “low-code cloud transformation”. The idea is that there is tremendous opportunity to modernize in the cloud when we’ve got both citizen and full-time developers working together to build solutions along a very wide complexity spectrum. Dataflex is a game-changer here because it allows productivity- to important-grade workloads (check out the model for this) to be built and delivered inside of Teams (where many users already are) using a truly relational data source. It also allows apps built in one part of the organization to be push-button installed to Teams serving a different part of the organization via an app store-like service. That’s going to be powerful for propagating innovation across large organizations.

We will see tremendous innovation around Dataflex and Dataflex Pro in the months ahead. Organizations’ potential here will be enabled by their ability to effectively manage and govern the technology in the hands of users. Another topic for another day, but I would be remiss not to say so. For this, I recommend starting with the Power Platform Adoption Framework, and our recent video outlining the Center of Excellence (CoE) as a Service concept.

AIS is the 2020 Microsoft worldwide Partner of the Year for Power Apps and Power Automate.

The Dockerfile development workflow is fantastic if you are familiar with the documentation and have spent a lot of time troubleshooting. However, if you are writing your first Dockerfile, it may be challenging to identify and debug certain cases.

During my first foray into the world of containers, I was containerizing a React application. As I got comfortable copying the correct files into the image and making sure I was running things correctly, I ran the “docker build .” command multiple times. Suddenly, I noticed that my computer was giving me warnings that it was running out of storage! In this article, I will be discussing the concept of dangling images created while building from your Dockerfile, and we can finally find out what is taking up so much space on your machine!

What is a Dockerfile?

To quickly recap, a Dockerfile allows you to customize any image to fit your needs. You can add packages, run servers – the possibilities are endless! You start with a base image, e.g., Microsoft Container Registry’s official dotnetcore3.1 image. Then you go through various steps inside a text document that is parsed by the Docker Engine. Eventually, you have a container image that runs your application. The Docker daemon reads the Dockerfile and performs whatever tasks you have described in it. Each step in the Dockerfile is run in an intermediate container. Intermediate containers allow Docker to be very efficient with caching, as the layers produced by the intermediate containers make up your final image.

Here is the Dockerfile I will be running, and the application I am using is the simple example found on the reactjs site.

Dockerfile example

Inside my console, I can run “docker build .” and everything will run completely fine, as shown here:

removing intermediate containers

You will notice that the image I have successfully built is identified only by an auto-generated ID (the GUID-like value in the screenshot). I did not provide my docker build step with a tag, which is supplied with the -t flag after the docker build command. I ran “docker build .” one more time, and now I have two images of 1.15GB each.

GUID
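A small aside: supplying a tag at build time avoids these anonymous, ID-only images in the first place. A minimal example (the image name is arbitrary):

# Build with a repository name and tag instead of relying on the generated ID
docker build -t my-react-app:1.0 .

# Confirm the repository and tag are set
docker images my-react-app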

However, this isn’t the full story – the intermediate images are also saved along with your final images. You can see them with the “docker images -a” command.

Docker images -a

As you can see here, I have occupied roughly 8 GB of storage running these builds within approximately the past fifteen minutes of development. During a long project – a microservices architecture running on containers, for example – it is possible to accumulate a significant amount of wasted storage. The dangling images themselves can be seen with the command “docker images -f dangling=true”.

Docker images prune

In this case, since I missed the -t tag initially, my images are essentially dangling images. They are not associated with a repository.

Finally, to clean up my dangling images, all I need to run is “docker image prune”.

Docker images
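For reference, here are the commands from this walkthrough collected in one place, plus the optional -a variant that also removes unused (not just dangling) images; prune asks for confirmation unless you pass -f:

docker images -a                    # all images, including intermediate layers
docker images -f "dangling=true"    # only the untagged, dangling images
docker image prune                  # remove dangling images
docker image prune -a               # optionally remove every image not used by a container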

Developing with Dockerfiles can be difficult, and it can be daunting to step into the world of containers. Remember to prune often, and make sure you tag your images!

This blog is part of a series of hands-on labs on leveraging the Power Platform and Microsoft Azure. To break this series up into more manageable chunks, in this installment, we’ll focus on setting up the On-Premises Gateway that will connect to our local SQL Server and the Azure Logic App that will use the new gateway to connect to our local SQL Server 2017 and return customer data using an HTTP request and response.

We’ll wrap it up by using Postman to test our Logic App. If you don’t have Postman installed, you can download it here.

Stay tuned for the next lab in this series, Azure API Management, and Custom Connectors.

Scenario

The company has an on-premises SQL Server database that contains customer data that needs to be available to various apps built on the Common Data Service.

High-Level Steps

  1. Install and configure an On-Premises Data Gateway to connect to the local SQL Server
  2. Create an Azure Logic App to connect to the SQL Server through the gateway

On-Premises Data Gateway

As you can imagine, there are already plenty of good write-ups on installing and configuring an On-Premises Data Gateway, including an official version from Microsoft. It is very straightforward, but I’m here to offer you a couple of pro tips.

Pro Tip #1: Pay attention to the account you use for installing the gateway. This account needs to belong to the same Azure tenant as the account you’ll use for building your Azure Logic Apps that will use this gateway.

Pro Tip #2: Pay attention to the region that is automatically selected when installing the gateway. For this example, the gateway needs to be installed in the same region as your Azure Logic App.

Azure Logic App

Now I’m going to step you through building an Azure Logic App that can be executed using a simple HTTP Request. The Logic App will accept a single parameter (CustomerId), or no parameter at all. If no id is presented, the Logic App will return all customer rows. Otherwise, it will filter on the passed in CustomerId.

  1. Create a new Azure Logic App.
    Logic App
  2. When the deployment of the Logic App is complete, start with the common trigger “When HTTP request is received”.
    Logic Apps Designer
  3. Add the following JSON schema for the Request Body and click Save. Once you save the Logic App you should see the HTTP POST URL field populated automatically.
    {
      "properties": {
        "CustomerId": {
          "type": "integer"
        }
      },
      "type": "object"
    }

  4. Add an Initialize variable step named CustomerIdVariable, type string, and set the value to the passed in CustomerId parameter.
  5. Add a new Condition step to the Logic App (Controls -> Condition) and change the operator to “Or”. Add a second row to the condition step and configure it as follows.
      1. CustomerIdVariable is equal to null
      2. CustomerIdVariable is equal to 0
      3. Where null is a function, and CustomerIdVariable is set from the Dynamic content. Save the Logic App.
  6. In the “true” block, add a SQL Server, Get rows (V2) action. We will need to set up a new connection here so let’s do that. Since we were sure to set up the On-Premises Gateway in the same Azure Subscription, you should see the gateway automatically available in the Connection Gateway dropdown. We are using Windows Authentication for authenticating to the SQL Server.
  7. Once the connection is created, set the server name, database name, and table name on the SQL Server Get rows (V2) action. Save the Logic App.
  8. Repeat this process for the False side of the conditional, using the same connection that was created in a previous step. When creating this action, click the Add new parameter button and add a Filter Query parameter.
  9. Set the Filter Query parameter to powerappscontactid eq CustomerIdVariable where the CustomerIdVariable is filled in using the Dynamic content. Save the Logic App.
  10. After the Get rows (V2) action in the False side of the conditional, add another conditional. Use the following expression for the conditional.
    length(body('Get_rows_(V2)_2')?['value']) is equal to 0.
    Where the first part is an expression.
  11. For this new conditional we’ll leave the False side empty, but we want to add a Response Request action to the True side. Configure the Response to use Status Code 200, and [] as the Body. Save the Logic App.
  12. Finally, outside of the conditionals, we’ll add another Response Request Action with a Status Code of 200. For this response, we’ll use Dynamic content and send back the ‘values’ from both SQL Server Get rows (V2) actions.
  13. The Logic App should look like this.
  14. Time to save and test your Logic App. We’re going to use Postman to do that since we haven’t set up our API yet in Azure API Management. Open up Postman and start a new request. Use HTTP Post and the URL from the “HTTP Request Received” trigger on your Logic App. I also tweaked that URL to include our CustomerId query string parameter.

Once the URL is put into Postman, the other parameters should be auto-populated in your Postman environment. Here is a snippet of the URL that shows how I included the CustomerId, followed by a screenshot of the Logic App run (after completion), the Postman results, and the contents of the table displayed in SQL Server Management Studio.

…triggers/manual/paths/invoke?CustomerId=2&api-version=2016-10-01…
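If you prefer the command line over Postman, a quick PowerShell check works just as well. The URL below is a placeholder – paste in the full HTTP POST URL (including the sp/sv/sig query values) copied from your trigger:

# Full trigger URL copied from the Logic App designer (placeholder shown here)
$url = "https://prod-00.eastus2.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-10-01&sp=...&sig=..."

# Return all customers, then filter to a single CustomerId
Invoke-RestMethod -Method Post -Uri $url
Invoke-RestMethod -Method Post -Uri "$url&CustomerId=2"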

Conclusion

I hope this helped demonstrate how quickly you can start querying data in your on-premises data stores from a cloud resource like an Azure Logic App. Please keep in mind that you’ll most likely want to secure this Logic App since right now, anyone can send a request to it. We’ll cover that in more detail in another write-up.

We are in awe of our team as they have reached another impressive milestone in support of our nation’s mission to modernize their IT footprint. Within the last week, AIS was able to successfully establish the first Microsoft Azure Impact Level 6 (IL6) instances and achieve a conditional Authority to Operate (ATO) at IL6. 

At the end of 2018, AIS was the first to establish a cloud connection at Impact Level 5, meaning our customers could connect and store any unclassified data on their Azure subscription.

By achieving an ATO for Microsoft Azure at IL6, AIS has enabled classified missions to use the best that Azure’s public cloud has to offer – unparalleled connectivity, high availability, and resiliency. This new capability will bring commercial innovations that meet the security and integrity requirements of classified workloads to DoD’s secured networks. IaaS, SaaS, and PaaS Azure services (e.g., artificial intelligence (AI), machine learning (ML), identity, analytics, security, and high-performance computing) will now be available for many more DoD systems and applications.

Dedicated to Enabling a More Secure Cloud for Government

Microsoft’s dedication to the public sector has been reflected in the continued investments made in Azure Government to enable the full spectrum of State, Local, and Federal agencies as well as all Military branches.

In addition to the recently achieved Provisional Authorization (PA) at the Department of Defense (DoD) IL6, Microsoft announced a third region to enable even higher availability for national security missions, released hundreds of new features and launched over 40 new services.

We Stand Alongside You to Back Your Mission

AIS is a proud Microsoft partner and stands alongside the products, technology investments, and values of the organization. We are honored to help pave the way for our government to increase performance and connectivity, reduce costs, realize actionable data insights, and enable faster mission innovations on the first cloud natively connected to classified networks – Azure Government.

STRUGGLING WITH ATO? READY TO MIGRATE?
Work with cloud consultants, engineers, and security experts to build cloud and data solutions that deliver real mission impact.

Leveraging the Power Platform and Microsoft Azure to connect housing agencies and intermediaries with the Department of Housing and Urban Development (HUD)

Intro

The US Department of Housing and Urban Development runs a program known as the Housing Counseling Program that aids housing agencies around the nation through things like grants and training. These agencies provide tremendous benefits to Americans in many forms, including credit counseling, foreclosure prevention, predatory lending awareness, homelessness assistance, and more. One of the requirements for approval into the program is that the agency uses a software system that is also approved by HUD and interfaces with HUD’s Agency Reporting Module SOAP web service.

Original System

The original system is built on Dynamics 365 Online using custom entities, web resources, and plug-ins. Complex batching functionality was implemented to deal with the two-minute timeout period that plug-ins have, web resource files (JavaScript, HTML, and CSS) were stuck in a solution with no real structure, and application lifecycle management (ALM) was extremely painful. The original system design also doesn’t account for the intermediary-to-agency relationship, specifically the roll-up reporting and auditing that intermediaries desperately need for providing their compiled agency data to HUD, as well as their own internal reporting. Finally, because the solution was built using Dynamics 365 Online Customer Engagement, licensing costs were more than double what we knew they could be with Microsoft’s new Power Apps licenses and Azure pay-as-you-go subscriptions.

The Original Diagram

Figure 1 – current CMS system built on Dynamics 365 Online

Definition

Intermediaries – organizations that operate a network of housing counseling agencies, providing critical support services to said agencies including training, pass-through funding, and technical assistance.

Modern System

Whereas the data schema for the solution remains mostly unchanged, the architecture of the system has changed profoundly. Let’s check out some of the bigger changes.

Power Apps Component Framework (PCF)

The Power Apps Component Framework enables developers to create code components that can run on model-driven and canvas apps and can be used on forms, dashboards, and views. Unlike traditional web resources, PCF controls are rendered in the same context and at the same time as any other component on a form. A major draw for developers is that PCF controls are created using TypeScript and tools like Visual Studio, and Visual Studio Code. In the modern solution, web resources are replaced with PCF controls that make calls directly to the Power Apps Web API.

Azure Resources (API Management, Function Apps, Logic Apps, and a Custom Connector)

A Custom Connector and Azure API Management are used to manage and secure various APIs that are used in the system. The connector can be used in any Azure Logic App to make connecting to an API much easier than having to configure HTTP actions. All of the business logic that once lived in plug-ins has been replaced with a combination of Azure Logic Apps and Azure Function Apps. This has provided incredible gains where development and testing are concerned, but it also provides a more secure and scalable model to support these agencies as they grow. Lastly, it removes the burden experienced with the two-minute time-out limitation that plug-ins have.

ALM with Azure DevOps Services and Power Apps Build Tools

Azure DevOps Services, coupled with the Power Apps Build Tools, are being used to ease the pain we experienced with ALM prior to these tools being available. Now we can easily move our Power App solutions across environments (e.g. dev, test, production) and ensure our latest solution code and configurations are always captured in source control.

ALM with Azure

Figure 2 – modern solution using Power Apps and Azure resources for extreme scalability and maintainability

Next Steps

I hope by demonstrating this use case you’re inspired to contact your Microsoft partner, or better yet contact us and let us work with you to identify your organization’s workloads and technology modernization opportunities.

Ready to dig a little deeper into the technologies used in this blog post? Below we have some hands-on labs for you. Stay tuned for more updates!

  1. Power Apps Component Framework Controls
    As a developer, PCF controls have been one of the most exciting things to grace the Power Platform landscape. In this video, I’ll show you how to go from zero to a simple PCF control that calls the Power Platform Web API. I also provide a Pro Tip or two on how I organize my PCF controls in my development environment.

  2. Using an Azure Logic App to Query Data in an On-Premises SQL Server Database
    Companies are embracing modernization efforts across the organization, quickly replacing legacy apps with Power Apps and Azure resources. This can’t all happen at once though, and often times companies either can’t or won’t move certain data to the cloud. In this hands-on lab, I’ll walk you through using an Azure Logic App and an On-Premises Data Gateway to connect to a local SQL Server 2017 database to query customer data using an HTTP request and response.
  3. Azure API Management and Custom Connectors
    APIs give you the ability to better secure and manage your application interfaces. Couple an API with a Custom Connector and you not only better secure and manage it, but also make it super easy for other app developers across the organization to connect to your back-end resources through the API, without having to worry about all of the API configurations.
  4. Power Apps DevOps Using Best Practices
    There’s more to DevOps than just backing up your Power App solutions in source control and automating your build and release pipeline. Good solution management and best practices around publishers, entity metadata, components, and subcomponents are also key to manageability and scalability with your ALM.

Agility and Discipline are Both Required as Organizations Adopt the Cloud

At the beginning of cloud adoption, experimentation is common and exploration of applicable technologies is welcomed. Development teams are excited to try the newest service offerings, and proof-of-concept efforts are completed with great interest. However, the thrill of discovery often turns to frustration as configuration complexity increases and cost control challenges mount. The initial exploratory proofs of concept often do not consider where costs are coming from or which resources are assigned to different teams.

While cloud providers make it easy to provision new resources, considerable forethought is necessary to come up with disciplined procedures to measure costs and control access. Security and management teams raise valid concerns regarding proper security controls. Infrastructure as Code (IaC) concepts allow for exploration and development agility while including proper management review, thoughtful allocation of resources, and cost control initiatives. In this post, we will introduce IaC concepts and tools that can be used to bring agile interaction to your cloud adoption experience.

What is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) refers to the tools and processes used to track and manage infrastructure resources as code. This code is commonly stored in a version control system, and the tools are most effective when applied to virtual environments. One such example is the Microsoft Azure public cloud, but IaC can be equally effective in private cloud environments. The combination of version control with executable infrastructure definitions provides a complete change history and ensures future repeatability.

Historical Configuration Management

IT administration teams have historically consisted of a handful of individuals tasked with managing a large footprint of physical and virtual machines. With the popularization of virtualization technologies came the need to effectively maintain configuration and security baselines. Rudimentary automation tools such as parallel SSH gave way to more powerful configuration management tools such as Puppet (2005) and Chef (2009). These tools allowed a single IT administrator to establish and apply configuration templates across thousands of machines, and address divergence from configuration when needed.

With the maturation of configuration management tooling, different paradigms began to emerge. Puppet and Chef both operated with a pull model, utilizing a machine agent responsible for retrieving configuration from a central control server. Newer entrants to the Infrastructure as Code space, such as Ansible (2012) and Terraform (2014), operate with a push model over established protocols (such as SSH) for remote access. IaC tools also took two very different approaches to defining configuration changes: the imperative model relies on the developer to define specific change operations, while the declarative model favors defining the desired end state, leaving the tool itself responsible for identifying and applying the changes. Each model comes with unique benefits and challenges, but regardless of the model chosen, the application of IaC tools has become a huge force multiplier for the IT team.
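To make the distinction concrete, here is a simplified PowerShell sketch of the two models against Azure; the resource names and template file are placeholders:

# Imperative: the script spells out each change to make, in order
New-AzResourceGroup -Name "app-rg" -Location "eastus2"
New-AzStorageAccount -ResourceGroupName "app-rg" -Name "appstorage01" -Location "eastus2" -SkuName Standard_LRS

# Declarative: hand the platform a template describing the desired end state
# and let it work out which changes are needed to get there
New-AzResourceGroupDeployment -ResourceGroupName "app-rg" -TemplateFile ".\infrastructure.json" -Mode Incremental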

DevOps

In the last few years, the term DevOps has emerged out of the convergence of development and operational teams adopting shared tooling. This new engineering specialization brings with it a mindset founded in the security and configuration management needs of the IT administrator while enhancing it with version control and module abstraction common in the software development space. Probably the biggest change brought forth by DevOps was the adoption of container technologies, largely popularized by Docker. Containers greatly hastened the adoption of immutable infrastructure. The underlying machines became less important themselves, instead, acting as a vehicle to deploy container workloads.

Immutability in Infrastructure

Infrastructure immutability refers to the static nature of the machines themselves and is most easily achieved in virtualized environments such as Microsoft Azure. The value of immutability is that infrastructure configuration can be reasoned about based on a single source of truth – the code that resides in the version control system. Configuration drift in long-lived infrastructure systems is a monumental challenge that Puppet and Chef have worked to solve, but this problem can be eliminated entirely by treating our infrastructure as replaceable. When changes are required, we simply replace old infrastructure with new ones.

Adopting immutability in infrastructure configuration is not an overnight process. There are considerable challenges, especially when dealing with stateful systems such as databases or file storage systems. While these challenges are real, the advantages are substantial:

  • Deterministic infrastructure
  • Version controlled source of truth
  • Infrastructure automation via CI/CD systems
  • Audit change deltas prior to execution
  • Peer Review based on code change and change deltas
  • Gated approvals from security authorities

Adopting IaC Within Your Organization

Adopting Infrastructure as Code in your organization must be approached carefully. You may be met with resistance from security and IT staff. Successful adoption requires your stakeholders to understand the benefits of the IaC approach.

Greenfield cloud projects are one of the best ways to introduce these concepts without having to tackle the challenging task of codifying existing systems. This approach has the advantage of reduced scope, allowing the team to learn the tools and explore without danger of impacting operational systems. If a greenfield project is not possible, the same can be done with a short proof-of-concept effort. In the early stages of a project, attention to the following guidelines will help set you up for future success:

  • Security first
  • Avoid premature optimization
  • Favor managed (PaaS/SaaS) vs unmanaged (IaaS)
  • Tag resources aggressively

Security First

Security first is one of the most challenging things to get right in any project, but cloud efforts are especially challenging because there is a much larger exposed surface area. Consuming data from object storage APIs, for example, could expose sensitive data to the world if access is not properly locked down. There are known security breaches every year where misconfiguration caused huge leaks of intellectual property and personally identifiable information. Infrastructure as Code is uniquely suited to minimize the pain associated with good security practices. Least privilege is a key tenet of security, and Infrastructure as Code can be used to define access policies that explicitly grant access only to the resources that need it. This is made simple because resource identifiers can be dynamically injected into policies, as the Infrastructure as Code is responsible for managing both.

Avoid Premature Optimization

Premature optimization of systems or code is a trap that can be easily fallen into. Infrastructure as Code offers a wealth of opportunity to create reusable modules for resources. These modules bring advantages for sharing proven code between environments, but if used too early can greatly slow down team velocity. It is generally best to iterate your system until you reach a secure, stable MVP where the architecture is well established before beginning the abstraction into discrete modules.

Favor Managed PaaS/SaaS

Deciding between Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS) is straightforward early in a project. Managed offerings, such as Azure SQL Database, are generally the right choice over IaaS with a self-managed database. PaaS and SaaS eliminate the busywork of security patching and backup management, allowing the team to focus on using the software to solve their unique business challenges. Time is our most valuable resource, and while IaaS has its place, it incurs the highest time expenditure to manage properly.

Tag Resources Aggressively

Resource tagging is fundamental to cost visibility in cloud environments. It is difficult to devise a complete resource tagging strategy, but at a bare minimum, all resources should include a project tag. Even if your project is the only one present in your cloud account, allowing tags to be set on all resources by your Infrastructure as Code will make it easy to adopt uniform tagging in the future. Tags can also be useful in differentiating development, test, staging, and operational environments if they reside in a shared account.
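As a minimal illustration, tags can be applied uniformly by passing the same tag set to every resource your IaC creates (values here are placeholders):

# One tag set, reused across every resource in the deployment
$tags = @{ Project = "cloud-adoption-poc"; Environment = "dev"; Owner = "platform-team" }

New-AzResourceGroup -Name "poc-rg" -Location "eastus2" -Tag $tags
New-AzStorageAccount -ResourceGroupName "poc-rg" -Name "pocstorage01" -Location "eastus2" -SkuName Standard_LRS -Tag $tags

# Tags then make cost and inventory reporting straightforward
Get-AzResource -Tag @{ Project = "cloud-adoption-poc" } | Select-Object Name, ResourceType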

Summary

This is just the first post in a blog series on DevOps and Infrastructure as Code. Understanding the concepts outlined here is the foundation of successful cloud adoption. In the future, we will look at applying these principles with specific examples that can be used as a template in your own DevOps journey. In the meantime, some resources can be found below that expand on topics discussed within this post:

Executive Summary

This blog post is for Power Apps developers with basic experience with Microsoft Excel and .NET development. It begins by defining a real-world example for data migration. Next, it describes the steps necessary to use Excel to generate repetitious code within a provided Visual Studio Console Application. Finally, it demonstrates the solution being used and the result data in a Power App.

The Scenario

In part one of this two-part series, we covered the basics techniques of how Excel could be used to generate code and improve efficiency. In this blog post, we continue by showing how these techniques could be used to design a data loader for Microsoft Power Apps and Dynamics.

For this scenario, let’s imagine that a client is looking to migrate to Microsoft Power Apps. The solution has been designed in Power Apps and you are ready to migrate the client’s data to the new system. The client provides you with an Excel workbook with all of their data. You open the workbook and see that the client has included four spreadsheets: Clients, Accounts, Products, and Invoices.

The Solution

Download and review the solution. The code within it is outside of the scope of this demonstration; however, let’s review a few key parts:

  • The Enums folder contains all of the enumerations used in the project. I like using enums for self-documenting code and they are especially useful when mapping column indexes. This is one place you will be placing Excel-generated code.
  • The Models folder contains all of the models that are populated from the Excel file and ultimately used to populate CRM entities. This is another place you will be placing Excel-generated code.
  • The Main method in the Program.cs is the starting point of the application.
  • The GetClients, GetAccounts, GetProducts, and GetInvoices methods will be used to get data from the Excel sheets.
    • Each method is the same; it creates an instance of an Excel Application, opens the workbook, opens the sheet, gets the rows, iterates over them populating the model, closes the workbook, and exits the Excel Application.
    • Each method has an area comment //ADD MAPPNG HERE. This is where you will be adding Excel-generated code later.
  • The MigrateClients, MigrateAccounts, MigrateProducts, and MigrateInvoices methods are used to create records within the Power App.
    • Each method is the same; it gets the records from the Excel sheet, builds an Entity record from it, and saves it to the Power App instance.
    • Each method has an area comment //ADD MAPPNG HERE. This is where you will be adding Excel-generated code later.
  • Earlybound Entities folder contains the classes that are generated from the Power App entities. The entities used in this example are all out-of-the-box and were generated using the Xrm Toolbox Early Bound Generator tool.
  • The app.config contains the properties used by the console application; you will need to update some of these settings prior to running the project:
    • CrmConnectionOrgName – your organization name (ex. org1a234567)
    • CrmConenctionRegion – your region (ex. North America)
    • CrmConnectionUsername – your username
    • CrmConnectionPassword – your password
    • DataMigrationConnectionString – the path to the mock data Excel file
    • DryRunMode – when “true”, nothing will be written to Power App
    • MigrateClients, MigrateAccounts, MigrateProducts, MigrateInvoices – use this to bypass migration
    • ClientIdNumStart, AccountIdNumStart, ProductIdNumStart, InvoiceIdNumStart – use these to start the migration at a particular record (helpful if an error occurred during a previous migration)

Creating the Models

  1. Create a new Excel workbook named Mapping.xlsx and rename the first sheet to Models
  2. Open the MOCK_DATA.xlsx and select the headers for the Clients sheet and copy (CTRL+C)
    Client Header
  3. Return to the Models sheet, select cell A1
  4. Select the drop-down under the Paste and select Transpose
  5. Select cell B1 and enter the formula: ="public string "&A1 & " { get; set; }"
  6. Drag the formula down to B8; your sheet should look like this:
    Dragging Formula
  7. Select cells B1:B8 and copy/paste into the client model in the Models folder
  8. Repeat these steps for the Account, Product, and Invoice models (NOTE: you can paste the values in step 5 below the previous model and drag the formula you already created)

Creating the Column Enums

  1. In the Mapping.xlsx create a new sheet named Enums
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Enums sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Select B1 and enter the number 1
  6. Select B2 and enter the number 2
  7. Select cells B1:B2 and drag down to the last Client Value
  8. Repeat steps 5-7 for other models
  9. Select cell C1 and enter the formula: =" " & A1 & " = " & B1 & ","
  10. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Creating Column Enums
  11. Select the corresponding cells for each model and paste into its column enum

Completing the Record Loaders

  1. In the Mapping.xlsx create a new sheet named Loaders
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Loaders sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Select cell B1 and enter the formula: ="record."&A1&" = (string)xlRange.Cells[rowNumber, (int)ColumnEnum."&A1&"].Text;"
  6. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Completing the Record Loaders
  7. Select the applicable cells for each model and paste over the comment //ADD MAPPNG HERE in each Get method (ex. GetClients)
  8. Replace ColumnEnum with the appropriate Enum (Note: you could fix this in the formula ahead of time as well if you prefer)

Completing the Migrators

  1. In the Mapping.xlsx create a new sheet named Migrators
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Migrators sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Click cell B1 and type “contact”
  6. Drag down to the end of the Contact values
  7. Repeat for “account”, “product”, and “invoice”; your sheet should look like this:
    Completing the Migrators
  8. Select C1 and enter the following formula: = B1 & " = record." & A1 & ";"
  9. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Add Value to Column A
  10. Select the applicable cells for each model and paste over the comment //ADD MAPPNG HERE in each of the Migrate methods. NOTE: there will be warnings in the code
  11. In the code, go through each of the errors and select the Power App field that maps to the record field. In some cases, you will need to use Guid.Parse, DateTime.Parse, etc. to map the fields.

Here is my mapping for the Contact:
Mapping Example

NOTE: new_migrationid is a custom Whole number field that I have added to the Contact entity to keep track of the corresponding record in the spreadsheet. I do this for the following reasons:

  • Keeping track of which record the code is on in case an error occurs.
  • Deleting records that were created by the migration

Running the Migration

Once the code is completed, make sure that the DryRunMode is true in the app.config and that you do not have the MOCK_DATA.xlsx file open. When you run the project, it will open the Excel workbook, get the records, and map them to an Entity. Nothing will be saved at this time. The output should look something like this:
Running the Migration

Review the output for errors and fix them as needed. Once there are no errors, switch the DryRunMode back to false and run the loader again. Once it is complete, navigate to your Power App instance, and now your newly migrated data will be available!
Active Contact List

Conclusion

This has been only one example of how you can leverage Excel to aid with generating repetitious code. Additionally, the data loader included in this project can be reused for your custom Power App; simply use the techniques we have covered to create Models, Column Enums, Loaders, and Migrators.

The example solution used in this walkthrough can be downloaded here.

The completed solution with the steps in this blog post complete can be downloaded here.

Microsoft Partner of the Year 2020

Applied Information Sciences (AIS) has been recognized as the Microsoft Partner of the Year (2020) for Power Apps and Power Automate! This is an exciting award for our company and one which we are honored to receive. AIS has a long history helping customers in large, global, and often regulated industries work through their cloud transformation. We know that this success is due in large part to the Power Platform community and the hard work of our business applications leaders like our Director of Cloud Applications Strategy Andrew Welch, and our Power Platform Solution Architects like Emily Cartner, Jeff Myers, Patrick O’Gorman, and Kevin Dowker as well as the countless other business applications specialists and leaders at AIS (including Yash Agarwal, Ritika Agarwal, Nick Gill, Andrea Pinillos, and Jamie Willis-Rose to name a few).

At AIS, we believe in investing in our people. We send our business applications experts to industry events and support their participation in community-driven efforts. From winning global hackathons to providing Power Apps training courses to the growing community, our team lives out the AIS values of Continued Learning, Sharing, & Technical Excellence and Citizenship & Community every day through their dedication and contributions.

Early in 2019, AIS invested heavily in publishing and sharing the first edition Power Platform Adoption Framework. Since its initial publication, the framework has been globally accepted and vastly improved upon through ownership by the Power Platform community via GitHub at www.PowerPlatform.af. Today, AIS’ business applications specialists work alongside experts from Microsoft, business applications MVPs, and other Microsoft partners around the globe contributing to the ever-evolving set of best practices in the Power Platform Adoption Framework. We are incredibly proud to support the members of the Power Platform Community worldwide and thank them for their continued collaboration in expanding the framework and its capabilities.

One of the most important tenets of the Power Platform Adoption Framework, and consequently AIS’ implementation strategies for Power Platform, is the notion of “platform-focus”. The real value of the Power Platform is realized when business users are able to bring the “whiteboard to life” on their own, without large IT investments. The benefits of citizen development are immense, but they can introduce risks if not managed appropriately. This is one of the first concerns our IT owners typically come to us with. As a result, we place early emphasis on governance and focus on good platform stewardship.

AIS has many offerings to help organizations quickly adopt Power Platform. Our Power Platform Center of Excellence (CoE) is a turnkey offering that complements our customer’s in-house IT staff. It combines our expertise and our lessons learned with deployable tooling, patterns, and models for rapidly building solutions. We used this model first with Azure, then with Modern Workplace, and now with Power Platform for each of the verticals and customers we serve.

Recently, AIS launched our industry-focused Power Platform offerings for Financial Services.  The Financial Services Power Platform offerings support initial adoption and governance as well as best practices associated with scaling and development. These offerings aren’t just ideas, they represent ‘real-world’ lessons learned that we have taken from implementations for the world’s leading finance and insurance companies.

In addition to the private sector, AIS has succeeded in bringing Power Platform to the government and global humanitarian organizations.  In response to the COVID-19 crisis, AIS built a Power Platform-based digital clearinghouse designed to streamline the procurement of medical supplies (ex. Personal Protective Equipment, digital medical devices).  The digital clearinghouse allows medical equipment manufacturers to offer medical supplies for nations across the globe to quickly procure.  Whether in India, Europe, or the United States, our AIS business applications group is the finest in the world and we will continue our investment in the technology and our people.

AIS is a proud Microsoft partner and stands alongside the products, technology investments, and values of the organization. The team is humbled by another successful year of working together and excited for the many prosperous years ahead. We thank Microsoft for this honor.

A Framework to Meet Your Needs

PowerApps comes with a wide variety of controls out of the box. However, these controls may not be on-brand or have the desired look and feel, functionality, or style for your app. The PowerApps Component Framework command-line interface tools let you create a custom PowerApps control specific to your individual needs. PCF works this magic by integrating JavaScript libraries into the PowerApps control.

As a primarily JavaScript frontend developer, my normal workflow is project dependent but can be boiled down to this: utilize the project framework or library, create a base component, extend the base functionality, and style it using CSS. This component can be used as a named module, or, if appropriate, the HTML and JavaScript can be rolled into another module. Thankfully, most of this is applicable to PCF. The tutorial I followed centers around the jQuery UI horizontal slider and vanilla CSS, though the PCF build pipeline allows for CSS preprocessors.

For someone not familiar with the Microsoft Azure ecosystem, working with the PCF CLI tools was quite a hurdle. As a side note, the PCF CLI at the time of writing is not cross-platform and is only available on Windows 10. During the course of development, I needed to create various entities, build a Power Apps model-driven app, and provision environments. Being a complete novice in the environment made these steps confusing and frustrating.

With generous assistance from a seasoned PowerApps developer, I was able to follow the tutorial to create a horizontal slider straight out of jQuery UI that was ready to be customized via CSS. To demonstrate how the new control incorporated CSS, I added color to the slider handle.

Building a Customized PowerApp

During the build process, PCF CLI bundles the JavaScript and CSS into the control. This control is then added to a model-driven app. However, to my knowledge, there is no shared CSS across multiple controls. It is incumbent upon the developer to include the required CSS in each individual PCF control project. Additionally, having never worked in PowerApps prior to this experiment, I’m not familiar with the structure of what a nicely customized app would look like. Following the tutorial, I had successfully incorporated jQuery UI’s horizontal slider and custom CSS. However, it looked woefully out of place when added to a stock PowerApps app.

The exercise I used to publish a custom component in PCF was relatively simple. PCF CLI allows a developer to make something that doesn’t exist in the list of out-of-the-box controls. For example, a map control built with the OpenLayers library, loading a tileset from a selection in a PowerApps form, and then allowing the user to draw geometries on the map control and automatically calculating the land area of the shapes. I would advise using the out-of-the-box controls whenever possible, but when a custom component is necessary, I would suggest the following based on what I learned:

  • Use a CSS pre-processor. They are industry standard and can help simplify all but the least complex styles.
  • Make sure you have a good grasp of the Microsoft ecosystem. It may be a simpler task to add HTML, CSS, and JavaScript to your toolbelt than to get a grasp on PowerApps.
  • Start out simple. Add complexity once you are sure your component builds properly.
  • Make sure to use the CLI testing environment to preview your control. You don’t have to publish the control, add it to a model-driven app, etc. to see it work.