This write-up is meant to be followed in sequence: if you jump to a section, it assumes you have completed the parts before it. This write-up assumes the following:

  • You have a Power Apps tenant with administrative privileges.
  • For the source control and CI/CD sections:
    • An Azure DevOps project configured with the Power Apps Build Tools extension.
    • Three Power Apps environments: development, staging/build, and production.

Executive Summary

This write-up is for anyone looking to learn how to automate the deployment of their Power Apps or Dynamics solution customizations. It starts out walking the reader through setting up a new environment since, for this exercise, you’ll need three of them. It also takes some time to demonstrate setting up a proper Power Apps solution, making sure to include a unique publisher prefix and only what components and subcomponents we need to satisfy our requirements. In other words, our solution stays clean and manageable. If you have already done these things, and have three environments ready, feel free to proceed to the section “Getting our Solution into Source Control.”

Create the New Environment

    1. Create a new environment.
      New Environment in PowerApps Image 1
    2. Create a new database in the process. Notice I checked the “Deploy sample apps and data” for this new environment:
      New Environment in PowerApps – Add Database Image
    3. Let’s change the Environment URL now to something that looks a little nicer than the organization’s internal name that was set up automatically by the system. Click on the new environment from the list, and you should be taken to a page where you can edit the URL:
      Change Environment URL
    4. Click on the link for the environment, and you will be taken to the Power Apps landing page for that environment. It should look something like this, except you may or may not have the sample apps installed:
      New Environment Home Page with Sample Apps Image
    5. Head over to the Power Apps maker portal, and let’s look at what we have there. You can get there by using the menu in the top-left corner next to the word Power Apps.
      Make Image

      Create a New Publisher

    6. The first thing we want to do is build our own solution, and as many of you know, you’ll want your own custom prefix on your solution items. As of this writing (11/7/19), Microsoft has not added a new user interface for adding publishers that would let us customize our prefix. That said, use the cog in the top right to open the Advanced settings window. You should see a window like this:
      Classic Advanced Settings Page Image
    7. Using the Settings dropdown, select the Customizations menu item.
      Classic Advanced Settings Page with Flyout
    8. Select Publishers on the Customizations page
      Classic Customization Image
    9. Click New on the Publisher page and fill out the Display Name and Prefix on the new Publisher dialog. The Name field should auto-populate based on what you put in the Display Name. Click Save and Close to take you back to the Publisher page.
      Classic Add Publisher Image

      Create a New Solution

    10. Back on our maker page, let’s select the Solutions menu item on the left.
    11. On the Solutions page, click the New solution button and provide a Display Name and Version for the new solution. Select the new Publisher we just set up in previous steps. Once again, the Name should auto-populate. Click Create.
      Create New Solution with New Publisher Image
    12. Open the solution by clicking on its Name in the solution list.
      Empty Solution Image

      Create a New Custom Entity

    13. Since you’ll undoubtedly see an empty solution, let’s start simple and add a custom entity. Click New so that you see the dropdown menu. Then select Entity. Fill out the Display Name and take note of the Publisher prefix.
      New Custom Entity

      Add an Out-of-Box Entity to the New Solution

    14. Once that Entity is created, click on it to open it, and you should see a number of out-of-box fields that were automatically created with your new Entity. Now go back to the solution. Let’s take it a step further and see what happens when we add an out-of-box entity that was deployed when we created our database. Click Add existing from the solution home page and select Entity. Scroll down to find Contact, select it, and click Next.
      Adding Contact Image
    15. This next page may be a little confusing. It asks what components you want to include and whether you wish to include the entity metadata. For now, let’s not select any components and just add the Contact entity itself. You can always go back and add those components using the Add subcomponents button at the top of the Solution inventory page when you have the entity selected. For now, though, I just want to create a new form for the Contact entity, so cancel out of this dialog.
      Add Subcomponents Image
    16. Open up your Contact entity in the solution by clicking on it. Then, select the Forms tab. From there, click the Add form menu and select Main Form from the dropdown.
      New Form of Contact Image
    17. You should have a window open up with the Contact form designer ready to go.
      Contact Form Designer
    18. Make some changes to this form and consider changing the Display Name of the form. Save it.
      Contact Form Designer with Changes
    19. Go back to the solution view and click Publish all customizations. If all was successful, you will see a success message.
      Publishing Customizations Image
      Publish Successful Image

      Getting our Solutions into Source Control

      This section will require three different service connections, one for each of the environments we’ll be using for our build and release pipelines. I’ll show you how to create these in this section, but if you’re already familiar, go ahead and set up your three service connections now. For this example, we’ll use the following connections:

      – ogs-powerapps-connection – this is our dev environment
      – cds-powerapps-connection-ogsllcbox – this is our staging/build environment
      – cds-powerapps-connection-ogsllc – this is our production environment
      Connections Image

    20. Navigate to the Azure DevOps project you should already have created with Power Apps Build Tools installed (see Assumptions). Under Pipelines, select Builds and click New, then New Build Pipeline. We will use the classic editor for this write-up.
      DevOps Step 1 Image
    21. Click Continue, and then click the link to start with an empty job.
    22. Give your Pipeline a meaningful name then select the Agent job 1 node. Save the pipeline.
      DevOps Step 2
    23. On the Agent job 1 node, make sure to select the Allow scripts to access OAuth token checkbox.
      Allow Script to Use OAuth Token Image
    24. Click the plus (Add task) on Agent job 1. Type Power Apps Tool into the search box and add the Power Apps Tool Installer to your pipeline.
      DevOps Step 3 Image
    25. Add another task to the agent job, this time searching on Power Apps Export, and select the Power Apps Export Solution task. Select this new task to make the necessary configuration changes. Use $(SolutionName) for the Solution Name field; we’ll add that variable later. Use $(Build.ArtifactStagingDirectory)\$(SolutionName).zip for the Solution Output File. We need to configure a new service connection, so click Manage next to the Power Apps Environment URL label.
      DevOps Step 4 Image
    26. Configure the new generic service connection
      DevOps Step 5 Image
    27. Add another task to the agent job and search on Power Apps Unpack to locate then select the Power Apps Unpack Solution. Use $(Build.ArtifactStagingDirectory)\$(SolutionName).zip for the Solution Input File. Use $(Build.SourcesDirectory)\$(SolutionName) for the Target Folder to Unpack Solution. Save the pipeline.
      DevOps Step 7 Image Saved Unpack Task
    28. Add another task to the agent job, this time use the Command line task. Give it a meaningful name and use the following script to commit the code to the appropriate repository. In this case, we are using ‘master’.
      echo commit all changes
      git config user.email "<>"
      git config user.name "Automatic Build"
      git checkout master
      git add --all
      git commit -m "solution init"
      echo push code to new repo
      git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push origin master

      DevOps Step 6
    29. Click on the Variables tab above the pipeline designer area and add a variable for SolutionName. Save the pipeline. At this point, you can use Save & queue, or just queue the pipeline and cross your fingers.
      DevOps Step 9 Image
      Hopefully, all is successful.
      DevOps Final Successful
    30. Go over to your repo now and you should see something like this:
      Repo after Source is Loaded

      Deploying Solution as a Managed Solution (to Test, QA, Production)

      Creating a Managed Solution

    31. Now that we have our customizations in source control, we want to create a managed solution for deployment to our production environment. Create a new build pipeline using the classic interface and an empty job, and give it a meaningful name. Once again, on the Agent job, make sure to check the box to allow scripts to access the OAuth token. Also add the Power Apps Tool Installer as we did in our first build pipeline. Save your pipeline.
      Create Managed Solution Image
    32. Now we want to add a Power Apps Pack Solution task. Use $(Build.SourcesDirectory)\$(SolutionName) for the source folder, and $(Build.ArtifactStagingDirectory)\$(SolutionName).zip for the solution output file.  Leave the Type of Solution as Unmanaged.  Save the pipeline.
      DevOps Create Managed Solution
    33. Add a Power Apps Import Solution task. For the Solution Input file use $(Build.ArtifactStagingDirectory)\$(SolutionName).zip.  This is where we want to use our staging/build environment so either select that service connection or create a new service connection and point it to said environment.  Save your pipeline.
      DevOps Create Managed Solution Image
    34. Add a Power Apps Export Solution task. Use $(SolutionName) for the Solution Name, and $(Build.ArtifactStagingDirectory)\$(SolutionName) for the Solution Output file.  Make sure to check the Export as a Managed Solution checkbox.  Save the pipeline.
      DevOps Create Managed Solution Image
    35. Add a Publish Pipeline Artifact task and use $(Build.ArtifactStagingDirectory)\$(SolutionName) for the File or directory path. Use drop for the Artifact name.  Save the pipeline.
      DevOps Create Managed Solution Image 5
    36. Click on the Variables tab up top and add your SolutionName variable.
      DevOps Create Managed Solution Image 7
    37. Run the pipeline. If it was successful you should see a drop artifact in the Artifacts dropdown menu.
      DevOps Create Managed Solution Image 6

      Deploying to Production – Creating the Release Pipeline

    38. Now it’s time to create a Release pipeline. Click on Releases in the left nav, then click New, and then New release pipeline from the dropdown menu.  Start with an empty job on the next screen.
      Release Pipeline Image 1
    39. Select the Add an Artifact block and select the build pipeline that we created to build the managed solution.
      Release Pipeline Image 2
    40. Click on the 1 job, 0 tasks link in the Stage 1 block. Add the Power Apps Tool Installer task and a Power Apps Import Solution task.
    41. Click on the Agent job and scroll to find the Artifact download and select the build pipeline that was built for creating the managed solution.
      Release Pipeline 3 Image
    42. Click on the Power Apps Import Solution task. For the Solution Input File, you will use something similar to $(System.DefaultWorkingDirectory)/ogs-powerapps-create-managed-solution/drop/
    43. Select the production environment service connection from the Power Apps Environment URL. Save the pipeline.
      Release Pipeline Image 4
    44. Use the Create Release button to test the release pipeline out.
      Release Pipeline Image 5
    45. Now you should be able to go to your production environment and see the solution, plus the changes we made to the contact form.
      Productions with New Solution Image
      Production Deployed Contact form with Updates Image


I hope this helped demonstrate how to use the Power Apps Build Tools to get your solutions into source control and automate the deployment of those solutions across your environments. I hope you were also able to pick up something new about your Power Apps solution composition.


To break this series apart into more manageable chunks, in this installment, we’ll focus on setting up the API in Azure API Management (APIM) and the custom connector. If you missed the first installment where we set up our On-Premises Data Gateway and Azure Logic App, please check it out here.


The company has an on-premises SQL Server database that contains customer data that needs to be available to various apps built on the Power Platform and/or Azure.


This installment assumes the following:

  1. That you have a sample of the data your Logic App returns. For this write-up, we’ll use the data returned from the Logic App we created in our previous lab.

High-Level Steps

  1. Create an Azure API in Azure API Management (APIM) that provides access to the Logic App, and can be called from various Power Apps and Power Automate Flows
  2. Create a Custom Connector

Azure API Management

In this section, we’ll walk through setting up an API to the Logic App we created in our prior installment.

Why use Azure API Management (APIM)?

Azure APIM provides a single place for managing your company’s APIs. It can be used to selectively expose data and services to employees and partners by applying authentication and even usage limits.

Create the API

  1. Create an Azure API Management service instance in Azure. You can follow the steps to do that here. In the Azure search bar, type API, and select API Management services. Provide the necessary information and click the Create button. Once complete, your API Management service should show as ‘on-line’.
    APIM 1 Image
    Azure API Image 2
    Azure API Image 3
  2. Click to open your new APIM service. Select APIs from the blade that opens, and either select the Logic App tile or search APIs for Logic App and select it.
    Azure API Image 4
  3. Assuming you created the Logic App in the first installment of this series, select that Logic App from the “Select Logic App to import” blade that opens to the right.
    Azure API Image 5
  4. When the import completes, you should notice an operation that was created automatically, called manual-invoke. Select that operation and click the pencil (edit) button in the Frontend section.
    Azure API 6 Image
  5. Provide a meaningful display name if you’d like. Change the operation to a GET with “/onpremcustomers” as the resource path.
  6. On the Query tab at the bottom, add the CustomerId (integer) query parameter.
    Azure API Image 7
  7. On the Responses tab, select the 200 OK response that should already be there. Add the details for an application/json representation using the sample data output from the Logic App created in the previous exercise. I also provided a definition named “customerresponse”.
    Azure API Image 8
    Azure API 11 Image
  8. On the Settings tab, add an API URL suffix, select a Product, and click Save. You will want to make note of the Product you select. I am using “Unlimited” in my example. Later, when providing an API key for a custom connector, you’ll need to remember which product your API is using.
    Azure API 12 Image
  9. Now we’re ready to test this out. On the Test tab, enter a value for the CustomerId parameter and click Send. If all goes well, you should get a 200 response back with, hopefully, some data.
    Azure API Image 13
    Azure API Image 14

Power Apps Custom Connector

  1. From the Azure API Management Overview blade, open the Developer portal (legacy). At the time of this writing, the new Developer portal was too buggy to use.
    Connector Image 1
  2. Click on the API we just created.
    Connector 2 Image
  3. Using the API definition button, download/copy and save the Open API 2 (JSON) file.
    Connector Image 3
  4. In Power Apps, go to Data > Custom Connectors > New Custom Connector > Import an OpenAPI file. Import the JSON file downloaded in the previous step, provide a name for your connector, and click Continue.
    Connector Image 5
    Connector Image 6
  5. Click through the General and Security tabs and make sure they match what you’re expecting (see screenshots).
    Connector Image 7Connector 9 Image
  6. On the Definition tab, if you see a parameter in the Body area, you can delete it. Then click Create connector at the top.
    Connector 11 Image
  7. On the Test tab, you’ll first need to create a new Connection. You will need the API key for the subscription you used in the APIM. In my case, I used “Unlimited”. (Click the ellipses to the right of the subscription and select Hide/Show Keys to see the key and copy it.)
    Connector 11.5 Image
    Connector 12 Image
    Connector 15 Image
    Connector 13 Image
  8. Navigate back to the Custom Connector – Test tab and test out your Custom Connector.
    Connector 14 Image
    Connector 16 Image


I hope this was helpful in demonstrating just how quickly you can better secure your Azure Logic Apps with Azure API Management, as well as how quickly you can spin up a custom connector that can then be distributed and used by other apps and Flows across your organization to connect to the Logic App through the new API.

I support projects where we have platforms like SharePoint, are looking towards adopting PowerApps, and have Azure Government subscriptions. While learning about containers, I wondered where they could fit into an environment where many applications are developed on top of SharePoint. This helped me better understand containers and discover the perfect use case. I thought of sharing my architecture idea through this blog post. If you would like to learn more about container architecture, read this blog post on Container Architecture Basics from our VP of Solution Engineering, Brent Wodicka.

Architecture Idea

If you are maintaining a SharePoint on-premises farm in a local data center or in the cloud, you cannot replace your SharePoint farm VMs with containers. What you can do, though, is take the heavy business logic code out of your custom applications on the SharePoint farm and deploy it as backend services using containers. You can then call these service endpoints (REST API endpoints) from your SharePoint Framework web parts and render the UI inside the web parts.

If your application is developed using PowerApps, I believe you can call custom service endpoints from PowerApps as well. So your front end can still be delivered using SharePoint or PowerApps, but your business logic can be deployed as services inside containers.

Where can containers fit in a multi-platform environment?

Answer: Below are two diagrams that illustrate the use case of containers in the multi-platform environment discussed above.
Business Logic Services Snapshot

Backend Databases and Cloud Services

Advantages of this Architecture

  • Since you have server-side code deployed inside the Azure cloud, you can easily integrate with other Azure PaaS services, including Azure SQL Database and Azure Cognitive Search.
  • Since most of the business logic will be deployed inside the containers as services, it is easy to move these services to any other cloud provider.
  • Suppose you have complex legacy applications developed inside SharePoint, and you are storing the data in SharePoint lists. In that case, you can move that SP list data to an Azure SQL database and call the Azure SQL APIs from your services deployed inside the containers. (See the second diagram above.)


Suppose you have heavy business logic written in front-end JavaScript files in SharePoint. You can rewrite that code as server-side code using C#, deploy it as services inside containers, and call those service endpoints from SharePoint web parts. Complex application data can even be moved from SharePoint lists to Azure SQL databases. Containers solve the problem of deploying your custom code as services, but they cannot replace your SharePoint infrastructure.


Unfortunately, Azure DevOps does not have a SaaS offering running in Azure Government. The only options are to spin up Azure DevOps server in your Azure Government tenant or connect the Azure DevOps commercial PaaS offering (specifically Azure Pipelines) to Azure Government. Your customer may object to the latter approach; the purpose of this post is to provide you with additional ammunition in making a case that you can securely use commercial Azure DevOps with Azure Government.

Throughout this blog post, the biggest question you should always keep in mind is where is my code running?


Take a simple example: a pipeline that calls a PowerShell script to create a key vault and randomly generate a secret to be used later (such as for a password during the creation of a VM):

add-type -AssemblyName System.Web
$rgName = "AisDwlBlog-rg"
$kvName = "AisDwlBlog-kv"
$pw = '"' + [System.Web.Security.Membership]::GeneratePassword(16, 4) + '"'
az group create --name $rgName --location eastus --output none
az keyvault create --name $kvName --resource-group $rgName --output none
az keyvault set-policy --name $kvName --secret-permissions list get --object-id 56155951-2544-4008-9c9a-e53c8e8a1ab2 --output none
az keyvault secret set --vault-name $kvName --name "VMPassword" --value $pw

The easiest way we can execute this script is to create a pipeline using a Microsoft-Hosted agent with an Azure PowerShell task that calls our deployment script:

name: 'AisDwl'

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'DwlAzure'
    scriptType: 'ps'
    scriptLocation: 'scriptPath'
    scriptPath: '.\Deploy.ps1'

When we execute this, note that output from that PowerShell script flows back into the Pipeline:

Deployment Output

To circle back to that question that should still be in your mind….where is my code running? In this case, it is running in a Virtual Machine or container provided by Microsoft. If you have a customer that requires all interactions with potentially sensitive data to be executed in a more secure environment (such as IL-4 in Azure Government), you are out of luck as that VM/Container for the hosted build agent is not certified at any DoD Impact Level. Thus, we have to look at other options, where our deployment scripts can run in a more secure environment.

I’ll throw a second wrench into things…did you see the bug in my Deploy.ps1 script above? I forgot to add --output none to the last command (setting the password in the key vault). When I run the pipeline, this is shown in the output:

Secret Visible

Not good! In an ideal world, everyone writing these scripts would be properly handling output, but we need to code defensively to handle unintended situations. Also, think about any error messages that might bubble back to the output of the pipeline.

Option 1

Azure Pipelines provide the capability to run pipelines in self-hosted agents, which could be a VM or container-managed by you/your organization. If you set up this VM in a USGov or DoD region of Azure, your code is running in either an IL-4 or IL-5 compliant environment. However, we can’t simply spin up a build agent and call it a day. As with the Microsoft-hosted build agent, the default behavior of the pipeline still returns output to Azure DevOps. If there is ever an issue like I just demonstrated, or an inadvertent Write-Output or Write-Error, or an unhandled exception containing sensitive information, that will be displayed in the output of the pipeline. We need to prevent that information from flowing back to Azure Pipelines. Fortunately, there is a relatively simple fix for this: instead of having a task to execute your PowerShell scripts directly, create a wrapper/bootstrapper PowerShell script.

The key feature of the bootstrapper is that it executes the actual deployment script as a child process and captures the output from that child process, preventing any output or errors from flowing back into your Pipeline. In this case, I am simply writing output to a file on the build agent, but a more real-world scenario would be to upload that file to a storage account.

	# Run the actual deployment script as a child process, redirecting all of
	# its output streams into a log file so nothing flows back to the pipeline
	try {
	    & "$PSScriptRoot\Deploy.ps1" *>&1 | Out-File "$PSScriptRoot\log.txt" -Append
	    Write-Output "Deployment complete"
	}
	catch {
	    # Surface only a generic message; the details stay in the log on the agent
	    Write-Error "there was an error"
	}

The biggest disadvantage of this approach is the additional administrative burden of setting up and maintaining one (or more) VMs/containers to use as self-hosted build agents.

Option 2

If you would prefer to avoid managing infrastructure, another option is to run your deployment scripts in an Azure Automation Account. Your Pipeline (back to running in a Microsoft-hosted agent) starts an Azure Automation Runbook to kick off the deployment. The disadvantage of this approach is that all of your deployment scripts must either be staged to the Automation Account as modules or converted into “child” runbooks to be executed by the “bootstrapper” runbook. Also, keep in mind that the bootstrapper runbook must take the same preventative action of capturing output from any child scripts or runbooks to prevent potentially sensitive information from flowing back to the Pipeline.

Sample code of calling a runbook:

$resourceGroupName = "automation"
$automationAccountName = "dwl-aaa"
$runbookName = "Deployment-Bootstrapper"
$job = Start-AzAutomationRunbook -AutomationAccountName $automationAccountName -ResourceGroupName $resourceGroupName -Name $runbookName -MaxWaitSeconds 120 -ErrorAction Stop
$doLoop = $true
While ($doLoop) {
    Start-Sleep -s 5
    $job = Get-AzAutomationJob -ResourceGroupName $resourceGroupName -AutomationAccountName $automationAccountName -Id $job.JobId
    $status = $job.Status
    $doLoop = (($status -ne "Completed") -and ($status -ne "Failed") -and ($status -ne "Suspended") -and ($status -ne "Stopped"))
}
if ($status -eq "Failed") {
    Write-Error "Job Failed"
}

The Deployment script code running as an Azure Automation Runbook (Note that this has been converted to Azure PowerShell as the AzureCLI isn’t supported in an Automation Account Runbook):

$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint

add-type -AssemblyName System.Web
$rgName = "AisDwlBlog-rg"
$kvName = "AisDwlBlog-kv"
$pw = [System.Web.Security.Membership]::GeneratePassword(16, 4)

$rg = Get-AzResourceGroup -Name $rgName -ErrorAction SilentlyContinue
if ($rg -eq $null) {
	$rg = New-AzResourceGroup -Name $rgName -Location EastUs
}

$kv = Get-AzKeyVault -VaultName $kvName -ResourceGroupName $rgName
if ($kv -eq $null) {
	$kv = New-AzKeyVault -Name $kvName -ResourceGroupName $rgName -Location EastUs
}
Set-AzKeyVaultAccessPolicy -VaultName $kvName -PermissionsToSecrets list,get,set -ServicePrincipalName $Conn.ApplicationID

$securePw = ConvertTo-SecureString -String $pw -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName $kvName -Name "VMPassword" -SecretValue $securePw

Why Stream?

If you don’t have school-aged children, you might not know that live streaming is the hot thing at the moment. Imagine telling your younger self that you could not only get paid for playing video games but that people would enjoy watching you play games. Honestly, I was born in the wrong era.

Traditionally, the way developers communicated their craft was through the written word; we already type thousands of lines a day, have a WPM of 90, and write extensive documentation. Writing instructional text is a natural progression.

But some ideas are best communicated visually and personally – this is why developers go to conferences, sit in crowded rooms, and stare at a wall of text. However, in the age of COVID-19, in-person meetings have been put on hold indefinitely – so the next best way to recreate the experience is live streaming.

This may sound daunting; most people have a fear of public speaking, and many developers are introverts, compounding the fear. If you’re terrified of speaking to a dozen people, imagine potentially thousands of people watching and listening to you online! But the best way to overcome fear is through familiarity.

Livestreaming is no more difficult than sharing your screen in Microsoft Teams and talking your coworkers through your code. Sure, you could write a multi-page email or extensive documentation, but that lacks immediacy and leaves your message open to misinterpretation. A live stream, with a live audience responding and asking questions, is a great way to communicate ideas.

Getting Started


This is the simplest but sometimes the hardest part – coming up with something to discuss. In your daily grind, it’s easy to talk about what you’re working on with your coworkers, but once you clock out, it can be hard to come up with a topic. This is the curse of every communicator, and it’s up to you to figure out your muse. But if you can figure out a way to talk about what you’re working on for work without giving up any company secrets, then that’s a great place to start.


Now that you have an idea of what you want to talk about, you have to figure out who you want to talk to! Who’s your audience? It’s simple in a work setting: your office roommate, your pair programming buddy, and your team members. But outside of the 9-5, it can be difficult to find an audience. This is a topic in and of itself, but a great first step is to find dev streamers that you like, then follow in their footsteps and appeal to their kind of audience. Over time, you’ll develop your niche.


Now that you know what you want to talk about, and who you want to talk to, it’s time to figure out how to bring it together. Each area will be broken down in greater detail in the following section.

  • Computer hardware (of course)
  • Broadcasting software
  • Audio/Video equipment
  • Streaming services
  • Miscellaneous gear

Computer Hardware

Since we’re specifically talking about live streaming software development-related topics, it follows that having a computer is required. You’ll want a decently capable computer with a moderate CPU, RAM, and GPU. Essentially, if it’s a decent gaming system, then it’ll be a decent devstreaming system. A good starting point would be a Ryzen 3600, 16GB RAM, 2 or more monitors, and an Nvidia RTX GPU*.

* The GTX 1650 Super is also a good choice because it is built on the same Turing architecture as the RTX cards and encodes just as well as the RTX 2080 Ti, so don’t overspend. The other important factor is the amount of GPU memory, which affects the number of monitors you can connect.

Broadcasting Software

I’ve only experimented with a couple of apps, so this isn’t an exhaustive list – do your research as to which one suits you best.

Open Broadcaster Software (OBS) Studio

It’s free, has a lot of features, is cross-platform, and is very popular. This is an excellent app for getting started with streaming and is pretty straightforward. It has a thriving plugin ecosystem. There’s also an alternative version of OBS called StreamLabs OBS (or SLOBS), which streamlines the OBS interface and provides additional functionality.


vMix

vMix is not free, but it has many professional features, which makes it well suited for both professional broadcasts and personal live streaming. It’s got a steeper learning curve than OBS but is much more flexible and customizable. You can try it out free for 60 days, and after that, the basic HD version is fine for live streaming and costs $60.

How to Choose?

I’d suggest starting with OBS to get acquainted with live streaming, and it’s sufficient for screen sharing and integrating a few cameras. But once you start trying more advanced tricks that are difficult to do in OBS, look into vMix (or other paid apps).

Audio/Video Equipment

At a minimum, you’ll need an audio input – there’s no point in live streaming without one! Hopefully, now that we’ve all become remote workers, your company has provided you with a headset; that’s all you need to get started. The important part is that your voice is clear and intelligible, so if your coworkers can understand you during your daily standups, you’re good to go.

A camera is not strictly necessary, but recommended – you want your audience to connect with you quickly, and your face on the stream facilitates that. An example would be giving a presentation in a room full of people; they’re focused on you just as much as your content. A webcam can fulfill both video and audio needs, but the webcam microphone will probably pick up a lot of ambient or background noise, so be sure to test all your audio sources to figure out which sounds the best.

If you have a camera, the next most important thing is lighting – make sure you have some strong neutral ambient light sources. If you don’t have a nearby window or are broadcasting at night, you may have to buy some key lights.

As your live streaming journey progresses, you’ll discover a whole new world of AV gear, but for now, stick to the basics before you start chasing that dragon. A headset and a webcam are sufficient to begin.

Streaming Services

Up to this point, we’ve addressed everything required to produce a live stream – now we need a service to receive and distribute it.

  • Twitch.TV – The current popular service for live streamers of all kinds, with good audience interactivity. The downside is that video and audio quality (bitrate) is limited.
  • YouTube – The gold standard of internet video distribution. There are no limitations on bitrate here; they’ll take as high a quality as you can provide. A great place to organize and store older streams. Live interaction with your audience is not as good as on other services.
  • Facebook – Very easy to get started, with good interactivity with your audience, but it can be hard to find an audience outside your immediate circle of friends. Limited bitrate dramatically affects video and audio quality.
  • Restream.IO – Restream is a site that receives your stream feed and simultaneously retransmits it to other services, like FB, YT, Twitch, etc. The free tier can handle up to 3 output streams—a great way to increase your reach for free.

Collaboration Apps

Everything that applies to streaming also applies to screen-sharing or teleconference apps! You can stream into applications like Microsoft Teams, Zoom, Slack, and Discord by either dedicating an entire display to the broadcast output and then screen sharing, or adapting the broadcast output as a webcam.

Miscellaneous Gear

These are optional tools that I’ve found useful for improving the ease and quality of a live stream.

HDMI Dummy Plugs

HDMI Dummy Plug Shot
These little HDMI plugs trick your computer into thinking a real display is connected, which you can then use as a virtual display for screen captures. Sometimes two screens aren’t enough to produce a live stream, so I plug one of these in, set the screen capture to the fake display, and composite my applications.

Elgato Stream Deck

These are great little devices that can be used to control your broadcasting software like OBS or vMix, execute macros, launch apps, etc. Each button is an LCD and can provide visual feedback to the state of your stream. Here’s my layout for when I’m streaming with vMix:

First row: CPU/GPU usage & temps
Second row: vMix inputs
Third row: vMix overlays, cough (hold to mute), Fade To Black (cuts feed)

Elgato Stream Deck

Webaround Green Screen

Green Screen Web
The downside of putting yourself on camera is that you also have to put your background on camera – unless you get a green screen. These green screens by WebAround attach around your chair and provide the utility of a green screen without the messy setup.

Then you go into your broadcasting software, set the chroma key, and now you have a clean image of yourself that you can overlay on top of the screen share.
Example of Green Screen

Producing a Live Stream

Giving a successful presentation is outside the scope of this article, but you can check out a blog from my co-worker, Andrea Pinillos, on top considerations when presenting at a conference. Below are some practical steps to follow when live streaming:

  1. Clean up your screens, and remove or hide anything that might be a distraction. As a viewer, seeing dozens of unorganized icons on the desktop brings me almost physical pain. Close all apps not needed for streaming or development. Prevent desktop notifications from popping up during the stream.
  2. If you’re displaying a browser window, have a clean one prepared ahead of time with no bookmarks or tabs.
  3. Test your broadcasting software ahead of the scheduled broadcast and give yourself enough time to fix any technical issues.
  4. If you have a smaller audience, take some time at the beginning of the live stream for small talk and connect.
  5. Switch things up occasionally (visually and aurally), to keep the audience engaged.
  6. Be mindful of your audience’s time. Avoid doing things that take a long time unless you have content to fill that gap. If possible, just like on a baking show, have a precooked result ready to go.
  7. When ending the stream, wrap it up coherently, summarizing everything discussed, and the main points you wanted to convey.
  8. Solicit feedback and adjust accordingly.

Do’s and Don’ts

Lastly, here are some tips as you begin your live streaming journey:

DO: Show your face.
DON’T: Be a slob – check your appearance before you show your face.

DO: Be prepared and have an outline.
DON’T: Have any “dead air” where you’re waiting on something and have nothing exciting going on – have a precooked result and skip right to it.

DO: Place the camera above your screen so that you’re naturally looking at it while developing.
DON’T: Have a distracting background behind you – tidy it up or use a green screen to hide it.

DO: Know how to cut the audio and/or video feed if you need to cough or handle an interruption.
DON’T: Multitask – stay focused on the subject matter.

DO: Show your mistakes – nobody’s perfect, and showing how to recover from a mistake is also helpful.
DON’T: Trash other languages, frameworks, etc. Everybody has their own opinions and favorites. In general, avoid negativity.

DO: Have a drink handy – you’ll be talking a lot.
DON’T: Talk endlessly – give yourself and your audience breaks, at least once an hour.


So why stream? Because you’re most of the way there and you didn’t even know it.

You already have a good foundation of experience for it because you’ve been working from home for a few months. It doesn’t require a lot of specialized hardware, and your employer has most likely already provided you with the minimum (a headset and/or webcam). All the software and services you need to get started are free. There are no technical or financial hurdles to start devstreaming.

But the real benefit of devstreaming is personal growth. By attempting to explain and teach a topic to others, you’ll learn it much more thoroughly – you end up teaching yourself. Every benefit that applies to public speaking also applies to streaming: you’ll learn to overcome stage fright, be more organized in your thought process and speech, and get better at communication – always a plus at work!

Coding is relatively easy – clearly conveying ideas is hard. But it’s a skill that you can master with practice – so get streaming!

Cosmos DB is a cloud-native, geo-distributed, highly scalable, schema-less database provided by Microsoft Azure. Cosmos DB achieves cloud-scale horizontal scalability through its partitioning scheme. Choosing how to partition data stored in Cosmos DB can be a tough decision, and it directly impacts how well Cosmos DB will work as a solution. To understand this better, let’s discuss two ideal usage patterns for Cosmos DB: the first is storing user information, and the second is storing device telemetry in an internet-of-things scenario.

User Information Use Case

How to store user information in Cosmos DB

Cosmos DB is an excellent solution for storing user information. A few examples would be saving a user’s shopping cart or storing a user’s wishlist. For either of these scenarios, we can create a single schema-less document in Cosmos DB. That single document would contain the contents of their shopping cart or all the items on their wishlist. When determining the partition key for the Cosmos DB collection, we would choose the identity of the user.

Now, when integrating Cosmos DB with the website, the user will have logged in first. Once logged in, we have the user’s identity. Anytime we need to display the shopping cart or wishlist, it is a single document read to a well-known partition, the fastest and cheapest operation in Cosmos DB. Similarly, updates are straightforward: a single document write to a well-known partition. If users wished to share their wishlist, they would typically do that with a link, which could embed the user identity.
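The access pattern can be sketched with a simple in-memory model. This is only an illustration of partition-keyed point reads and writes; the helper names and document shape are made up, and a real application would use a Cosmos DB SDK:

```python
# Illustrative in-memory model of partition-keyed point reads/writes.
# The document shape and helpers are hypothetical; the access pattern
# mirrors what a Cosmos DB SDK would do against a real container.

store = {}  # maps (partition_key, document_id) -> document

def write_document(partition_key, doc):
    # A point write: lands in exactly one logical partition.
    store[(partition_key, doc["id"])] = doc

def read_document(partition_key, document_id):
    # A point read: partition key + id pinpoint a single document.
    return store.get((partition_key, document_id))

# The user's identity is the partition key; the whole cart is one document.
write_document("user-42", {
    "id": "cart",
    "items": [{"sku": "book-123", "qty": 1}, {"sku": "pen-7", "qty": 3}],
})

cart = read_document("user-42", "cart")
print(len(cart["items"]))  # 2
```

Because every read and write names both the partition key and the document id, the operation never fans out across partitions, which is what keeps it fast and cheap.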

Advantages of storing user information in Cosmos DB

Now, let’s discuss some of the advantages of using Cosmos DB for this scenario. In the scenario above, we are always performing single document reads or single document writes. Both operations have service level agreements (SLAs) provided by the Cosmos DB service: document reads and writes are guaranteed to finish in 10 milliseconds or less. So, by using Cosmos DB, we have guaranteed performance.

Also, when building a website storing user information, we need to design for high-load events. Cosmos DB achieves its scalability by creating additional physical nodes as requested and mapping logical partitions onto those physical nodes. In our case, a logical partition is simply the identity of a single user. So, if we max out our scaling potential, Cosmos DB would create a separate physical node for every individual user of our system. A single RU (request unit) is defined as the cost of reading a 2 KB document. Writing a document typically consumes 2-3 times the RUs of a document read (depending on the consistency and indexing settings). A single Cosmos DB physical node can support up to 10,000 RUs per second. Based on this, here are some example workloads Cosmos DB could support if you maxed out the scaling so that each user ended up with their own physical node.
Maximum Scalability for each user
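A quick back-of-the-envelope version of that arithmetic, using only the figures quoted above (the 2 KB read cost, the 2-3x write multiplier, and 10,000 RU/s per node; these are the article’s numbers, not an official formula):

```python
# Per-node throughput estimate using the figures from the text.
NODE_RU_PER_SEC = 10_000   # max RUs a single physical node can serve
READ_COST_RU = 1           # reading one ~2 KB document costs 1 RU
WRITE_COST_RU = 3          # writes cost roughly 2-3x a read; take the high end

max_reads_per_sec = NODE_RU_PER_SEC // READ_COST_RU
max_writes_per_sec = NODE_RU_PER_SEC // WRITE_COST_RU

print(max_reads_per_sec)   # 10000
print(max_writes_per_sec)  # 3333
```

With one physical node per user, every user could read their cart 10,000 times per second, which is far beyond what any single shopper would ever generate.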

Hopefully, you can see that Cosmos DB is highly scalable when you choose the right partition strategy. With Cosmos DB, we also get the ability to easily geo-replicate our entire database. You only need to enable the desired Azure regions, and Cosmos DB automatically handles replicating the data and keeping it consistent. For our scenario, this provides additional benefits because we can deploy our web service on each continent. Each instance of the web service could then automatically locate the closest region of our Cosmos database to ensure high performance. So, we can achieve high performance while also guaranteeing high availability, yet still manage only a single Cosmos database.

Finally, when provisioning Cosmos DB, you pay for a given number of RUs (request units) per second. The provisioned RUs can be adjusted as needed to either increase scalability or lower cost. So, we can scale Cosmos DB as necessary for our user information scenario and ensure we only pay for the resources we need at any given time.

Disadvantages of storing user information in Cosmos DB

Cosmos DB provides the ability to query documents using a SQL-like syntax. Such a query can target either a single partition or all partitions. In general, queries do NOT come with a response time SLA (service level agreement), and they typically cost many more RUs (request units). The advantage of storing user-level information, especially for a web or e-commerce platform, is that the application never needs to perform these types of queries: you always show users their own information using their own user identity. Hopefully, I’ve explained how this use case avoids one of the problem areas of Cosmos DB.

Device Telemetry Use Case

How to store device telemetry in Cosmos DB

Cosmos DB is an excellent solution for storing device telemetry. Imagine a manufacturing center with several hardware devices that generate telemetry as they are used. In this case, each piece of generated telemetry could be a separate document in Cosmos DB. Because Cosmos DB is schema-less, each piece of telemetry could have an entirely different schema from any other piece, yet they can all be stored in the same document collection. For this solution, the ideal partition strategy would be to use the unique identifier of each device. With this storage solution, each device can stream telemetry to Cosmos DB, and a dashboard can show the telemetry received from each device.

Advantages of storing device telemetry in Cosmos DB

The advantages of using Cosmos DB for this scenario are very similar to those of the user information scenario. Specifically, the guaranteed latency SLA (service level agreement) of 10 milliseconds or less is very useful here to ensure the device telemetry is received promptly. Provisioning the required request units ahead of time and scaling them elastically up and down is also very useful for this scenario. For device telemetry, it is likely that not all telemetry needs to be stored forever. Cosmos DB nicely supports this by allowing each document to have a time-to-live (TTL) property, which indicates how many seconds the document should remain in the system after its last update. This feature would work nicely to ensure that the system retains only the last week or the last 30 days of telemetry. Finally, the advantage of schema-less storage is incredibly useful for this scenario as well.
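As a sketch, here is what such schema-less telemetry documents might look like, with the device id as the partition key and a 30-day TTL. The field names and helper are invented for illustration; only the per-document `ttl` property itself is a real Cosmos DB feature:

```python
import uuid

# Sketch of schema-less telemetry documents sharing one collection.
# The "ttl" property (seconds since last update) is a real Cosmos DB
# document property; the field names here are made up for illustration.

THIRTY_DAYS = 30 * 24 * 60 * 60  # 2,592,000 seconds

def make_telemetry_doc(device_id, payload):
    doc = {
        "id": str(uuid.uuid4()),
        "deviceId": device_id,  # partition key for this collection
        "ttl": THIRTY_DAYS,     # auto-expire after 30 days without updates
    }
    doc.update(payload)         # each device contributes its own fields
    return doc

# Two devices with entirely different schemas, same collection.
a = make_telemetry_doc("press-01", {"strokesPerMin": 42, "oilTempC": 61.5})
b = make_telemetry_doc("cnc-07", {"spindleRpm": 12000, "alarmCodes": []})

print(a["ttl"])  # 2592000
```

Nothing in the application ever has to delete old telemetry; once the TTL elapses with no update, Cosmos DB removes the document on its own.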

Disadvantages of storing device telemetry in Cosmos DB

Like the user information scenario, the primary disadvantage of Cosmos DB for this solution is querying documents, especially querying across devices. Instead of querying telemetry directly from Cosmos DB, one could use either the analytical data storage feature of Cosmos DB or the change feed. The analytical data storage feature replicates the data from Cosmos DB into a separate column-based data storage system using Parquet files. Once the data is replicated, it can be connected directly to Azure Synapse and queried using either Apache Spark or a massively parallel SQL engine. Both query compute engines are designed for big data scenarios and have no effect on the RUs (request units) required to manipulate documents in the Cosmos DB transactional store. I’ll discuss the Cosmos change feed in the next section.

Event sourcing and Cosmos change feed

Cosmos DB provides a built-in change history feature known as the change feed. The Cosmos DB change feed tracks changes to the documents in a collection over time. Any client can query the change feed at any moment and from any given historical point in the collection. Each client can then track their position in the change feed to know if new document changes have arrived in the system.

The change feed is incredibly useful for replicating data from Cosmos DB into another storage system as needed, in near real-time. Specifically, as mentioned, complex querying across documents is not ideal in Cosmos DB, but with the change feed, you can easily replicate the data to any other third-party data platform.

However, the most compelling use case for the change feed is pairing it with event sourcing. For both of our use cases, instead of storing the current state of the data (e.g., the current contents of the user’s shopping cart or wishlist), save each action the user performed as a document (e.g., added an item to the wishlist, removed an item from the wishlist). The same applies to the device telemetry scenario. One could then use the change feed to process the system’s events in near real-time, calculate the desired aggregate (e.g., the user’s current wishlist), and store it in Cosmos DB as an aggregate document or in another data storage system. By storing events and processing them in order using the change feed, you can enable and trigger other scenarios, such as answering whether an ad campaign resulted in users adding an item to their shopping cart or wishlist, and how quickly that occurred after they interacted with the campaign. Cosmos DB specifically enables this because of its schema-less support: each event can be stored as a document with a different schema, yet all the events can be stored in a single collection.
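The event-sourcing idea can be sketched in a few lines: store actions as event documents, then fold them in change-feed order into the current aggregate. The event shapes below are illustrative, not a Cosmos DB API:

```python
# Sketch of event sourcing over a change feed: each user action is its
# own document, and the current wishlist is computed by folding the
# events in order. Event names and shapes are made up for illustration.

events = [
    {"user": "user-42", "type": "added",   "sku": "book-123"},
    {"user": "user-42", "type": "added",   "sku": "pen-7"},
    {"user": "user-42", "type": "removed", "sku": "book-123"},
    {"user": "user-42", "type": "added",   "sku": "mug-9"},
]

def fold_wishlist(events):
    wishlist = []
    for e in events:  # the change feed preserves per-partition order
        if e["type"] == "added":
            wishlist.append(e["sku"])
        elif e["type"] == "removed":
            wishlist.remove(e["sku"])
    return wishlist

print(fold_wishlist(events))  # ['pen-7', 'mug-9']
```

The aggregate document a change-feed processor would write back to Cosmos DB is just the result of this fold, recomputed (or incrementally updated) as new events arrive.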


Finally, here is a summary of the advantages of storing user information and device telemetry in Cosmos DB.

  • Partition Strategy is very clear (user identity or device id)
  • Only uses document reads and document writes (most efficient operations in Cosmos DB)
  • Guaranteed SLAs on latency
  • Easy to predict RU (request unit) usage
  • Support for complex schema-less documents
  • Elastic scalability (provisioned request units per second) can be adjusted as needed. The partitioning strategy will result in maximum scalability of a single physical node for a given user or device
  • Ability to scale Cosmos DB down to save costs or up to deal with high load
  • Automatic geo-replication and use of the nearest region to reduce round-trip latency
  • Ability to auto-trim device telemetry by utilizing the time-to-live (TTL) feature on a document
  • Ability to perform event sourcing and near real-time reactions to data changes using the Change Feed feature of Cosmos DB

Hopefully, this article highlighted some of the ideal usage patterns for Cosmos DB and helped you better understand how to utilize this cloud-scale, elastic, geo-replicated schema-less database.

Web development is arguably the most popular area of software development right now. Software developers can make snappy, eye-catching websites and build robust APIs. I’ve recently developed a specific interest in a less discussed facet of web development: web scraping.

Web scraping is the process of programmatically analyzing a website’s Document Object Model (DOM) to extract specific data of interest. Web scraping is a powerful tool for automating tasks such as filling out a form, submitting data, etc. Whether some of these tasks are possible will depend on whether the site allows web scraping. One thing to keep in mind is that some websites use cookies/session state, so some automation tasks may need to work within the site’s cookie/session handling. It should go without saying, but please be a good Samaritan when web scraping, since it can negatively impact site performance.
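One small way to be that good Samaritan is to honor a site’s robots.txt before scraping. A minimal sketch using Python’s standard library (the rules below are a made-up example parsed locally; a real scraper would fetch the site’s actual robots.txt):

```python
from urllib.robotparser import RobotFileParser

# Check whether a path is allowed before scraping it. The rules here are
# a hypothetical example; normally you'd call rp.set_url(...) and
# rp.read() to fetch the site's real robots.txt.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Allow: /",
])

print(rp.can_fetch("*", "https://example.com/covid-data"))  # True
print(rp.can_fetch("*", "https://example.com/private/x"))   # False
```

Skipping disallowed paths (and throttling request rates) keeps your scraper from becoming a burden on the site you’re reading.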

Getting Started

Let’s get started with building a web scraper in an Azure Function! For this example, I am using an HTTP Trigger Azure Function written in C#. However, you can have your Azure Function utilize a completely different trigger type, and your web scraper can be written in other languages if preferred.

Here is a list of Azure resources that were created for this demo:

Azure Resources

Before we start writing code, we need to take care of a few more things first.

Let’s first select a website to scrape data from. I feel that the CDC’s COVID-19 site is an excellent option for this demo. Next, we need to pick out what data to fetch from the website. I plan to fetch the total number of USA cases, new USA cases, and the date that the data was last updated.

Now that we have that out of the way, we need to bring in the dependencies for this solution. Luckily, there is only one dependency to install: the NuGet package HtmlAgilityPack. Once that package has been installed into our solution, we can start coding.

Coding the Web Scraper

Since the web scraper component will be pulling in multiple sets of data, it is good to capture them inside a custom resource model. Here is a snapshot of the resource model that will be used for the web scraper.

Resource Model

Now it’s time to start coding the web scraper class. This class will utilize a few components from the HtmlAgilityPack package that was brought into the project earlier.

Web Scraper Class

The web scraper class has a couple of class-level fields, one public method, and a few private methods. The method “GetCovidStats” performs a few simple tasks to get our data from the website. The first step is setting up an HTML document object, which will be used to load and parse the HTML we get back from the site. Then, there is an HTTP call out to the website we want to hit.

Right after that, we ensure the call to the website results in a success status code. If not, an exception is thrown with details of the failing network call.

Ensure the Call Out

We then load the HTML we received from the network call into our HTML document object. Several calls follow to a method that extracts the data we are looking for. Now, you might be wondering what those long strings in the method calls are: they are the full XPaths of each targeted HTML element. You can obtain them by opening the dev tools in your browser, selecting the HTML element, right-clicking it in the dev tools, and selecting “Copy full XPath”.

Load the HTML
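The same extraction idea can be sketched in Python (the blog’s scraper itself is C# with HtmlAgilityPack). The snippet below is a made-up stand-in for the page; real-world HTML usually needs an HTML-aware parser, with the full XPaths copied from the browser’s dev tools:

```python
import xml.etree.ElementTree as ET

# Hypothetical, well-formed stand-in for the page we want to scrape.
html = """
<html><body>
  <div id="stats">
    <span class="total">1,234,567</span>
    <span class="new">4,321</span>
    <span class="updated">June 1, 2020</span>
  </div>
</body></html>
"""

root = ET.fromstring(html)
# ElementTree supports a limited XPath subset for element selection.
total = root.find(".//div[@id='stats']/span[@class='total']").text
new = root.find(".//div[@id='stats']/span[@class='new']").text

print(total)  # 1,234,567
print(new)    # 4,321
```

The idea is the same as in the C# code: load the markup into a document object, then pull out individual elements by path and read their text.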

Next, we need to set up the endpoint class for our Azure Function. Luckily for us, the out-of-the-box template sets up a few things automatically. In the endpoint class, we merely call our web scraper class and return its results to the client calling the Azure Function.

Endpoint Class for Azure Function

Now it’s time to test out the Azure Function! I used Postman for this, and these are the results.

Test out the Azure Function

Closing Thoughts

Overall, web scraping can be a powerful tool at your disposal if sites do not offer APIs for you to consume. It allows you to swiftly grab essential data off of a site and even automate specific tasks in the browser. With great power comes great responsibility, so please use these tools with care!

Bridging the gap between serverless OLTP and Analytics

Cosmos DB is unabashedly a document-based Online Transactional Processing (OLTP) system. It was engineered at its core to provide low-latency, high-throughput transactions, with service level agreements (SLAs), consistency models, and guarantees to back it. It does this exceptionally well, but there are trade-offs for these architectural decisions. Things other databases can do, like complex queries with joins and aggregates or set-based operations, Cosmos DB by design either has no native solution for or makes resource-prohibitive. Capabilities in Cosmos DB continue to evolve, but at a certain point, another solution better suited to these challenges is required. One of these gaps has now been bridged with Azure Synapse Link for Cosmos DB.

Azure Synapse is Microsoft’s consolidated data analytics platform that brings together data ingestion, transformation, machine learning (ML) training, testing, management, security, monitoring, and visualization all in one place. With Synapse Link, Cosmos DB data can participate in this eco-system.

Under the hood, Cosmos DB data is replicated from its row-based index store to a column-based index store that sits on top of Azure Data Lake. These files are stored in a read-optimized format, and the process is fully managed, enabled by a checkbox and a few options. Because the data is replicated, there is no impact on transactional workloads against Cosmos DB, but there is a delay: replication can currently take up to 5 minutes, though in practice it is much faster.

CosmosDb OLTP

The analytical storage is decoupled from the analytical compute systems, so as other compute options become available, the data doesn’t need to be replicated again. This also allows for multi-use scenarios like Apache Spark structured streaming or traditional data warehousing. Azure Synapse also provides a serverless SQL compute engine that can read the replicated data.

Previously, these features were possible but required the use of Cosmos DB change feed or direct queries to move the data to another store. With Azure Synapse Link for Azure Cosmos DB, analytics has gone serverless and cloud-native!

AIS is now one of the few partners that can offer your organization cloud licensing across Azure Commercial, Azure Government Cloud, M365, M365 Government Community Cloud (GCC), and M365 GCC High. As a Cloud Solution Partner (CSP) and long-term Microsoft Partner, our Microsoft certified cloud engineers can support purchasing and managing licenses, migrating/deploying cloud infrastructure, modernizing applications, and managing cloud environments – including cost optimizations, security, and compliance.

AIS holds 10 Microsoft Gold Competencies, over 25 years of experience in Microsoft technologies, and more than a decade of cloud expertise.

What is the Microsoft AOS-G Program?

Through Microsoft’s AOS-G program (Agreement for Online Services for Government), AIS is now an authorized reseller of Microsoft 365 GCC High licenses. AIS can now sell licenses directly to clients who need to meet Federal Risk and Authorization Management Program (FedRAMP), DFARS, CMMC, NIST 800-53 and 800-171, GDPR, CJIS, and other compliance requirements. We are your one-stop shop for all your Microsoft cloud needs, from Commercial to GCC and GCC High, including licensing, management, security, app development, migrations, and more.

Connect with Us

We aren’t just a vendor; we’re your cloud advocate and partner in transformation. ​Learn more about our Cloud Reseller Services and contact us today.

How do you know you’re ready to present at a conference? You may have a good understanding of how something works, or you can build and explain how you reached a specific solution. I’m writing this article not because I am a presentation expert, but because I presented once at a large conference (and have seen a fair share of live presentations). If you follow the path that I took, presentations don’t have to be scary, but an opportunity for you to present your work to your peers!

Background Info

I presented at the Power Platform World Tour in DC. My manager suggested I put together a few ideas for a proposal, and he would help guide me in the right direction. I submitted three different proposals, and the one below was accepted!

Screenshot of Breakout Session

Pre-Presentation Day

  1. Prepare in advance – This blog is for a 1-hour presentation
    • About one month before presentation day, start building out a slide deck
    • Don’t include YouTube videos; the audio will probably fail or slow you down
    • Try a screen recorder for videos instead!
    • Add more images than text
  2. Practice going through your slides on your own a few times
  3. Record yourself presenting and play it back
    • Time yourself to ensure you don’t go over 30 minutes
  4. Build a demo – this should bring your slide deck to life
    • It can be a live walk-through or pre-recorded
    • Go through your demo alone to expose any bugs or technical difficulties you run into
  5. Demo to some colleagues/friends
    • A demo is a safe place to get input and ideas and run into more technical challenges to troubleshoot later

At this point, you should be ready to present.

To Be Safe!

  1. Do a dry run of your slide deck and demo the day before – this will give you peace of mind
    • Fix any last-minute bugs
    • Ensure you are within the allocated time
  2. Set up your demo material and prepare for your presentation
  3. Have a backup demo prepared
    • If all else fails and things start to break in the middle of your demo, have a backup plan: a demo that already works
  4. Have a few ideas prepared if your presentation runs fast
    • You could allocate time at the end for questions, but you also run the risk of there being no questions, so have something in mind to present or demo if you run 10-15 minutes short
  5. Bring an extra adapter for your laptop and plenty of business cards
  6. Optional: purchase a presentation slides clicker

Day of Presentation

If you’ve followed the pre-presentation day steps, you should feel prepared. If you still feel nervous/anxious, imagine that you are a member of the audience. The audience does not know that you are worried. As an audience member, you walk into a presentation, not knowing much about the topic or the presenter (for the most part). The audience has come prepared to listen and isn’t focused on how nervous the presenter might be. Loosen up, make sure to have fun, and remember your practice!

Don’t skip meals, and drink plenty of water.

  1. Arrive at your session 10-15 minutes before your allocated start time
    • Connect your machine to the display board
    • Set up your microphone
    • Pull your slides up on your laptop.
    • Have your demo ready in the background
  2. Go through your presentation, just like you practiced.
    • Don’t forget that you are the subject matter expert in the room. Your presentation is what they have come to listen to. Be confident.
  3. Pay attention to the time.
  4. Thank everyone for coming, provide QR codes on the display screen for your social media, and remind attendees to take your business card

Presenting at a Conference

Final Thoughts

You’ve completed your first conference speaker gig! Even if it didn’t go as planned, presenting in front of a crowd is an accomplishment, and you should be proud. Try not to be too hard on yourself, and learn from your experience. Most importantly, don’t shy away from the next one. It can only go up from here.