Rapidly deploy Microsoft’s Return to the Workplace solution and leverage the underlying technology for future growth

Today, Microsoft announced the release of the Return to Work, Return to Growth strategy in a white paper developed jointly with AIS. We’re also releasing our Return to Work implementation service via Microsoft AppSource, and we’re excited to begin working with large enterprise organizations as they roll out Microsoft’s Return to the Workplace solution and then use the Power Platform to empower citizen developers, build enterprise solutions, and leverage pre-built functionality.

Organizations around the globe have turned their attention to the task of returning to work even as virtual work continues to blossom. We find these large enterprise organizations asking questions in two key areas when it comes to the future of work and its organizational and economic impacts.

How do we return to work safely? Rapidly deploy Microsoft technologies that support our safe return to the workplace and adopt the underlying technology with future growth in mind.

How do we then return to growth? Leverage today’s investment so that we may return to growth and prosperity in the months and years to come. This is a once-in-a-generation opportunity.

We’re thrilled to have now fundamentally written the book on how to leverage large enterprise organizations’ return to work technology investment for return to growth. Our methods, patterns, and technical tooling will help these organizations get the most from Power Platform.

I hope that you find the white paper and the solution itself to be useful, and look forward to perhaps working with you to launch your return to work, return to growth strategy with Power Platform.

RETURN TO WORK, RETURN TO GROWTH WHITEPAPER
Rapidly deploy Microsoft’s Return to the Workplace solution and leverage the underlying technology for future growth.

Introduction

Azure Bicep is a new declarative Domain Specific Language (DSL) for provisioning Azure resources. The purpose of Azure Bicep is to simplify the authoring experience with a cleaner syntax and the potential for more code re-use. The new language aims to make Infrastructure as Code (IaC) easier to write for the developers and DevOps engineers who typically author ARM templates targeting Azure Resource Manager (ARM), with a syntax that is more manageable than the JSON syntax of Azure ARM Templates.

Azure Bicep works as an abstraction layer built over ARM and ARM Templates. All resource types, apiVersions, and properties valid in an ARM template are equally valid in Azure Bicep. Anything we can do with Azure ARM Templates, we can do with Azure Bicep, as it provides a “transparent abstraction” over ARM (Azure Resource Manager).

Azure Bicep introduces a new file type referred to as a ‘bicep file.’ This file contains code written in the new Bicep language and has a file extension of ‘.bicep’. Bicep compiles the bicep file to standard ARM Template JSON files; the ARM JSON is effectively treated as an Intermediate Language (IL).

Azure Bicep Language

Figure 1: Bicep Structure – Image Source: Intro to Project Bicep (ARM DSL)

Why Azure Bicep?

ARM templates are written in a modified JSON format that can be complicated to write, especially for cloud engineers that are new to working in Azure. Azure Bicep is a Domain Specific Language (DSL) that offers a simplified authoring experience by employing a “transparent abstraction” over ARM templates. The result is cleaner code syntax with better support for modularity and code re-use.

Azure Bicep Goals

Primary goals of the Bicep language, as described on the project site (https://github.com/Azure/bicep), are:

  1. Azure Bicep code should be easily understood and straightforward to learn for those both new to and experienced with other programming languages.
  2. The Azure Bicep language is a transparent abstraction that does not require any updates or onboarding to the underlying platform to support a new Azure resource type and/or apiVersion.
  3. Create a better language for writing Infrastructure as Code (IaC) to describe, validate, and deploy Azure resources.
  4. Code re-use should be a primary feature allowing users the freedom to modularize and re-use code without ‘copy/paste’.
  5. Azure Bicep should enable users to have high confidence that the code is ‘syntactically valid’ before it’s deployed.
  6. Tooling should provide a high level of resource discoverability and validation and should be developed alongside the compiler rather than added at the end.

Azure Bicep Limitations

Azure Bicep is a new project, and some known limitations should be kept in mind at this time:

  1. No support for the copy or condition property.
  2. No explicit support for deployments across scopes. Bicep assumes we are deploying to a resource group, though the generated template can be deployed to any scope.
  3. Single-line objects and arrays (i.e. ['a', 'b', 'c']) are not yet supported.
  4. We still need to deploy the compiled template ourselves using the PowerShell Az deployment cmdlets or the az cli deployment commands.
  5. Minimal resource schema validation. Other than basic validations like correct resource type structure and requiring a name, we will not get errors for missing or incorrect properties in a resource declaration.
  6. No support for string interpolation in property names.

Install Azure Bicep

The critical component for Azure Bicep is the Bicep CLI. This is the required tool used for compiling Bicep code into ARM JSON. It’s open-source and cross-platform.
The Azure Bicep project also has a Visual Studio Code extension for Azure Bicep. This extension adds Azure Bicep and .bicep file support to Visual Studio Code and enhances the experience when authoring .bicep files. The extension is developed alongside the Azure Bicep project, since the tooling is built simultaneously with the Azure Bicep compiler.

Please refer to this section for detailed instructions to install Azure Bicep.

Azure Bicep Files and Syntax

Azure Bicep code is written in a simpler syntax that is easier to read and write than the JSON syntax used with ARM Templates. The Azure Bicep code’s main element is the resource block that declares an infrastructure resource to provision.

Resource Declaration Syntax

  1. resource keyword.
  2. symbolic name – This is an identifier for referencing the resource throughout the bicep file. It is not what the name of the resource will be when it’s deployed.
  3. type – This is composed of the resource provider, resource type, and apiVersion.
  4. properties – These are the specific properties to specify for the given resource type. These are the same properties available in an ARM Template.

Declaring Azure resources with Azure Bicep code is done using the following format:

resource <symbolic-name> '<type>@<api-version>' = {
    name: <resource-name>
    // properties for this resource type go here
}

Here’s a simple example of an Azure Bicep code that deploys an Azure Storage Account:

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: 'uniquestorage001' // must be globally unique
    location: 'eastus'
    kind: 'Storage'
    sku: {
        name: 'Standard_LRS'
    }
}

Parameter Syntax

In Azure Bicep, we can also declare parameters in the .bicep file. These parameters can be required to be passed in when the template is deployed, or they can be given a default value to use if they aren’t passed in.
Here’s an example of Azure Bicep code that declares parameters with default values and an Azure Storage Account resource whose name and location properties are assigned from those parameters:

param location string = 'eastus'
param name string = 'uniquestorage001' // must be globally unique

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: name
    location: location
    kind: 'Storage'
    sku: {
        name: 'Standard_LRS'
    }
}

Variable Syntax

Azure Bicep variables are defined by using the var keyword followed by the variable name, then followed by the equal sign (=) and the value to assign to the variable. This is done using the following format:

var <variable-name> = <value>

Here’s an example of declaring a variable named storageSku with a string value and an Azure Storage Account resource whose SKU name property is assigned the value of that variable:

var storageSku = 'Standard_LRS' // declare variable and assign value

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: name
    location: location
    kind: 'Storage'
    sku: {
        name: storageSku // reference variable
    }
}

Output Syntax

In Azure Bicep, output variables are declared using a syntax similar to the variables used within the template, but with the output keyword.
Here’s an example of declaring an output variable assigned to the value of the Azure Resource Id for the resource with the symbolic name of stg within the .bicep file:
// Output variable set to Azure Resource Id of Storage Account

output storageId string = stg.id // output resourceId of storage account

Azure Bicep Expressions

Expressions in Azure Bicep support the same built-in functions and expressions as ARM Templates.

Functions

Any valid ARM Template function is also a valid Bicep function. Just like in ARM Templates, we can use the uniqueString() function to generate a unique string value to use within the template.
Here is an example: instead of forcing users to guess a unique storage account name in our main.bicep file, let’s get rid of the name parameter and use the uniqueString() and resourceGroup() functions to calculate a unique name.

param location string = resourceGroup().location

var storageSku = 'Standard_LRS' // declare variable and assign value

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: uniqueString(resourceGroup().id) // generates unique name based on resource group ID
    location: location
    kind: 'Storage'
    sku: {
        name: storageSku // assign variable
    }
}

output storageId string = stg.id

String Interpolation

Azure Bicep supports a simpler syntax for performing this kind of string concatenation with a programming feature called string interpolation. String interpolation lets us embed the name of a variable or parameter within a string value; the reference is replaced at deploy time to perform the necessary string concatenation.
Here is an example where we can combine a namePrefix parameter with a hardcoded suffix:

param namePrefix string = 'unique'

var storageAccountName = '${namePrefix}storage001'

Conditional Assignment

In Azure Bicep, we can conditionally provide a value for a variable, resource, or output using the ternary operator, which is the equivalent of the if() function in ARM Templates. The use of conditional and property assignments allows for more flexible deployment customization with Azure Bicep based on template parameters.
Here’s an example where we choose a redundancy setting for our storage account by adding a new parameter, globalRedundancy, and combining it with the ternary operator.

param location string = resourceGroup().location
param namePrefix string = 'stg'

param globalRedundancy bool = true // defaults to true, but can be overridden

var storageAccountName = '${namePrefix}${uniqueString(resourceGroup().id)}'

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: storageAccountName
    location: location
    kind: 'Storage'
    sku: {
        name: globalRedundancy ? 'Standard_GRS' : 'Standard_LRS' // if true --> GRS, else --> LRS
    }
}

output storageId string = stg.id

Compiling and Deploying Azure Bicep Code

Azure Bicep code is written in files with the .bicep extension. These Bicep files contain the code that is then compiled into Azure Resource Manager (ARM) JSON. Once compiled, the resulting ARM JSON is deployed to Microsoft Azure using the Azure portal, Azure PowerShell, or the Azure CLI.

Compiling Azure Bicep

Each .bicep file is compiled into a single ARM Template JSON file. If we have an Azure Bicep file named main.bicep, the Azure Bicep CLI command to compile it into ARM JSON is:

bicep build main.bicep

This command will compile the main.bicep file into ARM JSON and save the resulting ARM Template into a file with the same name using a .json file extension. So, compiling main.bicep results in an ARM Template in a new file named main.json in the same directory as the original main.bicep file. The .bicep file will still exist, and we can continue to modify it as we build out the Infrastructure as Code.
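
For reference, compiling the storage account example from the Functions section above produces ARM JSON roughly like the following (a trimmed sketch; the exact output varies by Bicep version):

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "location": {
            "type": "string",
            "defaultValue": "[resourceGroup().location]"
        }
    },
    "variables": {
        "storageSku": "Standard_LRS"
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2019-06-01",
            "name": "[uniqueString(resourceGroup().id)]",
            "location": "[parameters('location')]",
            "kind": "Storage",
            "sku": {
                "name": "[variables('storageSku')]"
            }
        }
    ],
    "outputs": {
        "storageId": {
            "type": "string",
            "value": "[resourceId('Microsoft.Storage/storageAccounts', uniqueString(resourceGroup().id))]"
        }
    }
}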

Deploying Azure Bicep

The ARM Template (i.e., main.json) that results from compiling Azure Bicep is deployed the same way as any other ARM Template. This can be done using either the Azure CLI or Azure PowerShell command-line tools.

Below are the commands that can be used to deploy the resources using Azure PowerShell and Azure CLI.

Azure PowerShell:

bicep build ./main.bicep # generates main.json
New-AzResourceGroup -Name my-rg -Location eastus # optional - create resource group 'my-rg'
New-AzResourceGroupDeployment -TemplateFile ./main.json -ResourceGroupName my-rg

Azure CLI:

bicep build ./main.bicep # generates main.json
az group create -n my-rg -l eastus # optional - create resource group 'my-rg'
az deployment group create -f ./main.json -g my-rg

Take Away

Azure Bicep is more of a revision to the existing ARM template language than an entirely new language. While there have been syntax changes, the core functionality of ARM templates and the runtime remains the same. Bicep is not yet production-ready, but it will make life easier for cloud engineers working in Azure when it matures.
Bicep is moving towards a more stable 0.3 version with features to make authoring ARM templates more straightforward and approachable.

Databricks provides a robust notebook environment that is excellent for ad-hoc and interactive access to data. However, it lacks robust software development tooling. Databricks Connect and Visual Studio (VS) Code can help bridge the gap. Once configured, you can use VS Code tooling like source control, linting, and your other favorite extensions while, at the same time, harnessing the power of your Databricks Spark clusters.

Configure Databricks Cluster

Your Databricks cluster must be configured to allow connections.

  1. In the Databricks UI, edit your cluster and add these lines to the Spark config:
    spark.databricks.service.server.enabled true
    spark.databricks.service.port 8787
  2. Restart Cluster

Configure Local Development Environment

The following instructions are for Windows, but the tooling is cross-platform and will work wherever Java, Python, and VSCode will run.

  • Install Java JDK 8 (enable option to set JAVA_HOME) – https://adoptopenjdk.net/
  • Install Miniconda (3.7, default options) – https://docs.conda.io/en/latest/miniconda.html
  • From the Miniconda prompt, run the following and follow the prompts. Note: the Python and databricks-connect library versions must match the cluster version. Replace {version-number} with the appropriate version, i.e. Python 3.7, Databricks Runtime 7.3:
    ``` cmd
    conda create --name dbconnect python={version-number}
    conda activate dbconnect
    pip install -U databricks-connect=={version-number}
    databricks-connect configure
    ```
  • Download http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe to C:\Hadoop
    From a command prompt, run:

    ``` cmd
    setx HADOOP_HOME "C:\Hadoop\" /M
    ```
  • Test Databricks Connect. In the Miniconda prompt, run:
    ``` cmd
    databricks-connect test
    ```

You should see “* All tests passed.” if everything is configured correctly.

  • Install VSCode and Python Extension
    •  https://code.visualstudio.com/docs/python/python-tutorial
    • Open a Python file and select the “dbconnect” interpreter in the lower toolbar of VSCode
  • Activate the Conda environment in the VSCode cmd terminal
    From the VSCode Command Prompt:
    This only needs to be run once (replace username with your username):
    ``` cmd
    C:\Users\{username}\Miniconda3\Scripts\conda init cmd.exe
    ```
    Open a new Cmd terminal
    ``` cmd
    conda activate dbconnect
    ```
    Optional: You can run the command `databricks-connect test` from the earlier step to ensure the Databricks Connect library is configured and working within VSCode.

Run Spark commands on Databricks cluster

You now have VS Code configured with Databricks Connect running in a Python conda environment. You can use the code below to get a Spark session, and any dependent code will be submitted to the Spark cluster for execution.

``` python
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
Once a context is established, you can interactively send commands to the cluster by selecting them and right-click “Run Select/Line in Python Interactive Window” or by pressing Shift+Enter.
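
For example, once the Spark session above exists, a quick sanity check (a minimal sketch; the DataFrame and values are arbitrary) could look like this:

``` python
# runs on the Databricks cluster; only the result is returned to VS Code
df = spark.range(1000)
print(df.count())  # prints 1000 if the connection is working
```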

Context established to send commands

The results of the command executed on the cluster will display in the Visual Studio Code Terminal. Commands can also be executed from the command line window.

Executed Command Cluster

Summary

To recap, we set up a Python virtual environment with Miniconda and installed the dependencies required to run Databricks Connect. We configured Databricks Connect to talk to our hosted Azure Databricks cluster and set up Visual Studio Code to use the conda command prompt to execute code remotely. Now that you can develop locally in VS Code, all of its robust developer tooling can be utilized to build a more robust and developer-centric solution.

You are about to start a new C# project and have opened Visual Studio. Let’s discuss some best practices you should employ or consider before writing that first line of code. These practices are just a start; I have scoped this blog post to cover only those best practices that can be enabled before you write a single line of code and that utilize the built-in features of Visual Studio and VS Code. Of course, there are other best practices you should follow, such as unit testing, code coverage, and continuous integration, and other tools you may consider acquiring, but those are out of scope for this blog post.

These best practices are gleaned from years of experience developing C# solutions. First, let me present my comprehensive list of practices, then we can dive into each one individually. I will be assuming you are using .NET Core 3 and Visual Studio 2019 or later for each of the below practices. Most of the below practices are still recommended for earlier versions of .NET and Visual Studio but may be configured differently.

  1. Turn on Warnings as Errors
  2. Turn on Code Analysis
  3. Establish Code Conventions
  4. Consider Enabling Nullable Reference Checking

1. Turn on Warnings as Errors

Why?: You might think that having warnings is enough. However, even the most diligent programmers on a project might have a bad day and miss a warning message from the compiler or forget to fix it before committing code. Instead of relying on human discipline, why not automate that discipline by asking for the compiler’s assistance? By enabling warnings as errors, every compiler warning will now manifest as an error, preventing compilation until addressed. As a developer, you can still disable specific warnings at the project level, or you can use #pragma directives to disable warnings for given cases in the codebase. In either case, you now have an affirmative decision in the codebase that can be traced back through version history. You should consider asking developers to include comments explaining why a warning was disabled. This is the single biggest thing I do when starting any C# project. It keeps unfixed warnings from ever entering my codebase.

How?: You can right-click on a Project in Visual Studio and select “Properties…”. On the properties screen, navigate to the Build tab. On the Build tab, be sure to use the Configuration drop-down to select “All Configurations.” Then, under Treat warnings as errors, select the “All” radio button and save the project. You can reference the screenshot below.

Set configurations for new project

For those of you on Linux/MacOS X or using VSCode, you can instead open the .csproj file directly in a text editor. You can then add the following code inside the top-level <PropertyGroup> element:

<TreatWarningsAsErrors>true</TreatWarningsAsErrors>

See the screenshot below:

Warning as Errors Screenshot
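
In context, the property sits inside a <PropertyGroup> alongside your other build properties. A minimal sketch (the output type and target framework shown here are just examples, not from the screenshot) might look like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>
</Project>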

2. Turn on Code Analysis

Why?: Let’s talk about compiler warnings and errors first. Compiler errors typically occur because you made a syntax error and the compiler can’t generate any code. Compiler warnings are typically provided because the compiler can make a decision but isn’t sure of your intention, hence a warning that the compiler team hopes you will read and then fix. Most compiler teams are wary of adding new errors because it will discourage upgrades to a new compiler version. So, most of the time, compiler teams will only add new warnings to a compiler, and by default, those warnings can be disregarded. But in general, I like to consider new compiler warnings as errors the compiler team wished they could have written. Hence the rationale in the previous section for always enabling warnings as errors.

Now, let’s continue with the viewpoint that compiler warnings are errors, just left as warnings because the compiler team was worried about upgrades to new compiler versions and backward compatibility. With that viewpoint, warnings still essentially only point out significant faults in your codebase; they aren’t pointing out suggestions or even best practices. So, how can we get recommendations or best practices? The answer is to enable Code Analysis and then configure your desired ruleset.

How?: You can right-click on a Project in Visual Studio and select “Properties…”. On the properties screen, navigate to the Code Analysis tab. On the Code Analysis tab, click the Install button. The Install button will download a NuGet package that contains the code analysis rules. In general, leave both checkboxes enabled. This ensures the rules run during a full build of the project as well as live in the editor as you are typing. You can reference the screenshot below.

Download nuget package that contains code analysis

For those of you on Linux/MacOS X or using VSCode, you can instead open the .csproj file directly in a text editor. You can then add the following code inside a new <ItemGroup> element:

<PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers" Version="2.9.6" />

See the screenshot below.

Turned on Code Analysis

If you have enabled Warnings as Errors as recommended in the previous section, the Code Analysis rules will be considered compiler errors.

What to do once you’ve turned on Code Analysis

Let’s imagine you have created a new console application using the provided templates. Now imagine you followed my recommendations and enabled warnings as errors and turned on code analysis. Now, if you compile that application for the first time, you immediately see 2 Code Analysis errors, and your project will no longer compile. See the screenshot below.

Compile Application and see 2 Errors

Keep in mind all you’ve done is used a provided template project and turned on some Code Analysis rules. You haven’t added any code to the project. Why then, do we see two errors in the code? Let’s talk about each error individually. As you see, the codes for the rules are CA1801 and CA1303.

Fixing Code Analysis Error #1 – CA1801 Unused Argument

How can we fix this code analysis suggestion/error, and what does it mean? The first suggestion is CA1801, which tells us that our Main method accepts a string array of arguments but that we aren’t using them in the method body. This is precisely the kind of assistance we are looking for. In this case, we could remove the arguments to the Main method. This makes our intention clearer; our Console application doesn’t have any arguments.

However, if we were implementing an interface, we might be required to have that argument even though our implementation isn’t using it. In this case, we can disable the rule on a case-by-case basis using a #pragma directive. We do this by adding the following code around the statements where we want the rule disabled.

#pragma warning disable CA1801 // Remove unused parameter
//other code goes here
#pragma warning restore CA1801 // Remove unused parameter

You can also reference the below screenshot.

Add code to disable rule

Any code between the #pragma disable and the restore will not be checked for that given code analysis rule. The developer could also provide comments as to why this choice was made. But it’s now clear and has an intention behind it. This same approach can be used for general compiler warnings that were turned on as errors in the previous section.

Fixing Code Analysis Error #2 – CA1303 Globalization

Now, let’s talk about the second code analysis error/suggestion, which is CA1303. This rule exists because, internally, Microsoft is required to run it. Microsoft is a multinational company, and it delivers software to many countries in many languages. The CA1303 code analysis warning indicates that we have a hard-coded string, and we should move it to a resource (.resx) file where we could map different versions of the string depending upon the requested language and culture. For our given project, it’s good that Code Analysis pointed this out so early: because we encountered this error at the very start of the project, we can ask our requirements team whether we need to support globalization.

However, let’s assume our requirements team decides we don’t need to perform globalization. How can we turn this rule off once and not have to litter our code with hundreds of #pragma’s? The answer is to use .editorconfig files. For those who have worked in .NET for many years or before .NET Core, the use of .editorconfig is relatively new but is far easier than the older alternatives. The easiest way to get started is to let Visual Studio help you out. If you right-click on the Lightbulb icon for the given line, you can click Suppress or Configure Issues. Within that menu, you can configure the CA1303 severity. When you select that option, regardless of the value, Visual Studio will offer to create a new .editorconfig and add it to your project. See the screenshot below for reference.

create a new .editorconfig and add it to your project

For those of you on Linux/MacOS X or using VSCode, create a new file named .editorconfig with the following contents:
[*.cs]
# CA1303: Do not pass literals as localized parameters
dotnet_diagnostic.CA1303.severity = none

Once you have an .editorconfig, if you are using Visual Studio, you can view all of the analysis rules and configure them individually. In the Solution Explorer, expand your project. Then expand the Dependencies node and under that expand the Analyzers node. You can then find a given rule and right-click to configure Severity. This will then add a new line to your .editorconfig. See the below screenshot.

New line added to .editorconfig

Code Analysis Summary

To summarize, enable Code Analysis. You should then fix any code analysis errors or decide you have a good reason for not fixing the suggestion. If you have a good cause, you can use #pragma directives to disable rules on a case-by-case basis, or you can use .editorconfig to disable entire rules.
The .editorconfig works on a directory basis. The compiler will walk up directories from the source file until it finds a .editorconfig file. So, you can stack rules for your entire solution while having different rules for unit test projects, for example.
You can also write custom code analysis rules and find other open-source packages with custom code analysis rules. You should research to see if any of the third-party dependencies also provide code analysis packages.

3. Establish Code Conventions

Why?: As you start work on a project, you are implicitly making some code style conventions. You are deciding how indentation is formatted, whether the project will use the “var” keyword, the preferred syntax for initializing collections, etc. It would be nice to automate enforcement of some of these decisions and have the compiler or a formatting tool help enforce them. In previous years, your only option was to find a third-party tool, either free or commercial. With more recent versions of Visual Studio and VS Code, there are built-in tools that provide some of this capability for free, so let’s take advantage of those provided features. The benefit of code conventions is consistency across projects, which makes it easier to start working on an existing project. To that end, I would recommend you begin with well-established code conventions like those provided by Microsoft and only diverge where necessary for your project or organization.

How?: If you agree with the Why section above, let’s talk about how. All you need to do is use Visual Studio’s built-in support for .editorconfig. If you’ve been following my guidance from earlier sections, you should already have an .editorconfig file, and you are done. As your team makes decisions, apply the settings to your .editorconfig file. Visual Studio then provides several tools to help you update code to match your agreed-upon conventions.
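
As a small illustration (a sketch only — choose the values your team agrees on), a few common C# convention settings in an .editorconfig look like this:

[*.cs]
indent_style = space
indent_size = 4
csharp_style_var_for_built_in_types = true:suggestion
csharp_style_var_when_type_is_apparent = true:suggestion
dotnet_style_collection_initializer = true:suggestion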

References:
https://docs.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options?view=vs-2019
https://docs.microsoft.com/en-us/visualstudio/ide/editorconfig-language-conventions?view=vs-2019

4. Consider Enabling Nullable Reference Checking

Why?: One of the largest sources of errors in running codebases is unchecked null values, which throw a NullReferenceException. I’m not suggesting you add if (x != null) checks all over your codebase; that can have unintended consequences and can make your code harder to read. Instead, I suggest you utilize a new C# 8 feature: nullable reference types, which are checked by the compiler using advanced flow analysis at compilation time. This feature can be retrofitted onto an existing project, but the work is considerable and can be very confusing. Since this best practice guidance is for new projects, I suggest you strongly consider enabling this on new projects from the beginning.

How?: To enable nullable reference checking, simply right-click on the project in Visual Studio. Then navigate to the Build tab and select “Enable” from the drop-down for the Nullable field. See the screenshot below for reference.

Enable nullable reference checking

For those of you on Linux/MacOS X or using VSCode, you can open your .csproj in a text editor and add the following to the existing <PropertyGroup> tag:

<Nullable>enable</Nullable>

See the below screenshot.

Insert the Nullable group tag

Once this setting is enabled, any reference type (e.g. string name, Dictionary<string, string> properties) is considered non-null by the compiler, and the compiler will enforce this. If you wish to permit a null value, you must annotate the type, just like the long-standing support for nullable value types: you would need to use string? name or Dictionary<string, string>? properties. At that point, the compiler will warn you if you attempt to use any part of the object without first checking for null. It provides advanced flow analysis to determine this and supports many of the well-known patterns you have already been using for years to check for null.
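
As a small illustration (a sketch with hypothetical names, not code from this post), the compiler’s flow analysis behaves like this once Nullable is enabled:

public static string Greet(string name, string? nickname)
{
    // nickname is declared nullable, so it must be checked before use
    if (nickname != null)
    {
        // within this branch the compiler knows nickname is not null
        return $"Hello, {nickname}!";
    }

    // name is non-nullable, so the compiler lets us use it directly
    return $"Hello, {name}!";
}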

Summary

I hope you found the above guidance around best practices useful. Remember, these best practices are simply a starting point, intended as steps you take before writing a single line of code, and they are enabled using only built-in features of Visual Studio or VS Code. You should also consider techniques such as unit testing, code coverage, and continuous integration, and potentially using other free or commercial tools such as ReSharper or CodeIt.Right.

The goal of all the above practices is to ensure you start a new project on the best foot and ensure high code quality from the beginning of the project. By starting early, it should take little effort to maintain high code quality. It should also result in code with fewer bugs, which always makes users happy, and in code that is easier to maintain because the intentions in the code are clear.

Introduction

PowerShell DSC is possibly one of the most potent configuration managers for Windows operating systems. Even with the increase in popularity of Ansible and SaltStack, PowerShell DSC remains supreme when enforcing the desired state on a Windows VM. Ansible itself has included the win_dsc module, which allows Ansible to run PowerShell DSC. In this blog post, we will dive deeper into one of PowerShell DSC’s most powerful aspects, 3rd Party Resources, and how they interact with Azure Automation.

3rd Party Resources are PowerShell DSC modules created by the community. Any PowerShell community member can create modules, and there are tons of modules out there to choose from. Modules are kept in repositories; the most well-known and default PowerShell repository is the PowerShell Gallery, run by Microsoft, a common repository for PowerShell modules deployed by the community. PowerShell modules in the PSGallery can be downloaded and installed with the PowerShellGet module.

As developers and infrastructure engineers, there are many different reasons to script various services you are creating. Often, instead of developing behavior or scripts from scratch, it is much easier to leverage the work that others have done to expedite a task’s completion. 3rd Party Modules allow for easily repeatable code that can become production-ready through collaboration.

Often, DSC configuration can become complicated. Engineers can be asked to do many things, from creating an Azure AD domain, to configuring OMS solutions associated with a VM, to even interacting with non-native Azure products, such as Splunk.

These may all seem very daunting, but don’t fret! Members of the PowerShell community have dealt with these problems and many others, and often you will find third party modules to help do the work for you.

Here is an example of a Third Party Resource, named ActiveDirectoryDsc, which helps with the promotion, configuration, and management of Active Directory.

Azure Automation is a robust PaaS offering from Azure that allows for a cloud-based DSC pull server. Within Azure Automation, it is possible to add both custom modules that the user develops and third-party modules available in any hosted source.
⚠ It should be noted that organizations in locked-down environments can manage their own repository of PowerShell modules that have been vetted by the respective InfoSec team. It is possible to deploy your own artifact repo using the Azure DevOps product shown here. It allows an internal team to deploy its own versions of packages, and you can use that as your URI reference.
⚠ There are a few ways to upload modules to Azure Automation natively. You can upload manually through the Azure Portal, as shown here in this picture:

Uploading modules to the Azure Portal

However, being DevOps Engineers, we want to automate this process as much as possible. One way to do this is via ARM Templates, like the ones we used in the previous module.
Below is an example of how to add a 3rd party module to your Azure Automation Account via ARM Templates:

{
    "name": "[concat(parameters('automationAccountName'), '/', parameters('name'))]",
    "type": "Microsoft.Automation/automationAccounts/modules",
    "apiVersion": "2015-10-31",
    "location": "[resourceGroup().location]",
    "properties": {
        "isGlobal": false,
        "sizeInBytes": 0,
        "contentLink": {
            "uri": "uri to package"
        }
    }
}

If you are deploying from the PowerShellGallery, your Uri would look something like this:

"uri": "[concat('https://www.powershellgallery.com/api/v2/package/', parameters('name'), '/', parameters('version'))]"

Alternatively, you can script the import of modules using the New-AzAutomationModule cmdlet in a PowerShell script.
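
For example, a sketch of importing the ActiveDirectoryDsc module from the PowerShell Gallery might look like the following (the resource group, account name, and version are placeholders):

New-AzAutomationModule -ResourceGroupName 'my-rg' `
    -AutomationAccountName 'my-automation-account' `
    -Name 'ActiveDirectoryDsc' `
    -ContentLinkUri 'https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/{version}'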

Oddly enough, there is sometimes some difficulty understanding the correct ContentUri to use in both the ARM and PowerShell cases. You can find the correct one by navigating to the right module in the PowerShell Gallery, adding /api/v2 to the URL, and replacing packages (plural) with package (singular).

Add the /api/v2 to a URL

Conclusion

3rd Party Modules are a great way for developers to speed up development and productivity. If you are inclined to help in the development of these modules, head over to GitHub and contribute!

DevOps implements a Continuous Integration/Continuous Delivery (CI/CD) process. When multiple team members work in the same codebase, anyone’s update could break the integrated code, so Continuous Integration triggers a build pipeline whenever a code update is pushed. The build pipeline fails if the newly updated code is incompatible with the existing codebase or if there are any conflicts. A codebase might work well in a single developer’s environment yet fail in a build pipeline, where all configurations and dependencies are expected to be in place. Continuous Delivery speeds up the deployment process: the release pipeline deploys the same codebase to multiple environments based on configuration. This helps the code be deployed to all environments without many manual changes.

Having an approval process helps with peer code reviews and identifies potential issues and security flaws ahead of time. Current production applications are very distributed and complex. Whether it is an on-premises or cloud-based solution, missing a dependency or proper configuration could introduce significant risk into deployments. DevOps helps maintain the same codebase for repeatable deployment into many environments with just configuration changes. DevOps avoids manually building deployment packages and handing them over to an operations team that has no insight into what is being deployed. If an error occurs during or after deployment, the development team has to jump in at that point, which is time-consuming. This costs production time and ends up with some unhappy customers as well!
Picture credit: DoD DevOps

Popular DevOps Tools

Follow here to learn more about DevOps practices from other AIS bloggers!

Why not just “DevOps”?

DevOps is fundamental to any organization’s build and deployment process, with seamless CI/CD integration. So what is ‘DevSecOps,’ and why is ‘Sec’ added between Dev and Ops? The ‘Sec’ in DevSecOps is ‘Security.’ Though it’s added in the middle, security implementation should start in development and continue through operations. Development and deployment packages pull in many dependencies, both internal and external, and these could introduce vulnerabilities. That could cause severe issues in production if not identified earlier in the build pipeline. Code scans help identify possible weaknesses in code implementations, but for cybersecurity-related vulnerabilities, specific tools must be used at different stages of the pipeline to identify them as early as possible. Adding security scanning earlier in the pipeline and automating it are essential for DevSecOps.

DevSecOps Software Lifecycle

Picture Credit: DoD DevSecOps

DevSecOps is not a tool or a pattern but a practice, and it can be enhanced by adding appropriate tools. It is a process of securing the build and deployment by using several security tools and shifting security to the left. These security tools help identify vulnerabilities that the code could have introduced, recommend possible solutions to fix those issues, and, in some instances, mitigate some of those issues as well. This is the ‘fail fast’ method of identifying vulnerabilities earlier in the build pipeline. As more applications move into the cloud, it is highly imperative to follow Cloud Native Computing Foundation (CNCF) certified tools and implement security benchmarks such as the CIS benchmarks. DevSecOps avoids manual changes once the code is in the pipeline, built, and deployed. The codebase is a single source of truth and should not be manipulated at any point.

Adding scanning tools for security and vulnerabilities helps mitigate any flaws introduced in code and operations. Many open-source tools provide these functionalities. Enabling logging, continuous monitoring, alerting processes, and any self-fix capability for faster remediation are key for ongoing business operations. Containerizing with hardened container images from DoD Iron Bank helps protect application container images, and hardened images can be kept up to date from reliable providers. Containers provide cloud-agnostic solutions with no vendor lock-in.

All the security tools in the DevSecOps pipeline must be deployed and running so the pipeline can perform scanning in the customer environment. The pipeline code sends a request to those security tools via an API request or by triggering command-line interface (CLI) commands. The tools then respond with their findings and statistics, and provide pass/fail criteria. If a tool identifies any vulnerability findings in the scan, the pipeline fails.
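
As a simple illustration of this pass/fail gating (a hedged sketch only — the report file name and JSON shape are hypothetical, not a specific tool’s format), a pipeline step could parse a scan report and fail the build on any high-severity findings:

import json
import sys

REPORT_PATH = "scan-report.json"  # hypothetical output location of the scanning tool

with open(REPORT_PATH) as report_file:
    report = json.load(report_file)

# collect findings the scanner flagged as high severity
high_findings = [f for f in report.get("findings", []) if f.get("severity") == "HIGH"]

if high_findings:
    for finding in high_findings:
        print(f"HIGH severity: {finding.get('id')} - {finding.get('title')}")
    sys.exit(1)  # a non-zero exit code fails the pipeline step

print("No high-severity findings; continuing the pipeline.")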

Deploying the security tools as SaaS services will require permission from the security team, and not all are approved to run in highly secured cloud environments. Those tools all need an Authority to Operate (ATO) before they can be deployed and configured. Getting hardened container images for those tools is a safer, more secure approach to deploying them in the cloud. As the containers are already hardened, meaning scanned, secured, and ready to go with all dependencies, they provide continuous ATO. The hardened container images can be downloaded from DoD Iron Bank, and almost all tool providers provide container images. Many of these providers offer different downloads, whether as a software download or a container image. When downloading as a software image, additional tasks must be done to ensure all the dependencies are appropriately configured or already exist. In contrast, hardened container images come with their dependencies and are pre-scanned. The tools can be deployed into Kubernetes in your cloud environment to provide scalable functionality.

Below is a sample DevSecOps pipeline implementation with recommended security tools, as depicted in the diagram that follows:

  • Source code pull request is approved by reviewers
  • The build pipeline kicks off and code scan is run after a successful initial build
    • If any code vulnerabilities are identified, then the pipeline fails
  • Build pipeline continues with DAST and PEN testing
    • If any vulnerabilities are identified, then the pipeline fails
  • Build artifacts are added to a private repository either as packages or container images
    • Repository scan is performed using repository scanning tools and vulnerabilities are reported
  • Release pipeline picks up artifacts from private repositories and deploys to Azure (or cloud of your choice)
    • Kubernetes is a highly recommended deployment for orchestration, but deployment can be an application of your choice such as Function App, App Service, Azure Container Instances, etc.
  • Security has been applied throughout the pipeline process and will continue once the application is deployed. Both native security tools such as Azure Monitor, Azure Security Center, and Azure Policies, and third-party tools such as Twistlock and Qualys, can be used to monitor the health of your production environment.

DevSecOps Diagram

Let’s look at a few of the recommended tools to support the security validations in the DevSecOps process.

Build tools/CLI

A developer can write their code in their favorite editor, such as Visual Studio or VS Code, and run/execute it to test their applications. The code editor also generates debug/release packages, producing binaries using the build tool that comes with the editor. The application works seamlessly in the developer environment because the dependencies and correct configurations exist there. For the build to work in the pipeline, the build tool must be available to build the code. The build tool varies based on the code language, and it must be available in the pipeline.

Some of the build tools are:

  • DotNet Build
  • MSBuild
  • Maven
  • Gradle

Static Application Security Testing (SAST)

A code scan is one of the essential steps in securing the codebase. Automated testing helps identify failures, but these specific code scan tools help identify security flaws and vulnerabilities. The application does not need to be running for code scan tools, as they scan only the codebase and not any dependencies.

Some of the Code scanning tools are:

  • SonarQube
  • Fortify
  • Anchore
  • JFrog Xray
  • OpenSCAP
  • HBSS
  • OWASP dependency check

Dynamic Application Security Testing (DAST)

DAST scans the application while it’s running, or scans a container image that is hosted in a private repository. Container scanning before deployment helps resolve many security vulnerabilities.

Some of the DAST scanning tools are:

Penetration (Pen) Testing

Penetration testing provides a web application scanner to help find security vulnerabilities. Read here to learn about the “Top 10 Web Application Security Risks.”

PEN testing tools:

  • OWASP ZAP

Deploy Code & IaC (Infrastructure as Code)

IaC is paramount in DevOps to avoid manual work in customer environments and helps with immutable infrastructure.

Popular IaC tools are:

  • Azure ARM Templates
  • Terraform
  • HELM

Private Repositories

In DevSecOps, a private repository is recommended to host the build dependencies, reference container images, container images for tools, and the built packages or application container images. This is to keep all the artifacts together in one centralized location, and the release pipeline can continue with deployments from there.
Some of the private repositories are:

  • JFrog
  • Docker Hub
  • Azure Container Registry (ACR)

Private Repository Scanning

As the pipeline requires security scanning, the repositories require scanning as well. These tools scan for vulnerabilities in all packages and container artifacts stored in the repository, and a scan report is sent or a notification raised for any issues.

Some artifact scanning tools are:

  • XRay
  • SonaType
  • Azure Monitor
  • Azure Security Center

Deploy

Just as the security tools are recommended to be deployed with container orchestration, the same recommendation goes for the deployed applications. Containers provide high security with limited ways to be affected by attackers. Sidecar containers protect applications by continually monitoring them with a built-in container security stack. Applications are scalable on demand using Kubernetes, and tools such as kubectl and HELM packages are used to deploy and manage K8s clusters. ArgoCD is a declarative tool specifically for Kubernetes deployment in a CI/CD pipeline.
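
As a small, hedged example (the release name, chart path, and namespace are placeholders), rolling out and checking a containerized application with HELM and kubectl could look like this:

helm upgrade --install myapp ./charts/myapp --namespace myapp --create-namespace
kubectl get pods --namespace myapp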

Deployments to Azure could be:

  • Azure function app
  • Azure App Service
  • Azure Container Instance
  • Azure Kubernetes Service (AKS)
  • OpenShift in Azure

Monitoring/Alerting

As applications are deployed and running in a cloud environment, they must be continuously monitored for attacks and for any security vulnerabilities. For containers, these tools act as sidecar containers that continually protect the main containers from attacks, and some mitigate the issues they find. All of these tools have built-in alerts to notify the operations team for immediate action.

Monitoring/alerting tools:

  • Azure Monitor
  • Azure Security Center
  • Twistlock
  • Qualys
  • Aqua Security

So, you’re all powered up with DevSecOps! Follow up back here for the next blog post on container-based deployments and container scanning in the DevSecOps pipeline!

References for continuing your DevSecOps Journey

Azure Kubernetes Service (AKS) is a Microsoft Azure-hosted offering that allows for the ease of deploying and managing your Kubernetes clusters. There is much to be said about AKS and its abilities, but I will discuss another crucial aspect of AKS and containers: security. Having a secure Kubernetes infrastructure is a must, and it can be challenging to figure out where to start. I’ll break down best practices, including baseline security for clusters and pods, and network hardening practices that you can apply to your own AKS environment to lay the foundation for a more secure container environment, including how to stay current with updates.

Cluster and Pod Security

Let’s first look at some best practices for securing your cluster and pods using policies and initiatives. To get started, Azure has pre-defined policies that are AKS specific. These policies help to improve the posture of your cluster and pods and also allow for additional control over things such as root privileges. A best practice Microsoft recommends is limiting access to the actions that containers can perform and avoiding root/privileged escalation. When the Azure Policy Add-on for AKS is enabled, it installs a managed instance of Gatekeeper. This instance handles enforcement and validation through a controller that inspects each request when a resource is created or updated and validates it against your policies. Features such as these are ever-growing and can make creating a baseline easier. Azure Policy also includes a feature called initiatives. Initiatives are collections of policies that align with organizational compliance goals. Currently, there are two built-in AKS initiatives: baseline and restricted. Both come with many policies that lock down items such as the host filesystem, networking, and ports. By combining both initiatives and policies, you can tighten security and meet compliance goals in a more managed fashion.
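
For instance, the Azure Policy Add-on can be enabled from the Azure CLI roughly as follows (the cluster and resource group names are placeholders):

az aks enable-addons --addons azure-policy --name myAKSCluster --resource-group myResourceGroup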

Another way to secure your cluster is to protect access to the Kubernetes API server. This is accomplished by integrating RBAC with Azure AD or other identity providers. This feature allows for granular access, similar to how you control access to your Azure resources. The Kubernetes API is the single connection point for performing actions on a cluster. For this reason, it’s imperative to deploy logging/auditing and to enforce least-privileged access. The below diagram depicts this process:

Cluster and Pod Security

Reference: https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-cluster-security#secure-access-to-the-api-server-and-cluster-nodes

Network Security

Next, let’s look at network security and how it pertains to securing your environment. A first step would be to apply network policies. Much like above, Azure has many built-in policies that assist with network hardening, such as a policy that only allows specific network traffic from authorized networks based on IP addresses or namespaces. It’s important to note this can only be enabled when the cluster is first created. You also have the option of ingress controllers that listen on internal IP addresses, which ensures they can only be accessed from that internal network. These small steps can narrow the attack surface of your cluster and tighten traffic flows. The below diagram demonstrates using a Web Application Firewall (WAF) and an egress firewall to manage defined routing in and out of your AKS environment. Even more granular control is possible using network security groups (NSGs), which allow only specific ports and protocols based on source/destination. By default, AKS creates subnet-level NSGs for your cluster, and as you add services such as load balancers, port mappings, and ingress routes, it automatically modifies the NSGs. This ensures the correct traffic flow and makes change easier to manage. Overall, these effortless features and policies allow for a secure network posture.

Network Security Graphic

Reference: Microsoft Documentation

The Final Piece

The final piece of securing your AKS environment is staying current on new AKS features and bug fixes, specifically upgrading the Kubernetes version in your cluster. These upgrades can also include security fixes, which are paramount for staying ahead of vulnerabilities that could leave you exposed. I won’t go too deep on best practices for Linux node updates or managing reboots; this link dives deeper into what Kured is and how it can be leveraged to process updates safely. There are many ways to foundationally secure your AKS clusters. I hope this article helps future implementations and the maintainability of your deployment.

We live in fractious times! Let’s escape for a few minutes to consider how Microsoft is moving us toward calm and synchrony. Together with a consistent, organized design aesthetic, the Fluent Design System delivers a User Experience bridge across apps, platforms, and devices.

Fluent UI is a package of tools we can use to build for our customers faster and with a consistent UX.

Across Platforms

5 Platforms - Fluent Design System
Five Platforms of tools offered by the Fluent Design System

The Fluent Design System applies to five platforms: Web, Windows, iOS, Android, and macOS (recently added). Fluent UI is the set of design tools within the overall System. We can use these toolsets to implement coherent, consistent design elements.

Many Names

Note that for Windows development, the design and toolset are being allowed to keep the existing name of WinUI. The Fluent naming changes are a moving target! Fluent Design System is a design language developed in 2017 as the successor to Metro at the introduction of Windows 10. Microsoft gave the name Fluent UI to the merger of two repositories: UI Fabric and another Microsoft UI library named Stardust. Browsing the Fluent Design website, you will encounter places where Fluent UI is still named UI Fabric. We will see these become Fluent UI gradually. Having the names organized helps to deliver the Design System concept.

Native

Achieving a consistent look and UX appears a straightforward goal, but the challenge is more than meets the eye. Working Fluent Design smoothly into the native behaviors, elements, and controls of the many devices requires a balancing effort. The challenge is to respect each platform’s native elements and behavior while adding a uniquely Fluent feeling. Consider that Android devices are built using Google’s Material Design; displaying Office on an Android device requires considering how the Fluent experience will work with the Material Design of the OS. It is worth the effort: familiar user patterns across devices increase productivity for end users. To help build natively, Fluent UI React Native was announced at Build to allow easier cross-platform development.

A Coherent User Experience

Users are evolving and expecting a different experience. Today people use multiple devices and sometimes all at the same time. We want to let a user start work on one device and finish on another. Our cloud-powered, multi-device, cross-platform world has created a need and expectation of a consistent and coherent experience from one to the other.

“Fluent has taken Microsoft back to the basics of design, with a much bigger focus on simplicity. Instead of bold typography and edge-to-edge content, Fluent focuses on subtle elements like light, depth, motion, and material. We’ve seen it appear in Windows with hints of motion and blur effects. It’s also appeared in Office and on the web across services like OneDrive, Office Online, and Outlook. Microsoft is gradually making Fluent the centerpiece for how the company thinks about design.” –The Verge

Fluent UI React

Used by Microsoft 365 apps such as Outlook and SharePoint, Fluent UI React is also incorporated in the generators for SharePoint Framework and Office Add-ins.

 Fluent Design used by many Microsoft products
Microsoft products built using Fluent Design

Developing for Microsoft 365 apps using SharePoint Framework and Add-ins is supported by having Fluent UI already included by the generator, keeping the design of web parts and add-ins consistent with the hosting product.
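
As a small, hedged illustration of what consuming Fluent UI React looks like (TextField and PrimaryButton are real Fluent UI controls; the form itself is a hypothetical example, not from a specific generator template):

import * as React from 'react';
import { PrimaryButton, TextField } from '@fluentui/react';

// a minimal form fragment built from Fluent UI React controls
export const FeedbackForm: React.FC = () => {
  const [comment, setComment] = React.useState('');

  return (
    <div>
      <TextField
        label="Comment"
        value={comment}
        onChange={(_, newValue) => setComment(newValue ?? '')}
      />
      <PrimaryButton text="Submit" onClick={() => console.log(comment)} />
    </div>
  );
};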

Design-to-Code

The Design-to-Code concept separates design from development to ease building on multiple platforms. Using a design tool such as Figma, Sketch, or Adobe XD, the designer defines UI elements once and can export the tokens to many development environments. The tool exports style tokens into machine-readable formats: XAML for Windows, compiled CSS for Web, and Swift for iOS.

“Most designers at Microsoft use Figma” – Medium

Getting Started

The deceptively simple Fluent site opens the way to a sizeable body of documentation and quickstarts. Choose your platform and jump in. From the Build presentation: “Our opportunity in the cloud-powered, multi-device and cross-platform world is for us to design seamless experiences that close the technology gaps and allow people to organize, manage, create and communicate wherever they are in ways they’ve never imagined before.”

Recap

  • Fluent Design System helps build natural and productive experiences on each device
  • Resources are available to build UX on Windows, Web, Mobile, and Cross-Platform
  • Further improvements to the design-to-code workflow are on the way

Links

This blog is a step-by-step walkthrough of building a fully functioning leave request app for any organization that offers vacation, sick leave, military leave, bereavement leave, holidays, jury duty, and so on. The app will include a solution in the Common Data Service, a Model-Driven App, a Canvas App, and Power Automate. The Canvas App is for employees to submit a request. The Model-Driven App will be a back-office system used by admins or supervisors to check everyone’s requests. The Power Automate flow will trigger an approval email to the employee’s supervisor and an automated email to the employee once the supervisor has either approved or rejected the leave request.

The app’s flow will start with the employee using the Canvas App to submit a leave request form. Once that form is submitted, an email is sent to the supervisor, and, simultaneously, a new row is added to the Model-Driven App with the new leave request entry. Once the supervisor has approved or rejected the request, the employee will receive an email with the decision.

Pre-requisites

  1. A Microsoft PowerApps Plan 2 trial; this can be a free trial
  2. Ensure you are using an environment with a database

Common Data Service (CDS)

  1. Navigate to https://make.powerapps.com/
  2. Create a New Solution called “Leave Request Solution”
    1. Create a New Publisher called “Leave Request App”
  3. Select New > Entity, name it “Leave Request Entity”, and enable Attachments
    Enable Attachments CDS
  4. Select Add Fields
    Add Fields CDS
  • Add Field: First name
    • Data Type: Single Line Text
    • Required
  • Add Field: Last name
    • Data Type: Single Line Text
    • Required
  • Add Field: Email
    • Data Type: Email
    • Required
  • Add Field: Supervisor Email
    • Data Type: Email
    • Required
  • Add Field: Start date
    • Data Type: Date Only
    • Required
  • Add Field: End date
    • Data Type: Date Only
    • Required
  • Add Field: Request Type
    • Data Type: Option Set
      • Create New: “Request Type”
        Request New Option
    • New Option: Vacation Leave
    • New Option: Sick Leave
    • New Option: Military Leave
    • New Option: Bereavement Leave
    • New Option: Jury Duty
  • Add Field: Work Items
    • Data Type: Multiline Text
    • Required: Recommended
  1. SAVE ENTITY
  2. Navigate to the Views tab
  3. Select Active Leave Request Entities
    Leave Request Entity
  • Right-click on Created On Column > Remove
  • Add Last Name by clicking the field on the left
  • Add Email by dragging the field to the editor
  • Add Start Date
  • Add End Date
    Add Start and End Date
  1. Once your View looks like the above, click Save, Publish, then Back
  2. Navigate to the Forms tab
    1. Click on the row with the Form type “Main”
      Form type “Main”
    2. Add all custom fields by clicking them directly on the left
    3. Select the section so that it is outlined in purple
    4. Click Formatting on the right, select 2 columns
      Formatting
  3. Once your Form looks like the above, select Save, Publish, then Back

Model-Driven App

  1. Navigate to the “Leave Request Solution”
  2. Select New > App > Model-Driven App
  3. Name it “Leave Request Back Office”
  4. Check “Use the existing solution to create the App”
  5. Select Next
  6. Select Solution: “Leave Request Solution”
  7. Select Site Map to configure it
    Select Site Map to Configure

    1. Select the pencil next to New Subarea
    2. Select the “Leave Request Entity” from the dropdown on the right
      Leave Request Entry from dropdown
  8. Publish, and Save and Close
  9. In the App Designer view, select Publish then Play
  10. Select New
    1. Add test data
    2. Select Save & Close

Canvas App

  1. Navigate back to the “Leave Request Solution”
  2. Select New > App > Canvas App > Tablet form factor
  3. Select the Insert tab > Forms > Edit
  4. A box will appear in the editor, select Connect to data
    1. Select “Leave Request Entities” on the left
    2. Under Data source on the right, select “Leave Request Entities”
    3. Select Edit Fields
    4. Add the remaining fields in the popup modal
    5. Set Default mode to New
    6. Change name to “Leave Form”
      Change Name to Leave Form
  5. Navigate to Insert tab > Button
    1. Select OnSelect from dropdown
    2. Type “SubmitForm(‘Leave Form’);Navigate(Screen2);” in the functions(fx) box
      Functions box

      1. Select Text from dropdown and type “Submit Request”
  6. Navigate to Insert tab > New Screen > Success
  7. Navigate to the File tab > Save > Publish > Play
    Save and Publish, then play
  8. Add test data, select Submit, navigate back to the Model-Driven App to see the new entry

Power Automate

  1. Navigate back to the “Leave Request Solution”
  2. Select New > Flow
  3. In the search box type “When a record is created” and select the Common Data Service option
    Common Data Service Trigger Box
  4. Fill in the trigger box:
  5. Select New Step
  6. Then, in the Choose an action box, type “Start and wait for an approval” in the search box and select it
  7. Fill in the action box using the Dynamic Content box
    1. Click inside the input field and then select the correct dynamic field from the Dynamic Content popup box
      Dynamic Content Box
  8. Select New Step
  9. Select Condition
  10. Fill in the boxes:
    Fill in the Boxes for the Trigger
  11. Select Save > Test > Check “I’ll perform the trigger action” > Select Save & Test
  12. Navigate to your Canvas App, select Play, add test data, and Submit Form
  13. You should see an approval email in the inbox of the supervisor email address you entered; select Approve
    Approve Email
  14. Another email will be sent to the employee’s email with the response
    Approved Request

Final Thoughts

Congratulations! You have completed the Leave Request app using the Common Data Service, a Model-Driven App, a Canvas App, and Power Automate. You have a fully functioning application that can be used right away! The next step would be to implement a status field so that the Approve or Reject decision is updated in the Model-Driven App and supervisors can keep track of any pending requests. The app can be adjusted to your organization’s needs and requirements, and it is a great starting point.

INFOPATH FORM MODERNIZATION WITH POWER PLATFORM
Discover how we helped a Fortune 100 Insurance Company modernize their InfoPath forms with Power Platform.

Recently, we integrated the Azure QnA Maker service with a SharePoint search page for one of our DoD customers, so I thought I would share my experience and lessons learned with a broader audience. Generally, QnA Maker is used when designing a chatbot but, per the customer’s requirements, we integrated it with an existing search page so users do not need to go to multiple places for information: a single place to search all the content, including static questions and answers (FAQs).

Introduction

Azure QnA Maker is one of the Azure Cognitive Services. It is a PaaS offering, so there is no infrastructure to maintain. It is available within Azure Government at DoD Impact Level (IL) 2 and can be configured to store response data at higher DoD impact levels.

QnA Maker consists of two main parts: the QnA Maker portal site, a very user-friendly web interface for importing questions and answers, and the back-end Azure service resources.

Use Case: If your organization has static question-and-answer sets in Word documents, PDFs, on web pages, etc., you can use the Azure QnA Maker portal to quickly import them into QnA Maker to create knowledge bases. It can also automatically extract question-and-answer sets from semi-structured content, manuals, guidelines, etc. Once the questions and answers are in QnA Maker, your application (for example, a custom bot or a custom search page) can call the QnA Maker REST API to answer the user’s question.

Asking questions through custom chatbot
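
As a rough illustration, the sketch below calls the GenerateAnswer REST API from TypeScript (Node.js 18+ for the built-in fetch). The host, knowledge base ID, and endpoint key are placeholders; the real values come from the Publish page of your own QnA Maker portal.

// Minimal sketch: ask QnA Maker a question via the GenerateAnswer REST API.
const host = 'https://your-qnamaker-app.azurewebsites.net/qnamaker'; // placeholder
const kbId = '<knowledge-base-id>';                                  // placeholder
const endpointKey = '<endpoint-key>';                                // placeholder

async function askQnAMaker(question: string): Promise<void> {
  const response = await fetch(`${host}/knowledgebases/${kbId}/generateAnswer`, {
    method: 'POST',
    headers: {
      'Authorization': `EndpointKey ${endpointKey}`,
      'Content-Type': 'application/json',
    },
    // Add "RankerType": "QuestionOnly" here to match on questions only (see Lessons Learned).
    body: JSON.stringify({ question, top: 3 }),
  });

  const result = await response.json();
  // Each returned answer carries a confidence score from 0 to 100.
  for (const answer of result.answers ?? []) {
    console.log(`${answer.score}: ${answer.answer}`);
  }
}

askQnAMaker('How do I reset my password?');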

High-Level Architecture

It is necessary to understand the QnA Maker architecture. If you are working with government clients and plan to upload answers into QnA Maker, consider hardening the back-end resources of Azure QnA Maker: apply network settings on the App Service and the Search service. Check out this reference from Microsoft covering the back-end resources.

High-Level Architecture

Decision Making Between LUIS and QnA Maker Service

It is not hard to decide whether to use LUIS or QnA Maker when designing a chatbot or any custom application.

When to use QnA Maker?
If your organization has lots of static questions and answers (example: Questions/Answers Sample), take advantage of the out-of-the-box features of QnA Maker. Upload your static questions and answers into the QnA Maker portal, and your application can call the QnA Maker API to search the questions asked by the user in the front-end application and return the response.

When to use LUIS?
If you would like to design an application that needs to extract information from users’ questions and further process their intents, then use LUIS.

Example Application: I want to design a bot where users can ask it to create various application development projects. Below are two example requests:

Hi, can you create a .net core application in the C:/Code folder?
Hi, can you create a node.js application in the C:/Projects/Code folder?

In this case, the LUIS API can extract the words “.net core,” “node.js,” “C:/Code”, and “C:/Projects/Code” from the user requests, and our application can further process the extracted data to create the respective projects. See the screenshot of the LUIS output below. Based on intent = CreateApplicationProject and AppType = node.js, the application gets enough data for the next steps.
LUIS Output Screen Shot
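
For comparison with the QnA Maker call shown earlier, here is a hedged sketch of calling the LUIS v3 prediction endpoint from TypeScript. The resource name, app ID, and prediction key are placeholders, and the exact intent and entity names depend on how the LUIS app is modeled.

// Sketch: send an utterance to the LUIS v3 prediction endpoint and read back
// the top intent and extracted entities. All credentials below are placeholders.
const luisEndpoint = 'https://<your-luis-resource>.cognitiveservices.azure.com';
const appId = '<luis-app-id>';
const predictionKey = '<prediction-key>';

async function predict(utterance: string): Promise<void> {
  const url =
    `${luisEndpoint}/luis/prediction/v3.0/apps/${appId}/slots/production/predict` +
    `?subscription-key=${predictionKey}&query=${encodeURIComponent(utterance)}`;

  const response = await fetch(url);
  const result = await response.json();

  // e.g. topIntent = "CreateApplicationProject"; entities may include AppType and the folder path.
  console.log('Intent:', result.prediction?.topIntent);
  console.log('Entities:', JSON.stringify(result.prediction?.entities));
}

predict('Hi, can you create the node.js application in C:/Projects/Code folder?');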

Lessons Learned

Below are a few lessons learned and essential points about the QnA Maker service.

  • You can easily use Postman to test the various QnA Maker REST APIs.
  • Azure QnA Maker permissions can’t be set from the QnA Maker portal. If you would like to share a knowledge base with multiple users, you must grant permission on the back-end resources using Azure role-based access control (Azure RBAC). You can create a group (example: QnAAdministrators) in Azure AD and grant access to the resource group (the resource group with all the QnA Maker resources) by assigning the contributor role to QnAAdministrators at that scope.
  • You can rename your knowledge base from QnA Portal anytime; it does not change the Knowledgebase Key or Endpoint Key.
  • Take advantage of the Metadata Tag feature so you do not need to design separate knowledge bases for each department.
  • You can add a tag to each question.
  • QnA Maker works best if you import both questions and answers into the knowledgebase. If you would like to search only on questions, use the RankerType=QuestionOnly in the POST body of the GenerateAnswer request.
  • If you did not turn on Application Insights telemetry (logging) when you created the QnA Maker service, you can turn it on afterward on the back-end App Service resource.
  • Application Insights telemetry is very useful for tracking all the unanswered questions, and you do not need to write custom code for it. Run the query below to view all unanswered questions.
requests
| where url endswith "generateAnswer"
| project timestamp, id, url
| parse kind = regex url with *"(?i)knowledgebases/"KbId"/generateAnswer"
| join kind=inner (
    traces
    | extend id = operation_ParentId
) on id
| extend question = tostring(customDimensions['Question'])
| extend answer = tostring(customDimensions['Answer'])
| extend score = tostring(customDimensions['Score'])
| where score == "0" and message == "QnAMaker GenerateAnswer"
| project timestamp, KbId, question, answer, score
| order by timestamp desc
  • If you are using the Train API POST call to send alternative phrases, it takes 30 to 40 minutes for them to show up in the QnA Maker portal. Once the alternative phrases show up in the portal, you need to train and publish the knowledge base again. For the Train API to work, make sure you turn on Active Learning in the QnA Maker portal settings.
  • QnA Maker has a batch testing tool, which is an exe file, but you may not be able to run it from production environment desktops where uploading an exe file is blocked. In this case, we can easily write custom front-end node.js code to read the questions from a CSV or Excel file and, for each question, call the GenerateAnswer API to get the answer and score and write the results back to an output file (a minimal sketch follows this list).
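
Below is a minimal TypeScript/Node.js sketch of that custom batch test. It assumes one question per line in the input file and writes plain CSV instead of Excel to keep the example dependency-free; the host, knowledge base ID, and endpoint key are placeholders, and the GenerateAnswer call is the same one shown earlier.

import { readFileSync, writeFileSync } from 'node:fs';

// Read questions from a file, call GenerateAnswer for each, and write answer + score back out.
const host = 'https://your-qnamaker-app.azurewebsites.net/qnamaker'; // placeholder
const kbId = '<knowledge-base-id>';                                  // placeholder
const endpointKey = '<endpoint-key>';                                // placeholder

async function batchTest(inputFile: string, outputFile: string): Promise<void> {
  // Assumes one question per line in the input file.
  const questions = readFileSync(inputFile, 'utf8').split(/\r?\n/).filter(Boolean);
  const rows = ['question,answer,score'];

  for (const question of questions) {
    const response = await fetch(`${host}/knowledgebases/${kbId}/generateAnswer`, {
      method: 'POST',
      headers: {
        'Authorization': `EndpointKey ${endpointKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ question, top: 1 }),
    });
    const result = await response.json();
    const best = result.answers?.[0];
    rows.push(`"${question}","${best?.answer ?? ''}",${best?.score ?? 0}`);
  }

  writeFileSync(outputFile, rows.join('\n'));
}

batchTest('questions.txt', 'results.csv');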

Conclusion

The beauty of QnA Maker is its ability to import your question-and-answer sets from various files such as Word, PDF, text, and Excel. You can also bring in questions and answers from published web pages. Once your knowledge bases are in QnA Maker, you can quickly design question-and-answer bot applications without writing much custom code. QnA Maker is smart enough to match the full question text or phrases from your questions and answers, and it can even handle spelling mistakes and return the correct answer.

QnA Maker is not only the QnA Maker portal site; it also has back-end resources created in your Azure subscription. If you plan to use QnA Maker for larger knowledge bases, consider increasing the SKU of the back-end resources and hardening their security.

If you plan to design an application that needs to extract information from the user’s question and process it further, consider using LUIS.