
Overview

One of the challenges I had when learning Dynamics 365 / Power Platform development was how to add custom functionality to the User Interface. The .NET developer in me was frustrated when I wanted to add a button to the page that triggered some server-side code. Eventually, I learned a pattern of adding HTML / JavaScript resources to the page that would create an Annotation (Note). A plugin would intercept the Annotation and get the Regarding object. This would allow me to trigger server-side code from the User Interface, but it was not as graceful as I would have liked. A colleague of mine, Patrick O'Gorman, had been telling me about Power Apps Component Framework (PCF) controls for some time, and I recently decided to start experimenting with them. In this blog post, I will illustrate how I could use a PCF control to trigger a flow in Power Automate.

Creating the PCF Control

Before creating the control, I recommend reviewing both the Microsoft Learning Paths for Power Platform Component Framework and Patrick O’Gorman’s PCF Demo.

I begin by creating a folder to house the project and then follow these steps:

  • Open Visual Studio (VS) Code
  • In VS Code, open the folder that was created for the project
  • Click View and then Terminal

In the terminal window, I do the following:

  • Initialize the project: pac pcf init --namespace PCFControls --name PCFFlowTriggerDemo --template field
  • Install TypeScript: npm install typescript
  • Build the project: npm run build
  • Run the project: npm start

When I run the project, the PCF Control Sandbox opens. It is empty at this point, but not for long.

Empty PCF Control

I close the Sandbox window and use CTRL+C to terminate the Sandbox in the terminal window.

Creating the Power Automate Flow

For this project, I will set the control aside for now and create the flow, because I will need the endpoint it creates to use in the control. For this example, I will create a simple flow that receives a Contact Id from the PCF Control and then emails the Contact. I realize there are better methods for emailing a client within the Model-Driven App, but this is just an example. I create this flow by following these steps:

Configure the Trigger

Login to Power Automate

Create New Flow and choose Instant cloud flow

Create New Flow and choose instant cloud flow

Enter the name of the flow and choose When an HTTP request is received (Request) for the trigger.

Build an instant cloud flow

When the flow opens, expand the trigger and select Use sample payload to generate schema.
Use the following JavaScript Object Notation (JSON)
{ "contact" : { "contactid": "00000000-0000-0000-0000-000000000000" } }

Enter or paste a sample JSON payload

Click Done and the schema generates:

When HTTP request is received

Check the Origin

Next, I verify that the request's origin is from my Power App. This prevents someone from using Postman, Fiddler, etc. to build a request and hit the endpoint maliciously.

Click New Step > Control > Condition
Click left-hand Choose a value > Expression and enter the following:
triggerOutputs()['headers']['origin']

Choose a Value and Expression

Click the right-hand Choose a value and enter the root URL of the Power App (ex. https://orgc76a7275.crm.dynamics.com)

Email the Contact

Inside the Yes condition, click Add an action > choose Microsoft Dataverse as the connector and then Get a row by ID.
Select Contact as the table and select contactid from Dynamic content in the Row ID field.

Select contact as the table

Inside the Yes condition, click Add an action > type Outlook, select Office 365 Outlook as the connector type, and then Send an email.
Click To > Add dynamic content and then select Email.

Send an email

I fill in other required fields to send a test email.

Send a test email

Click Save

Get the POST URL

Now that the flow is complete and saved, the POST URL is available. I scroll back up to the top of the flow, expand the trigger, copy the URL, and save it for later.

HTTP Post URL

Triggering the Flow from the Control

Now that I have the PCF control project in place and the flow complete, I can finish the control. I return to VS Code, open the index.ts, and perform the following steps:
First, I add variables for the current context and the container that I will use to build the control. Add the following lines just above the constructor:

    private _context: ComponentFramework.Context<IInputs>;
    private _container: HTMLDivElement;

Triggering the Flow from the Control

Next, I need to create an HTML Div element, add a button, and add an event listener for the click event. Locate the init method and replace the code inside with the following lines:

    this._context = context;
    this._container = document.createElement("div");
    container.appendChild(this._container);

    let button: HTMLButtonElement = document.createElement("button");
    button.id = "cf6e3bc1-1cc1-4404-9171-9b7c981f97f6";
    button.innerHTML = "Email Contact";

    button.addEventListener("click", this.emailContactOnClickHandler.bind(this));
    this._container.appendChild(button);

Triggering flow from control

Finally, I need to add the click event handler. This code gets the Id of the current record, builds the JSON payload that I used earlier, posts it to the URL from our flow, and disables the button (to prevent spamming it). Below the destroy method but inside the class, add the following method:

	private emailContactOnClickHandler(event: Event): void {
		var context = this._context;

		var contactId = (<any>context).page.entityId;
		var contactPayload = '{ "contact" : { "contactid": "' + contactId + '"} }';

		var req = new XMLHttpRequest();
		var url = "[URL FROM FLOW]";
		req.open("POST", url, true);
		req.setRequestHeader('Content-Type', 'application/json');
		req.send(contactPayload);

		let button = document.getElementById("cf6e3bc1-1cc1-4404-9171-9b7c981f97f6")
		if (button) {
			(button as HTMLButtonElement).innerHTML = "Email Sent...";
			(button as HTMLButtonElement).disabled = true;
		}
	}

Destroy Method

You may have noticed that I am using a GUID for the button id; there is nothing special about this GUID, and any unique value can be used. I just needed a unique Id for the button control so I could retrieve it later. I also left out the URL from the flow; the [URL FROM FLOW] placeholder should be replaced with the value copied when the flow was saved. Now I can build and run the project:

  • Build the project: npm run build
  • Run the project: npm start

When I run the project, the PCF Control Sandbox opens, and we can see our button element now.

Power Apps Test Environment

I close the Sandbox window and use CTRL+C to terminate the Sandbox in the terminal window.

Deploying the PCF Control

Now I am ready to deploy the control to my Model-Driven App; in this case, I am using a Dynamics 365 Sales Trial. I deploy the new control by doing the following:

Publishing the Solution

  • In the terminal window, I authenticate with my App: pac auth create --url https://orgc76a7275.crm.dynamics.com
  • I build the project: npm run build
  • When the login window pops up, I enter my credentials
  • I deploy the solution: pac pcf push --publisher-prefix ais

Adding the Control to the UI

I click on the gear, select Advanced Settings.

Advanced Settings

I click the drop-down next to Settings and then click Customizations and then Customize the System.

Customize the System

Note: Normally, I would use a custom solution instead of Customize the System. When I use Customize the System, I am editing the default solution, which I am not concerned about for this example.

In the Solution window that opens, I expand Entities, locate and expand Contact, select Fields, and then click New.

Contact Fields

For the Display Name, I use "PCF Email Contact" and set Searchable to false; this prevents the field from appearing as an option in search. This field will serve as a placeholder for my control. I leave everything else as-is and click Save and Close.

PCF Email Contact

In the Solution, select Forms and then open the form. In my case, the entity is configured to use the Sales Insights form by default, so that is the one I will edit.

Sales Insights

In the form window that opens, I locate my newly created field and drag it right below the Email field.

New Field with Email

I double-click on the field, uncheck the Display label on the form, and then select the Controls tab.

Display label on the form

In the window that pops up, I can see my new control.

Add PCF Trigger Control

I select it and click Add.
In the Field Properties window, I tick the options Web, Phone, Tablet and click Ok.

Field Properties

In the form window, I click Save and Close.
In the Solutions window, I click Publish All Customizations.

PCF Control in Action

Now that I have the PCF control deployed and the flow waiting for a POST request, I am ready to test. I have created a Contact record with my email address, and now when I edit it, I see the Email Contact button.

PCF Control in Action

When I click it, it is disabled and changes to Email Sent.

Disable and Send Email

When I navigate to Power Automate, I see that my flow has completed successfully.

Successful flow completed

Finally, I check my email and it is correct.

Check email

Conclusion

As you can see, wiring up a control to your Power App with server-side logic is easy with Power Platform. Power Automate provides you with fantastic low / no-code solutions, and the Power Apps Component Framework makes it easy to design, publish and connect controls to your user interface.

I recently had the opportunity to perform a lift-and-shift migration of a SharePoint 2016 environment to cloud Infrastructure as a Service (IaaS) in Amazon Web Services (AWS). To support the long-term goals of the client, Okta would be implemented for authentication. Additionally, the client had several product integrations, including SQL Server Reporting Services (SSRS) (Integrated Mode), Office Online Server (OOS), Gimmal, and Nintex.

One of the first problems that I ran into was that very little knowledge or lessons learned were available. Okta does provide an Integration Guide; however, it simply walks you through setting up Okta as a Trusted Claims Provider for SharePoint. The guide does not cover or even mention potential architectural, migration, or integration concerns. I found a useful article at SharePointDoctors.com that does a great job filling in some of the gaps left in the Okta documentation, and I highly recommend reviewing it. One of the most critical points made by SharePointDoctors.com was to "Test, Test, and Test Again," which is exactly what we did to discover and solve migration issues. In this post, I will share some of the issues we encountered migrating to Okta and what we did to mitigate them.

Lesson 1: Authentication Providers and the People Picker

When configuring Okta, there is no way to switch to Okta entirely; Windows Authentication is still required for service accounts and search crawls. In an Okta product presentation, around the 8:20 mark, the presenter glosses over this fact, claiming that when they are ready for the final cutover, they disable Windows Authentication.

Claims Authentication Types

Initially, we had both Okta and Windows Authentication enabled for the Default Zone. If you configure SharePoint this way, users will be asked to select which credentials to use to log on when they navigate to the site.

Windows Authentication Sign In
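
For reference, enabling both providers on a zone can also be expressed in PowerShell. The following is a hedged sketch only: the web application URL and the token issuer name "Okta" are illustrative, and it assumes the Okta trusted identity token issuer has already been created in the farm.

Add-PSSnapin Microsoft.SharePoint.Powershell

# Windows (NTLM) provider plus the Okta trusted identity provider on the same zone
$sts     = Get-SPTrustedIdentityTokenIssuer -Identity "Okta"
$windows = New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication
$okta    = New-SPAuthenticationProvider -TrustedIdentityTokenIssuer $sts

Set-SPWebApplication -Identity "https://internal.contoso.com" -Zone Default -AuthenticationProvider $windows, $okta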

If you do not want users to be prompted with this, follow these steps:

  1. Open Central Administration
  2. Select Manage Web Applications
  3. Select your web application and then Authentication Providers from the ribbon
  4. Select the Zone (usually Default)
  5. Scroll down to the Sign In Page URL and select Custom Sign In Page
  6. Enter /_trust/

Custom Sign In Page

This will force the user to use Okta when navigating to the site. You can find additional information about the login sequence here.

When we configured both authentication modes in the Default Zone, we found that whenever the People Picker was used, it would return two users: the Windows User and the Okta User. We knew that this would be very confusing and decided to extend the Web Application with a different Zone. Thus, we had the Default Zone URL (ex. internal.contoso.com) with Windows Authentication and an extended Internet Zone URL (ex. contoso.com) with Okta Authentication. Using this arrangement, admins, third-party integrations, and search crawls could operate in the Default Zone, and all users would have access within the Internet Zone. You will understand why we chose this as you see the issues we encountered later in this article.
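
As a point of reference, extending the web application into a second zone can be scripted as well. This is a hedged sketch only; the URLs, the extension name, and the SSL settings are illustrative and would need to match your environment, and the Okta provider still has to be applied to the new zone afterward.

Add-PSSnapin Microsoft.SharePoint.Powershell

$webApp = Get-SPWebApplication "https://internal.contoso.com"

# Extend the existing web application into the Internet zone with its own URL
New-SPWebApplicationExtension -Identity $webApp -Name "Contoso Internet" -Zone Internet -Url "https://contoso.com" -HostHeader "contoso.com" -Port 443 -SecureSocketsLayer
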
If the issues that we encountered are not applicable and you decide to use both authentication types in the Default Zone, you can hide AD from the People Picker using the following script:

Add-PSSnapin Microsoft.SharePoint.Powershell
$cpm = Get-SPClaimProviderManager
$ad = get-spclaimprovider -identity "AD"
$ad.IsVisible = $false
$cpm.Update()


Lesson 2: Migrating Claims

Using Move-SPUser

When I started scripting out the migration of users, I initially started with the script provided by SharePointDoctors.com, iterating over each site collection and calling Move-SPUser for each user.
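
For context, the per-site-collection approach looks roughly like the sketch below. It is illustrative only: the claim values are examples, and the character after "i:0" in the Okta claim depends on the claim type encoding registered in your farm.

Add-PSSnapin Microsoft.SharePoint.Powershell

foreach ($site in Get-SPSite -Limit All) {
    # Look for the Windows claim in each site collection and move it to the Okta claim
    $user = Get-SPUser -Web $site.RootWeb -Identity "i:0#.w|ais\chettinger" -ErrorAction SilentlyContinue
    if ($null -ne $user) {
        Move-SPUser -Identity $user -NewAlias "i:0-.t|okta|chettinger" -IgnoreSid -Confirm:$false
    }
    $site.Dispose()
}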

However, SharePointDoctors.com warns that migrating a user twice is bad:
“SharePoint deletes all instances of the original user and replaces it with a new blank one that owns nothing. As a result, you lose all permissions information about the user.”

This concerned me greatly, since we had four web applications and numerous site collections with the same user in several places. I was concerned that if Move-SPUser was called more than once for the same user, the user would be overwritten; after meeting with Microsoft, I found that this is not the case. The overwrite concern applies only if a user logged into the system pre-migration, creating a scenario where both an Okta user (ex. c:0-.t|okta|chettinger) and a Windows user (ex. i:0#.w|ais\chettinger) exist in the system. Once Move-SPUser migrated the Windows User, the original Okta User would be overwritten with a new Okta user. In other words, there is no problem with calling Move-SPUser more than once if you happen to do so over multiple site collections.

Planning Time for User Migration

Another interesting issue that we encountered was the time it took to migrate many users (20k+). After working with Microsoft and researching the logs, we found that it took longer to migrate a user on servers where 3rd party integrations were installed. For example, when we ran the script on the SSRS or Search servers, it would only take 1 second to migrate a user. If we ran it on one of the application servers, it would take 1.5 minutes per user. In our case, we had Nintex and Gimmal installed. After working with Microsoft and testing thoroughly, we determined that it was perfectly safe to run the migration on the faster servers and that there was no negative impact on the migration.

Using SPFarm.MigrateUserAccount()

While working on the script to migrate groups using SPFarm.MigrateGroup(), I found that there was also an SPFarm.MigrateUserAccount() method. It seemed more efficient to build a list of users and iterate over it, calling SPFarm.MigrateUserAccount() for each one. Once again, we met with Microsoft, and they assured us that the SPFarm.MigrateUserAccount() method would behave just like the Move-SPUser command, only at the farm level. Ultimately, we used this as it allowed us to batch up the user migration into PowerShell Jobs easily. This is the script that we ended up using across multiple servers.

Add-PSSnapin Microsoft.SharePoint.Powershell
$domain = "ais"
$dryrun = $true

$multiplier = 0   #The index of this server instance migrating users (0,1,2,3,etc.)
$jobCount = 20    #The number of parallel jobs
$chunkSize = 100  #The number of users to process at one time
$maxUsers = 2500  #The number of users to process on this server

$usersString = 
"<ActiveUsers>
    <ActiveUser>i:0#.w|ais\user1</ActiveUser>
    <ActiveUser>i:0#.w|ais\user2</ActiveUser>
    <ActiveUser>i:0#.w|ais\user3</ActiveUser>
    <ActiveUser>i:0#.w|ais\user4</ActiveUser>
</ActiveUsers>"

#Using a string in this example, but a separate file would be more appropriate
$ScriptRoot = Split-Path $MyInvocation.MyCommand.Path
#[xml]$UsersXml = (Get-Content "$($ScriptRoot)\ActiveUsers.xml")

[xml]$UsersXml = $usersString
$users = $UsersXml.ActiveUsers.ActiveUser

#Use Dry Run to test
$dryrunText = "[DRYRUN]"
if($dryrun -eq $false){
    $dryrunText = ""
}

if($maxUsers -ne $null){
    $users = $users | select -Skip ($maxUsers*$multiplier) | select -first $maxUsers
}

$oktaClaimChar = Get-SPClaimTypeEncoding | Where-Object { $_.ClaimType -like '*Okta*' }

Write-Host "$($dryrunText)Start: $(Get-Date)"

#Build Chunks
$chunks = [System.Collections.ArrayList]::new()
for ($i = 0; $i -lt $users.Count; $i += $chunkSize) {

    if (($users.Count - $i) -gt ($chunkSize-1)) {
        $chunks.add($users[$i..($i + ($chunkSize-1))]) | Out-Null
    }
    else {
        $chunks.add($users[$i..($users.Count - 1)]) | Out-Null
    }
}


for ($i = 0; $i -lt $chunks.Count; $i++) {  
    $chunk = $chunks[$i]
    Write-Progress -Id 0 -Activity Updating -Status 'Progress->' -PercentComplete ($i/$chunks.Count * 100) -CurrentOperation Chunks
    $running = @(Get-Job | Where-Object { $_.State -eq 'Running' })
    if ($running.Count -ge $jobCount) {
        $running | Wait-Job -Any | Out-Null
    }
    $jobName = "Chunk$i"
    $job = Start-Job -Name $jobName {
        Add-PSSnapin Microsoft.SharePoint.Powershell
        $chunk = $using:chunk
        $dryrun = $using:dryrun
        $dryrunText = $using:dryrunText
        $i = $using:i
        
        $oktaClaimChar = $using:oktaClaimChar        
        $farm = Get-SPFarm

        for ($j = 0; $j -lt $chunk.Count; $j++) {
            $user = $chunk[$j] 
            if($user -ne $null)
            {
                $oldUserName = $user.ToLower()
                $newUserName =  $user.Replace("i:0#.w|", "i:0$($oktaClaimChar.EncodingCharacter).t|okta|")
                $newUserName =  $newUserName.Replace("$domain\", "")               
                if($oldUserName -ne $newUserName)
                {
                    Write-Host "  $($dryrunText) Moving User $oldUserName  to $newUserName"
                    if($dryrun -eq $false)
                    {     
                        try{    
                            $farm.MigrateUserAccount($oldUserName,$newUserName,$false)
                        }catch{
                            Write-Host $_
                        }
                    }              
                }     
            }                       
        }      
    }
}
Wait-Job * | Out-Null 

# Process the results
foreach($job in Get-Job)
{
    $result = Receive-Job $job
    Write-Host $job.Name
    Write-Host $result
}
Remove-Job -State Completed
Write-Host "$($dryrunText)End: $(Get-Date)" 

Lesson 3: Integration with SQL Server Reporting Services (SSRS)

As mentioned earlier, our environment was running SQL Server Reporting Services (SSRS) – Integrated Mode. There were no changes necessary for reports to work for the end-user. However, for report authors to create and edit reports, they needed to use Windows authentication. How you decide to handle this is tightly coupled with what I covered in Lessons 1 and 2. If you choose to use both Okta and Windows Authentication in a single zone, you will face issues when editing a report with the Report Builder while logged in as an Okta user.

This was the second reason why we went with two authentication zones. To edit the report, the authors would connect to the Default Zone URL (ex. https://internal.contoso.com); however, if the data source is a SharePoint list, the Internet Zone URL is used (ex. https://contoso.com). SharePoint will respect the permissions of the SharePoint user (in this case, Okta).

Input Data Source Type

For all of this to work together, we migrated content, then migrated users and groups to Okta claims, and then added new Windows groups so that certain users could log in with Windows credentials and edit reports.
CAUTION: This creates the scenario that I warned about, so make sure your user migration was successful before adding these groups and letting report authors access the system. If you migrate the Windows user to Okta, and then the user logs in with Windows credentials, there will be two claims in the system (ex. c:0-.t|okta|chettinger and i:0#.w|ais\chettinger). If you were to migrate the new Windows user a second time, it would likely overwrite the Okta user and its permissions.
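
A quick way to spot accounts that now exist under both claim types is to compare the user lists in a site collection. This is a hedged sketch; the URL, domain, and claim patterns are illustrative.

$web = Get-SPWeb "https://contoso.com"
$allUsers = Get-SPUser -Web $web -Limit All

$windowsUsers = $allUsers | Where-Object { $_.UserLogin -like "i:0#.w|ais\*" }
$oktaUsers    = $allUsers | Where-Object { $_.UserLogin -like "i:0*.t|okta|*" }

foreach ($winUser in $windowsUsers) {
    # Compare on the account name portion of the claim
    $name = ($winUser.UserLogin -split '\\')[-1]
    if ($oktaUsers | Where-Object { $_.UserLogin -like "*|okta|$name" }) {
        Write-Host "Both Windows and Okta claims exist for $name"
    }
}
$web.Dispose()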

Lesson 4: Integration with Microsoft Office Products

Microsoft Word

Okta did not seem to consider Microsoft Office when developing its SharePoint integration solution. Editing items in Word, Excel, and PowerPoint is an important feature, and our users wanted it to work. When the Open in Word option is used on a file in SharePoint, the Word application will open on the user’s computer and attempt to authenticate with Okta.

Open in Word

Under the hood, Office products use an outdated browser control based on Internet Explorer version 9. The Okta login page would not render correctly in the browser control due to compatibility issues; instead, it would throw a script error, and controls would not render.

Script Error

We had to work with Okta to get them to change it for our login page. Microsoft loosely explains how to fix it in this article; however, Okta had to detect the browser version and add the meta tag accordingly. Ultimately, if you plan on using Office products with SharePoint, you will need to work with Okta to get your login page fixed.

Microsoft Outlook

We also ran into a unique issue with Microsoft Outlook and adding Calendars. Outlook 365 users had to go to File > Options > Trust Center > Form-based Sign-in and choose Ask me what to do for each host.

Trust Center

For Outlook clients before 365 (Outlook 2019 in our case), the Form-based Sign-In option was unavailable. We had to work with our Group Policy Object (GPO) Administrators and create a GPO to set this and add the hostname (ex. contoso.com). Unfortunately, this only partially fixed the problem; once users added the calendar, they started getting prompted for Windows credentials. After working with Microsoft, we found that when the calendar is added to Outlook, Outlook stores a URL for it, and it takes that URL from SharePoint based on the first Zone in which it finds one.

Outlook checks the Zones in the following order: Intranet, Default, Extranet, Internet, Custom. If you remember, we had the following:
  • Intranet – empty
  • Default – https://internal.contoso.com (Windows Auth)
  • Internet – https://contoso.com (Okta Auth)
  • Extranet – empty
  • Custom – empty

Outlook was storing the URL from the Default Zone, which used Windows authentication, and so it prompted the user. The fix was to move https://contoso.com to the Intranet Zone so that Outlook would store that URL instead (a sketch of the corresponding alternate access mapping change follows the list below):

  • Intranet – https://contoso.com (Okta Auth)
  • Default – https://internal.contoso.com (Windows Auth)
  • Internet – empty
  • Extranet – empty
  • Custom – empty
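
A hedged sketch of that alternate access mapping change (URLs are illustrative):

# Move the Okta URL from the Internet zone to the Intranet zone so Outlook finds it first
Set-SPAlternateURL -Identity "https://contoso.com" -Zone Intranet

# Verify the resulting zone assignments for the web application
Get-SPAlternateURL -WebApplication "https://internal.contoso.com" | Select-Object IncomingUrl, Zone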

Lesson 5: Integration with Nintex Forms and Workflows

When we started testing Nintex Forms and Workflows, we quickly found that users had not been migrated as we had hoped. So the first thing we did was add the Nintex MigrateUser operation right after the farm MigrateUserAccount call in the script above:

$farm.MigrateUserAccount($oldUserName,$newUserName,$false)
NWAdmin.exe -o MigrateUser -oldUser $oldUserName -newUser $newUserName 

According to Nintex, this command only updates user settings and history, not the workflow definitions or running workflows. So, to fix the workflows, I wrote a script that recursively goes through all of SharePoint and looks for the hidden library NintexWorkflows. For each of these libraries, it exports each workflow, replaces the Windows claim with an Okta claim, and then redeploys the workflow. It does all of this using NWAdmin.exe operations and was approved by Nintex. Here is the script that we used:

$domain = "ais"
$dryrun = $true
$dryrunText = "DRYRUN"


$oktaClaimChar = Get-SPClaimTypeEncoding | Where-Object { $_.ClaimType -like '*Okta*' }
$encodedValue = [int]([char]($oktaClaimChar.EncodingCharacter))
$encodingCharacter = "&#$($encodedValue);"
if($dryrun -eq $false){
    $dryrunText = ""
}

function CheckWorkflow($asset)
{ 
    $text = [System.Text.Encoding]::ASCII.GetString($asset.OpenBinary())

    if($text.Contains("i:0#.w|$domain\"))
    {        

        try {
            $assetConfig = $asset.ParentFolder.Files | Where-Object Name -eq $($asset.Title +".xoml.wfconfig.xml")
            $configText = [System.Text.Encoding]::ASCII.GetString($assetConfig.OpenBinary())
            [xml]$configXml = $configText
            $listId = $configXml.WorkflowConfig.Association.ListID

            $path = $asset.Web.Url.Replace('https://','')
            $pattern = '[\\/]'
            $path = $path -replace $pattern, '-'
            $nwfFile = "C:\Temp\NintexMigration\$path\$($asset.title).nwf"


            if((Test-Path "C:\Temp\NintexMigration\$path") -eq $false){
                New-Item -ItemType Directory -Path "C:\Temp\NintexMigration\$path"
            }

            if($null -ne $listId)
            {
                $list = $asset.Web.Lists | Where-Object Id -eq $listId
                $listName = $list.Title

                $output = & NWAdmin.exe -o ExportWorkflow -siteUrl $($asset.Web.Url) -list "$($listName)" -workflowName "$($asset.title)" -fileName "$($nwfFile)" -workflowtype list           
                if ($output  -eq "Exporting complete.")
                {
                    $nwfText = Get-Content -Path "$($nwfFile)"
                    $newNwfText = $nwfText
                    $newNwfText = $newNwfText.Replace("i:0#.w|$domain\","i:0$($encodingCharacter).t|okta|")    
                    Set-Content -Path "$($nwfFile)" -Value $newNwfText

                    Write-Host "$dryrun TextChange Type=""List"" SiteUrl=""$($asset.Web.Url)"" TargetList=""$($listName)"" WorkflowName=""$($asset.title)"" NWFFile=""$($nwfFile)"" Web=""$($asset.Web.Url)"" File=""$($asset.Url)"" DateTime=""$(get-date -f MM-dd-yyyy_HH_mm_ss)"""
                    if($dryrun -eq $false) {  
                        & NWAdmin.exe -o DeployWorkflow -siteUrl $($asset.Web.Url) -targetlist "$($listName)" -workflowName "$($asset.title)" -nwffile "$($nwfFile)" -overwrite
                    }
                } else{
                    Write-Host "$dryrunText $output"
                }
            }
            else
            {
                $output = & NWAdmin.exe -o ExportWorkflow -siteUrl $($asset.Web.Url) -workflowName "$($asset.title)" -fileName "$($nwfFile)" -workflowtype site            
                if ($output  -eq "Exporting complete.")
                {
                    $nwfText = Get-Content -Path "$($nwfFile)"
                    $newNwfText = $nwfText
                    $newNwfText = $newNwfText.Replace("i:0#.w|$domain\","i:0$($encodingCharacter).t|okta|")   
                    Set-Content -Path "$($nwfFile)" -Value $newNwfText

                    Write-Host "$dryrun TextChange Type=""Site"" SiteUrl=""$($asset.Web.Url)"" WorkflowName=""$($asset.title)"" NWFFile=""$($nwfFile)"" Web=""$($asset.Web.Url)"" File=""$($asset.Url)"" DateTime=""$(get-date -f MM-dd-yyyy_HH_mm_ss)"""
                    if($dryrun -eq $false) {  
                        & NWAdmin.exe -o DeployWorkflow -siteUrl $($asset.Web.Url) -workflowName "$($asset.title)" -nwffile "$($nwfFile)" -overwrite
                    }
                } else{
                    Write-Host "$dryrunText $output"
                }
            }

        } catch {
            Write-Host $_
        }
    }   
}

function CheckWorkflows($w)
{
    foreach ($list in $w.Lists)
    {
        if ( $list.title.tolower().contains( "nintexworkflows" ) )
        {
            foreach ($item in $list.items)
            {
                $asset = $item.file
                CheckWorkflow($asset)
            }
        }
    }
    foreach($sub in $w.Webs)
    {
        CheckWorkflows($sub)    
    }
}

$spWebApps = Get-SPWebApplication
foreach ($spWebApp in $spWebApps)
{      
    foreach ($spSite in $spWebApp.Sites)
    {
        if ($null -ne $spSite)
        {
            CheckWorkflows($spSite.RootWeb)
            $spSite.Dispose()
        }
    } 
}

Conclusion

There is much to consider if you want to use Okta as your authentication provider for SharePoint On-Premises. If you are using integrations such as Nintex, SSRS, and Microsoft Office, there will be a lot of work ahead of you. Hopefully, this blog post will save you some time with planning and risk mitigation. Either way, the most important takeaway is to be sure to test thoroughly.


This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. This post builds on the first two parts by adding smartphone camera capabilities to the interface. It starts by covering how to add an Image field to the Item form. Next, it illustrates how to implement bar code readers in the app. Last, a demonstration of the capabilities is performed with screen captures from the smartphone.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You know software development fundamentals
  • You have completed Part One of this series
  • You have completed Part Two of this series

Adding the Item Image to the User Interface

When we created the Item table in the Microsoft Dataverse, we included an Image column. Now that we are ready to start using the phone, we can add this to the interface.

Expand the Item Edit Screen and select the frmItemEdit. Click Edit Fields and select the Image field.

Item Edit Screen for frmItemEdit

Reposition the Image field on the form if necessary.
Reposition the Image field

Adding the Bar Code Scanner to the User Interface

Some updates need to be made to the edit form to add the barcode readers to the interface.

Model Number field

Select the Model Number field (make sure the field is selected, not the Card). Change the Width function to:

(Parent.Width - 60)*.8

Set the Default to:

If(IsBlank(modelScanValue),Parent.Default,modelScanValue)

Note: this sets the value to modelScanValue if it is not empty; otherwise, it uses the field's default value. The formula will have an error until the steps below are complete. For more information, see If and Switch functions in Power Apps and Blank, Coalesce, IsBlank, and IsEmpty functions in Power Apps.

Serial Number field

Select the Serial Number field (make sure the field is selected, not the Card). Change the Width function to:

(Parent.Width - 60)*.8

Set the Default to:

If(IsBlank(serialScanValue),Parent.Default,serialScanValue)

Cancel Button

Select the cancel icon (X) and change the OnSelect function to:

ResetForm(ItemEditForm);Navigate(ItemsScreen,ScreenTransition.Fade);Set(modelScanValue,"");Set(serialScanValue,""); 

Barcode Scanner Options

Select the Model Number Card, click on the + button on the far left, expand Media and select Barcode Scanner
Expand Media and select Barcode Scanner

Select the new button and set the following properties:

  • Text: "|||" (Note: I chose three pipes because it kind of resembles a barcode and is small enough to fit)
  • Width: (Parent.Width - 60)*.1
  • Size: 20
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

OnScan:

Set(modelScanValue, barcodeModelNumber.Value)

Note: this sets the variable modelScanValue to the scanned value. For more information, see the Set function in Power Apps.

Rename the barcode scanner to barcodeModelNumber
Select the Serial Number Card, click on the + button on the far left, expand Media and select Barcode Scanner.

Select the new button and set the following properties:

  • Text: "|||"
  • Width: (Parent.Width - 60)*.1
  • Size: 20
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

OnScan:

Set(serialScanValue, barcodeSerialNumber.Value)

Note: this sets the variable serialScanValue to the scanned value.
Rename the barcode scanner to barcodeSerialNumber

Review

Several changes have been made to the app, so let us review what has changed:

  1. We added the Image field to the Item interface to add a photo of the item to the record.
  2. We added two new barcode scanner buttons and positioned them to match other interface elements
  3. Each barcode button stores its scanned value into its corresponding variable
  4. The Model Number and Serial Number fields will use their corresponding variables as their values if available; otherwise, they use the defaults.

Reviewing Changes

Camera Enhancements in Action

In this walkthrough, I will be using my phone for data entry. We must first File > Save and then Publish the app. I have downloaded the Power Apps mobile app to my phone and logged in with my credentials. For this example, I will be using a stick of Samsung memory. While no one would likely record a single stick of memory in a home inventory app, it is a useful example.

Next Steps

With our user interface complete, we can start adding real data into the system. In part four of this series, we will begin working on the Admin web interface, with the ability to edit all data and export it.


This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. This blog begins by covering how to create a base Canvas App from an existing table in the Microsoft Dataverse. It reviews all the screens, forms, and changes necessary to make the user interface work as defined in the requirements. By the end of this blog, a fully functioning app will be ready for data entry.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals
  • You have completed Part One of this series

Creating the Canvas App

In Part One of this series, I defined our home inventory application, its database, and how its user interface (UI) would look. I also created the underlying tables for our application. With a data source in place, I can create a Canvas App connected to the Microsoft Dataverse. Making a table-based canvas app saves time by creating the base screens. It is also valuable for reverse engineering to understand how to customize the app on your own.

When prompted, use the Items table as this is the focal point of the application.
Focus point of application

This will create a basic Canvas App; the main screen can search for items (there are none at this time, of course).
Main screen creating basic canvas app

Clicking the + opens a form to create a new record; at this time, only the Name column is provided.
Creating new record

Expand the EditScreen1 and select the EditForm1.
Next, add the custom columns that were created in the Microsoft Dataverse to EditForm1.
Adding custom columns created in Dataverse to EditForm1

Pressing the Play button and then clicking + opens the edit form below. The only problem is there are no Item Types, Locations, or Manufacturers yet.

Editing Form with Item Types, Locations, and Manufacturers

Creating the Lookup Forms

Near the top left of the window, click on New screen. Choose Form as the type.
Power Apps Home to create Lookup Forms

On the newly created screen, click on the EditForm element created. On the right side of the screen, select Item Types for the Data source. Click on the LblAppName created and change the Title Property from [Title] to Item Types. Repeat this with Manufacturers and Locations.

Take a few minutes to rename the screen and its elements following the Power Apps Canvas App Accessibility Guidelines and the PowerApps Canvas App Coding Standards and Guidelines. This is important for many reasons; for example, screen readers will read these values aloud, and "EditScreen1" would not be valuable to the end-user. Following these guidelines, I have renamed all screens, forms, and controls within the app. For example, these are the names of the screens that I will be using:

Rename the Screen and Elements

Be sure to save if you have not already; click File and then Save (or CTRL+S).

Manufacturers Lookup Form

Expand the Manufacturer Edit Screen, select the frmManufacturerEdit, click Edit Fields, and then add the columns Support Phone and Support URL. Delete the Created-On column if desired.
Click the Advanced tab and, for the OnSuccess action, enter:

Navigate('Item Edit Screen', ScreenTransition.Fade)

Note: This tells the form to navigate back to the Item Edit Screen when this form is complete. For more information, see Back and Navigate functions in Power Apps.
Insert and edit the Manufacturer

Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmManufacturerEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)
 

Note: This tells the form to clear its columns and navigate back to the Item Edit Screen. For more information, see Reset function in Power Apps and Back and Navigate functions in Power Apps.

Creating columns to navigate back to the Item Edit Screen

If necessary, rearrange elements; the form should look something like this:
Rearrange Elements to customize form

Locations Lookup Form

Since the Location table only has the Name column, there is no need to add any columns to it; however, I need to repeat the same steps from the Manufacturers form.
Select the frmLocationEdit and update the OnSuccess action to:

      Navigate('Item Edit Screen',ScreenTransition.Fade)
 

Note: This tells the form to navigate back to the Item Edit Screen when this form is complete.
Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmLocationEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)
 

Item Type Lookup Form

Since the Item Type table only has the Name column, there is no need to add any columns to it; however, I need to repeat the same steps from the Manufacturers form.
Select the frmItemTypeEdit and update the OnSuccess action to:

       Navigate('Item Edit Screen',ScreenTransition.Fade)
 

Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmItemTypeEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Completing the Form

Select the frmItemEdit and update the OnSuccess action to:

      Navigate('Items Screen',ScreenTransition.Fade)

Select the cancel icon (X) and change the OnSelect function to:

     ResetForm(frmItemEdit);Navigate('Items Screen',ScreenTransition.Fade)

Select the Item Type field (make sure the field is selected, not the Card). Change the Width function to:

 
     (Parent.Width - 60)*.8

Repeat this for Manufacturer and Location fields.

Creating the Add Buttons

Now I will add the buttons that open the forms needed to create a new Item Type, Location, and Manufacturer from the Item form. I will be selecting the data card for Item Type, Location, and Manufacturer and adding a button to it during this process. Any property not mentioned will use the default value. Each of the three buttons shares these properties:

  • Width: (Parent.Width - 60)*.1
  • Text: "+"
  • Size: 48
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

Select the Item Type card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

 NewForm(frmItemTypeEdit);Navigate('Item Type Edit Screen', ScreenTransition.None)
 

For more information, see EditForm, NewForm, SubmitForm, ResetForm, and ViewForm functions in Power Apps.
Creating Popular navigation
Customize Padding Elements

Select the Manufacturer card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

NewForm(frmManufacturerEdit);Navigate('Manufacturer Edit Screen', ScreenTransition.None)

Select the Location card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

NewForm(frmLocationEdit);Navigate('Location Edit Screen', ScreenTransition.None)

The form should look something like this:
Creating custom forms

Items Screen

Expand the Items Screen and then select the galItems. Notice that the Layout property is Title, subtitle, and body. Now, click on Edit by the Fields property. I want to change these to use Serial Number for lblSerialNumber and Model for lblModel.
Expending Item Screen

Item Details Screen

The last thing that I need to do is complete the Item Details Screen to show a read-only version of the Item record.
Expand the Item Details Screen and select the frmItemDetails. Click on Edit Fields and add the custom fields created in Microsoft Dataverse.
Detailed Item Screen

Next, rearrange the fields to match our edit form. Delete the Created-On field if desired.
Customize Field to match Edit form

Click on the Date Purchased value, click Advanced, and Unlock. Change the Text value to:

      DateValue(Parent.Default)
 

This will hide the time as it is not relevant. For more information, see DateValue, TimeValue, and DateTimeValue functions in Power Apps.
Adjusting Date Purchased capability

Testing the User Interface

Now, I can test out the implementation:

Next Steps

With our base UI in place, I can enter data; however, there are a couple of final touches that need to be completed. In part three of the series, I will be adding the ability to add item images and barcode scanning.

This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. It begins by explaining why we would want to create a home inventory system. It then takes the reader over the requirements of the system, including potential future enhancements. It takes time to explain the database model and how it translates into tables in the Microsoft Dataverse. This post concludes with a review of the User Interface mockup.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals

Why Create a Home Inventory?

Several years ago, I started considering the number and value of technical assets I had obtained over the years. I wondered what would happen if I were robbed, had a fire, or suffered some other disaster, and whether my insurance would cover all of my technical assets. According to Consumer Reports:

“having a list or a visual reminder of your belongings can make a big difference in how much your homeowners’ insurance will pay, and thus how well you’ll recover financially.”

At the time, I purchased Quicken Home Inventory Manager, which was adequate; however, advancements in technology have made it obsolete. I want to use my phone and its many capabilities for data entry instead of walking back and forth to the computer. I set out designing my home inventory system using the Power Platform with these things in mind.

Requirements

The requirements of the initial system are straightforward:

  • The system should include a phone app for data entry and a web interface for administration
  • Phone and Admin Apps should provide the ability to create, update and delete Items
    • Columns: Name, Description, Item Type, Manufacturer, Serial Number, Model Number, Location, Date Purchased, Approx. Value, and Image
    • Required columns: Name, Model Number, Item Type, Manufacturer, Location, and Image
    • Item Type will look up from the Item Types table
    • Manufacturer will look up from the Manufacturers table
    • Location will look up from the Locations table
    • Users should be able to add an Item Image
  • Phone and Admin Apps should provide the ability to create Item Types
    • Columns: Name
    • Required columns: Name
  • Phone and Admin Apps should provide the ability to create Manufacturers
    • Columns: Name, Support URL, and Support Phone
    • Required columns: Name
    • Support URL should be validated
    • Support Phone should be validated
  • Phone and Admin Apps should provide the ability to create Locations
    • Columns: Name
    • Required columns: Name
  • Admin App should provide the ability to create, update and delete Item Types
  • Admin App should provide the ability to create, update and delete Locations
  • Admin App should provide the ability to create, update and delete Manufacturers
  • The system should allow Admins to export the inventory

Future Enhancements

In addition to the requirements, there are some future enhancements or “nice to haves” that I can investigate adding to the system later, for example:

  • Bar code Scanner – for items with a bar code, it would be helpful to scan these automatically. This could be for serial numbers, model numbers, or even UPC for the entire item.
  • Photo of Receipt – it would be a nice feature to include a picture of the receipt for proof-of-purchase and returns.
  • AI – While this is a tall order, it would be neat to take a picture of an item and have the system recognize what kind of item it is, perhaps even determine its Item Type.

The Database Model

Using the requirements above, I can derive a simple database model. There is an Item table that contains most of the data. Each Item record may have 0 or more Image records. Each Item will also have a lookup to a Manufacturer (ex. Microsoft, Sony, etc.), a Location (ex. Master Bedroom, Living Room, etc.), and an ItemType (ex. Appliance, Networking, Computer, etc.).

Database Model

The Dataverse Tables

Using the database model above, I can create custom tables for the system in the Microsoft Dataverse. Due to requirements in the Microsoft Dataverse, I will need to add a Name column to the Image table; I will want to handle this name’s generation when I implement the Item Images part of the application. Below is the resulting data model summary.

Dataverse Table

User Interface Mockup

For the user interface, I will begin with the phone app and develop the administration app later; I will do it in this order because I may discover challenges and new ideas as data gets added to the system. Additionally, the mockup below does not include the Item Image library, as I will cover it in a separate post. The user interface below begins with the Items screen. Here, users can search, sort, and scroll through items in the system. From the Items screen, a user can view or create a new Item. When a user views an item, they can delete it or edit it. If the user creates or edits an item, they use the same screen; they can also create new Item Types and Manufacturers from this screen. On all screens, cancel, save, and delete actions return to the previous screen. You may notice that the Item Image is not included; this is because I will be adding it later in the series when I start using the phone. For now, I am focusing on the base implementation.

User Interface Mockup

Next Steps

With the requirements defined and tables created in the Microsoft Dataverse, I can now work on the User Interface. In part two of the series, I will create all the necessary screens for the phone app and begin data entry.

Executive Summary

This blog post is for anyone working with AWS CloudFormation templates who is looking for ways to improve template versatility by including mappings. It begins by describing how the scenario was discovered. It then describes the solution for generating mappings.

The Scenario

I was working with AWS CloudFormation and looking for a starting point for a Windows Server 2012 R2 with Active Directory template. The AWS CloudFormation Templates page has plenty to start with; I went to Browse sample templates by AWS Region and then the US West (Oregon) region. I downloaded the Windows_Single_Server_Active_Directory.template to see if I could use it as a starting point.

While reviewing the template, I noticed that it used a Region to AMI mapping to determine the Amazon Machine Image (AMI) to be used and that Windows2012r2 was available. As of this writing, the AMI for US West (Oregon) (us-west-2) would be ami-07c77fe35092981e9.

AMI West Region

Unfortunately, I was unsure whether this image was valid, as querying it would return null. According to the documentation, it should have produced a DescribeImagesResponse. I determined that the template had outdated/incorrect AMI IDs in it.

Outdated and Incorrect AMI ID image
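
One quick way to see whether an AMI ID still resolves in a region is to query it directly. This is a hedged sketch; it assumes the AWS Tools for PowerShell are installed and credentials are configured, and it uses the AMI ID from the sample template.

# A stale or invalid ID returns nothing (or an InvalidAMIID error) instead of image details
Get-EC2Image -ImageId "ami-07c77fe35092981e9" -Region us-west-2 | Select-Object ImageId, Name, CreationDate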

I started wondering how I could build a mapping of my own. I used the Get-SSMLatestEC2Image command to get a list of images for the us-west-2 region.

List of Images for West Region
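
For reference, the lookup behind that screenshot can be reproduced with something like the sketch below (assuming the AWS Tools for PowerShell are installed); filtering the name/value pairs narrows the list to the 2012 R2 base image.

Get-SSMLatestEC2Image -Path ami-windows-latest -Region us-west-2 |
    Where-Object { $_.Name -like "*2012-R2*Base*" } |
    Select-Object Name, Value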

I located the Windows_Server-2012-R2_RTM-English-64Bit-Base image and decided to build a custom mapping for it.

The Solution

I determined the solution would be to write a PowerShell script that would iterate over an array of regions, locate the desired image, and then generate a JSON string that could be inserted into the template. First, you will need to install the AWS PowerShell Tools if you have not already.

Next, I put together the following script that would perform the steps above. I decided to use the region list used in the sample template.

$regions = @("us-east-1","us-west-2","us-west-1","eu-west-1","eu-west-2","eu-west-3",
    "eu-central-1","eu-north-1","ap-northeast-1","ap-northeast-2",
    "ap-northeast-3","ap-southeast-1","ap-southeast-2","ap-south-1",
    "us-east-2","ca-central-1","sa-east-1","cn-north-1","cn-northwest-1")

$searchString = "Windows_Server-2012-RTM-English-64Bit-Base"
$friendlyName = "Windows2012r2"
foreach($region in $regions)
{
    
    try {
        
        $ami=Get-SSMLatestEC2Image -Path ami-windows-latest -ImageName $searchString -Region $region
        Write-Host "`"$region`"        : {`"$friendlyName`" : `"$($ami)`"},"
    }
    catch {
        # Do nothing; most likely no access to this region
    }
}

I decided to add a “Friendly Name” that you could use to make it cleaner in the JSON. Here are the results:

Friendly Name Results for Cleaner JSON

Copy and paste this text into the template mapping, and you now have a custom mapping:

Custom Mapping Template

This allows you to use the FindInMap function to locate the AMI for the provided region when defining the ImageId.

FindIn Map Function to locate AMI

Conclusion

As you can see, creating a custom mapping is not that difficult. This approach demonstrates how to retrieve data from AWS using PowerShell and generate JSON that can be used in a CloudFormation template. It could be used to keep a CloudFormation template easy to read, organized, and more versatile.

Executive Summary

This blog post is for Power Apps developers with basic experience with Microsoft Excel and .NET development. It begins by defining a real-world example for data migration. Next, it describes the steps necessary to use Excel to generate repetitious code within a provided Visual Studio Console Application. Finally, it demonstrates the solution being used and the result data in a Power App.

The Scenario

In part one of this two-part series, we covered the basic techniques of how Excel can be used to generate code and improve efficiency. In this blog post, we continue by showing how these techniques can be used to design a data loader for Microsoft Power Apps and Dynamics.

For this scenario, let’s imagine that a client is looking to migrate to Microsoft Power Apps. The solution has been designed in Power Apps and you are ready to migrate the client’s data to the new system. The client provides you with an Excel workbook with all of their data. You open the workbook and see that the client has included four spreadsheets: Clients, Accounts, Products, and Invoices.

The Solution

Download and review the solution. The code within it is outside of the scope of this demonstration; however, let’s review a few key parts:

  • The Enums folder contains all of the enumerations used in the project. I like using enums for self-documenting code and they are especially useful when mapping column indexes. This is one place you will be placing Excel-generated code.
  • The Models folder contains all of the models that are populated from the Excel file and ultimately used to populate CRM entities. This is another place you will be placing Excel-generated code.
  • The Main method in the Program.cs is the starting point of the application.
  • The GetClients, GetAccounts, GetProducts, and GetInvoices methods will be used to get data from the Excel sheets.
    • Each method is the same; it creates an instance of an Excel Application, opens the workbook, opens the sheet, gets the rows, iterates over them populating the model, closes the workbook, and exits the Excel Application.
    • Each method has an area comment //ADD MAPPNG HERE. This is where you will be adding Excel-generated code later.
  • The MigrateClients, MigrateAccounts, MigrateProducts, and MigrateInvoices methods are used to create records within the Power App.
    • Each method is the same; it gets the records from the Excel sheet, builds an Entity record from it, and saves it to the Power App instance.
    • Each method has an area comment //ADD MAPPNG HERE. This is where you will be adding an Excel-generated code later.
  • Earlybound Entities folder contains the classes that are generated from the Power App entities. The entities used in this example are all out-of-the-box and were generated using the Xrm Toolbox Early Bound Generator tool.
  • The app.config contains the properties used by the console application; you will need to update some of these settings prior to running the project:
    • CrmConnectionOrgName – your organization name (ex. org1a234567)
    • CrmConenctionRegion – your region (ex. North America)
    • CrmConnectionUsername – your username
    • CrmConnectionPassword – your password
    • DataMigrationConnectionString – the path to the mock data Excel file
    • DryRunMode – when “true”, nothing will be written to Power App
    • MigrateClients, MigrateAccounts, MigrateProducts, MigrateInvoices – use this to bypass migration
    • ClientIdNumStart, AccountIdNumStart, ProductIdNumStart, InvoiceIdNumStart – use these to start the migration at a particular record (helpful if an error occurred during migration)

Creating the Models

  1. Create a new Excel workbook named Mapping.xlsx and rename the first sheet to Models
  2. Open the MOCK_DATA.xlsx and select the headers for the Clients sheet and copy (CTRL+C)
    Client Header
  3. Return to the Models sheet, select cell A1
  4. Select the drop-down under the Paste and select Transpose
  5. Select cell B1 and enter the formula: ="public string "&A1&" { get; set; }"
  6. Drag the formula down to B8; your sheet should look like this:
    Dragging Formula
  7. Select cells B1:B8 and copy/paste into the client model in the Models folder
  8. Repeat these steps for the Account, Product, and Invoice models (NOTE: you can paste the values in step 5 below the previous model and drag the formula you already created)

Creating the Column Enums

  1. In the Mapping.xlsx create a new sheet named Enums
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Enums sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Select B1 and enter the number 1
  6. Select B2 and enter the number 2
  7. Select cells B1:B2 and drag down to the last Client Value
  8. Repeat steps 5-7 for other models
  9. Select cell C1 and enter the formula: =" " & A1 & " = " & B1 & ","
  10. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Creating Column Enums
  11. Select the corresponding cells for each model and paste into its column enum (see the example below)
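
The pasted enum for the Client model ends up looking something like this. Again, the member names are hypothetical and mirror whatever properties your model has; the numeric values are the column positions that the loaders in the next section will use.

// Hypothetical example only; member names mirror the Client model's properties,
// and the values are the column positions in the Clients sheet.
public enum ClientColumnEnum
{
    id = 1,
    first_name = 2,
    last_name = 3,
    email = 4,
    birth_date = 5
}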

Completing the Record Loaders

  1. In the Mapping.xlsx create a new sheet named Loaders
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Loaders sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Select cell B1 and enter the formula: ="record." & A1 & " = (string)xlRange.Cells[rowNumber, (int)ColumnEnum." & A1 & "].Text;"
  6. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Completing the Record Loaders
  7. Select the applicable cells for each model and paste over the comment //ADD MAPPNG HERE in each Get method (ex. GetClients)
  8. Replace ColumnEnum with the appropriate enum (Note: you could build this into the formula ahead of time as well if you prefer); the example after this list shows the result for the Client loader
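
Assuming the hypothetical Client model and ClientColumnEnum from the earlier examples, the pasted loader lines look like this once ColumnEnum has been replaced:

// Pasted over //ADD MAPPNG HERE inside GetClients; names are hypothetical and
// follow the Models and Enums examples above.
record.id = (string)xlRange.Cells[rowNumber, (int)ClientColumnEnum.id].Text;
record.first_name = (string)xlRange.Cells[rowNumber, (int)ClientColumnEnum.first_name].Text;
record.last_name = (string)xlRange.Cells[rowNumber, (int)ClientColumnEnum.last_name].Text;
record.email = (string)xlRange.Cells[rowNumber, (int)ClientColumnEnum.email].Text;
record.birth_date = (string)xlRange.Cells[rowNumber, (int)ClientColumnEnum.birth_date].Text;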

Completing the Migrators

  1. In the Mapping.xlsx create a new sheet named Migrators
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Migrators sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Click cell B1 and type “contact”
  6. Drag down to the end of the Client values (the Client model maps to the Contact entity)
  7. Repeat for “account”, “product”, and “invoice”; your sheet should look like this:
    Completing the Migrators
  8. Select C1 and enter the following formula: =B1 & " = record." & A1 & ";"
  9. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Add Value to Column A
  10. Select the applicable cells for each model and paste over the comment //ADD MAPPNG HERE in each of the Migrate methods (NOTE: this will produce errors in the code; that is expected)
  11. In the code, go through each of the errors and complete the mapping by selecting the Power App field that corresponds to the record field. In some cases, you will need to use Guid.Parse, DateTime.Parse, etc. to map the fields.

Here is my mapping for the Contact:
Mapping Example
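
In general form, the completed mapping looks something like the sketch below. This is a hypothetical illustration only: the early-bound property names and record fields shown here are stand-ins, and your own mapping will differ.

// Hypothetical illustration of a completed mapping, pasted over //ADD MAPPNG HERE in MigrateClients.
contact.FirstName = record.first_name;
contact.LastName = record.last_name;
contact.EMailAddress1 = record.email;
contact.BirthDate = DateTime.Parse(record.birth_date);   // parse where the target field is not a string
contact.new_migrationid = int.Parse(record.id);          // custom Whole number field (see the note below)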

NOTE: new_migrationid is a custom Whole number field that I have added to the Contact entity to keep track of the corresponding record in the spreadsheet. I do this for the following reasons:

  • Keeping track of which record the code is on in case an error occurs.
  • Deleting records that were created by the migration

Running the Migration

Once the code is completed, make sure that the DryRunMode is true in the app.config and that you do not have the MOCK_DATA.xlsx file open. When you run the project, it will open the Excel workbook, get the records, and map them to an Entity. Nothing will be saved at this time. The output should look something like this:
Running the Migration

Review the output for errors and fix them as needed. Once there are no errors, set DryRunMode to false and run the project again. Once it is complete, navigate to your Power App instance and your newly migrated data will be available!
Active Contact List

Conclusion

This has been just one example of how you can leverage Excel to help generate repetitive code. Additionally, the data loader included in this project can be reused for your own custom Power App; simply use the techniques we have covered to create the Models, Column Enums, Loaders, and Migrators.

The example solution used in this walkthrough can be downloaded here.

The completed solution, with all the steps in this blog post applied, can be downloaded here.

Executive Summary

This blog post is for developers of all levels who are looking for ways to improve efficiency and save time. It begins by providing some background on me and how my experience with Microsoft Excel has evolved and aided me as a developer. Next, we cover a scenario where Excel can be leveraged to save time. Finally, we go over a step-by-step example using Excel to solve the problem.

Background

As a teenager growing up in the 80s, I was fortunate enough to have access to a computer. One of my favorite applications to use as a kid was Microsoft Excel. With Excel, I was able to create a budget and a paycheck calculator to determine my meager earnings from my fast food job. As my career grew into software development, leveraging all of the tools at my disposal to tackle repetitive and mundane tasks made me more efficient. Over the years, colleagues have seen solutions I have used and have asked me to share how I came up with and implemented them. In this two-part blog post, I will share the techniques that I have used to generate C#, XML, JSON, and more. I will use data loading in Microsoft Power Apps and Dynamics as a real-world example; however, we will need to start with the basics.

The Basics

Before going into the data-loading exercise, I want to start with a very simple example. Keep in mind that there may be more effective ways to handle this specific scenario that do not use Excel; however, I am using it to illustrate the technique. Let's say you had a data model and a contact model that, for the most part, were the same with the exception of some property names, and you needed to write methods to map one to the other. You know the drill:

var contact = new Contact();
contact.FirstName = datamodel.firstName;
contact.LastName = datamodel.lastName;
contact.PhoneNumber = datamodel.phoneNumber;
contact.CellPhone = datamodel.mobileNumber;

Not a big deal, right? Now let's say you have a hundred of these to do and each model may have 50+ properties! This would very quickly turn into a time-consuming and mundane task; not to mention you would likely make a typo along the way that another developer would be sure to point out in the next code review. Let us see how Excel can help in this situation.

In this scenario, the first thing you will need is the raw data for the contact and data models. One way to get it is to use the property declarations themselves. Consider the classes below:

Use Properties to Identify Classes

  1. Create three Excel worksheets called Primary, Secondary, and Generator
  2. Copy/paste the property statements from Contact into the Primary worksheet and from ContactDataModel into the Secondary worksheet.
  3. Select Column A in the Primary worksheet
    Create three Excel Worksheets
  4. In Excel, select the Data tab and then Text to Columns
  5. Choose Delimited, then Next
    Choose Delimited
  6. Uncheck all boxes and then check the Space checkbox, then Finish
    Uncheck All Boxes
  7. Your worksheet should look like the following:
    Sample of Worksheet
  8. Repeat 3-7 with the Secondary worksheet
  9. On the Generator worksheet, select cell A1 and then press the = key
  10. Select the Primary worksheet and then cell D1
  11. Press the Enter key, you should return to the Generator worksheet and the text “FirstName” should be in cell A1
  12. Select cell B1 and then press the = key
  13. Select the Secondary worksheet and then cell D1
  14. Press the Enter key, you should return to the Generator worksheet and the text “firstName” should be in cell B1
  15. Drag and select A1:B1. Click the little square in the lower-right corner of your selection and drag it down to row 25 or so. (Note: you would need to keep dragging these cells down if you added more classes.)
    You will notice that by dragging the cells down, it incremented the rows in the formula.
    Incremented Rows in the Formula
    Press CTRL+~ to switch back to values.
  16. Select cell C1 and enter the following formula:
    =IF(A1=0,"",A1 & "=" & B1 & ";")
    As a developer, you probably already understand this, but the IF statement checks whether A1 has a value of 0 (which is what an empty referenced cell comes through as here) and simply returns an empty string if so. Otherwise, it builds the assignment text through string concatenation.
  17. Similar to an earlier step, select cell C1 and drag the formula down to row 25. Your worksheet should look like:
    Select and Drag Formula
  18. You can now copy/paste the values from column C into the code:
    Copy and Paste Values into Column C

As you continue on, Excel keeps track of the most recent Text to Columns settings used; so, if you paste another set of property statements into the Primary and Secondary worksheets, you should be able to skip the Text to Columns steps for the remaining classes. In the sample class file and workbook, I have included Address models as an illustration.

Next Steps

This example has covered the basic concepts of code generation with Microsoft Excel: extracting your data and writing the formulas that generate the necessary code. Depending on what you are trying to accomplish, these requirements may grow in complexity. Be sure to consider the time investment and payoff of using code generation and use where it makes sense. One such investment that has paid off for me is data loading in Microsoft Power Apps which we will cover in the next post: Code Generation with Microsoft Excel: A data-loading exercise in Microsoft Power Apps.

Download Example Workbook

Download Address Models

Executive Summary

This blog post is for Power Apps developers as well as end-users who are looking for a wildcard-like solution for Lookup controls in Microsoft Power Apps and Dynamics. It begins by describing the problem a client had with finding records when only part of the name was known. Next, it covers the workarounds that were offered to the client as well as the client’s response. Then, the solution is explained in detail, including the necessary code and instructions for setting it up within the Power Apps solution. Finally, a video of the final product demonstrates how the implementation behaves.

The Problem

In 2019, I worked with a client who wanted to migrate a CRM system that had been developed in FileMaker (an Apple subsidiary). The client was an Apple / Mac user through and through; however, he did like Microsoft Office products. He had researched Power Apps and wanted to migrate his system to it and integrate it with Microsoft Outlook. After spending a few months migrating the system to Power Apps, the client was pleased but was a little frustrated with some of the user interface elements. He did not like the way the Lookup controls functioned and claimed that they were not finding all the records.

After meeting with the client, I figured out the problem. In the example below, we are using fictitious company names. Let us pretend that we are adding an Account for a Contact and are having trouble remembering the exact Account name. We remember that the word “Generic” is in the name, but cannot remember the rest.

If you start by typing “gen” into the Lookup control, notice that the lookup only returns items that begin with “gen”.

Lookup Control

This scenario is what the client was upset about, but I had some workarounds for him.

The Workarounds

I explained to the client that if he entered a wildcard before the search term, the Lookup would return the results he was expecting. I also informed him that he could configure and use Relevance Search to locate the name and then enter the full name into the Lookup control. The client found these workarounds frustrating and wished the control behaved like his FileMaker solution. He did not want to have to explain wildcards or Relevance Search to office staff and wanted the lookup to behave the way he felt it should.
If you start typing “*gen” into the Lookup control, it will return Accounts that contain “gen”, including “Advanced Generic Corporation”, the company we are looking for.

Relevance Search in Lookup Control

I put together a prototype using a Single Line of Text field with an Auto Complete control, as shown below. When you enter “gen” into the Auto Complete control, it behaves the way the client wanted, returning matches that start with or contain the term. The client also liked that it highlighted the matching text. My initial prototype had the Lookup and the Auto Complete next to each other: the user could use the OOTB Lookup or the Auto Complete control, and once the user selected a record in the Auto Complete control, the Lookup would be resolved based on that selection. Unfortunately, the client found this to be “clunky” and insisted that it work like user interfaces do on Apple products.

Auto Complete in Lookup Control

The Solution

I started working on a solution that makes these two controls work together to meet the client’s expectations. The implementation would follow these rules:

  1. On load:
    1. If the Lookup has a value, show it and hide the Auto Complete field (note: the Lookup is what ultimately matters). Otherwise, show the Auto Complete and hide the Lookup.
    2. Populate a list of all entity values to be used by the Auto Complete.
  2. When the Auto Complete changes, try to resolve it against a pre-populated list of Entity values. In this example, I use the Account name field, but it could be any entity.
    1. If there is a single match, set the Lookup field to that value, show the Lookup and hide the Auto Complete.
    2. If there is more than one match, alert the user and force them to use Lookup.
    3. If no match is found, do nothing.
  3. When the Lookup changes, if an item has been selected, hide the Auto Complete field. If not, then hide the Lookup and show the Auto Complete (i.e. if the user clears the lookup).

The JavaScript

There are a variety of ways the scenario could be achieved. I prefer to establish a single event on the form and wire up events within my JavaScript. I also broke this code up for cleanliness and reusability. For this blog post, I have all the JavaScript in a single file; however, some methods could be moved to a shared.js file and reused across multiple forms. Here is a breakdown of the code:

  • configureForm – a method called when the form is loaded. It sets up the on-change events for the fields and initializes them.
  • onAccountAutoCompleteChange – event handler for the on-change event of the Auto Complete textbox. It simply calls onAutoCompleteChanged with the necessary parameters.
  • onAccountLookupChange – event handler for the on-change event of the Lookup. It calls onLookupChanged with the necessary parameters and is also used to initialize the fields.

These methods could be moved to a shared file if desired:

  • populateAccountSet – populates the accountSet array. It calls populateSet with the necessary parameters.
  • populateSet – populates an array with the desired entity field values. These values are used to resolve against when the user selects a value in the Auto Complete control.
  • onAutoCompleteChanged – performs the logic established above for the Auto Complete field.
  • onLookupChanged – performs the logic established above for the Lookup field.
var accountSet = new Array();
var formContext = null;
function configureForm(executionContext) {
    formContext = executionContext.getFormContext(); 
    populateAccountSet();
    formContext.data.entity.attributes.getByName('new_accounttext').addOnChange(onAccountAutoCompleteChange);
    formContext.data.entity.attributes.getByName('new_account').addOnChange(onAccountLookupChange);
    onAccountLookupChange();
}

function onAccountAutoCompleteChange() {
    onAutoCompleteChanged("new_accounttext", "new_account", "account", accountSet);
}

function populateAccountSet() {
    populateSet("account", "?$select=accountid,name", accountSet, "accountid", "name");
}

function populateSet(entity, query, set, idField, nameField) {
    if (set !== null && set.length > 0) return;
    Xrm.WebApi.retrieveMultipleRecords(entity, query).then(
        function success(result) {
            if (result.entities.length > 0) {
                for (var i = 0; i < result.entities.length; i++) {
                    set.push(
                        new Array(
                            result.entities[i][idField],
                            result.entities[i][nameField]
                        )
                    );
                }
            }
        },
        function (error) {
            Xrm.Utility.alertDialog(error.message, null);
        });
}

function onAutoCompleteChanged(autoCompleteField, lookupField, entityType, dataSet) {
    var value = formContext.data.entity.attributes.getByName(autoCompleteField).getValue();
    if (value !== null) {
        var valueLowerCase = value.toLowerCase();
        var matches = [];
        for (var i = 0; i < dataSet.length; i++) {
            if (dataSet[i][1] !== null && dataSet[i][1] !== undefined) {
                if (valueLowerCase === dataSet[i][1].toLowerCase()) {
                    matches.push(dataSet[i]);
                }
            }
        }
        if (matches.length > 0) {
            if (matches.length > 1) {
                var friendlyEntityType = entityType.replace("new_", "");
                friendlyEntityType = friendlyEntityType.replace("_", " ");
                var alertStrings = {
                    text: "More than one record exists; please use default " + friendlyEntityType + " control."
                };
                Xrm.Navigation.openAlertDialog(alertStrings); // openAlertDialog expects the { text: ... } object built above
                formContext.data.entity.attributes.getByName(autoCompleteField).setValue(null);
                formContext.data.entity.attributes.getByName(lookupField).setValue(null);
                formContext.getControl(autoCompleteField).setVisible(false);
                formContext.getControl(lookupField).setVisible(true);
            } else {
                var lookupVal = new Array();
                lookupVal[0] = new Object();
                lookupVal[0].id = matches[0][0];
                lookupVal[0].name = matches[0][1];
                lookupVal[0].entityType = entityType;
                formContext.data.entity.attributes.getByName(lookupField).setValue(lookupVal);
                onLookupChanged(autoCompleteField, lookupField);
            }
        }
    } else {
        formContext.getControl(lookupField).setVisible(true);
    }
}

function onLookupChanged(autoCompleteField, lookupField) {
    var lookupVal = formContext.data.entity.attributes.getByName(lookupField).getValue();
    if (lookupVal !== null && lookupVal !== undefined) {
        formContext.getControl(autoCompleteField).setVisible(false);
        formContext.getControl(lookupField).setVisible(true);
    } else {
        formContext.getControl(autoCompleteField).setVisible(true);
        formContext.getControl(lookupField).setVisible(false);
        formContext.data.entity.attributes.getByName(autoCompleteField).setValue(null);
    }
}

function onAccountLookupChange() {
    onLookupChanged("new_accounttext", "new_account");
}
 

Updating the PowerApps Solution

For this script to be used, you must first create a Web Resource for it. In this example, I created one called accountwildcard.

Create a Web Resource

Next, add the library to the form and register an event handler on the OnLoad event that calls the configureForm method; be sure to check the box Pass execution context as the first parameter.

Adding an Event Handler Function

Your form properties should look like this when you are done.

You will also want to make sure the labels for both fields are the same to make the transition look clean.

Make sure both labels are the same

The Final Product

As you can see below, the transition is smooth and seamless. When you type, the Auto Complete control serves up wild-carded results. After hitting tab or moving outside of the control, the Lookup is populated with the match and the Auto Complete is hidden. If you clear the Lookup, it hides itself and shows the Auto Complete again.

This is a specialized solution, and I would generally recommend steering your client towards using a wildcard in the Lookup control; however, if you have a client who insists on this functionality, I hope this saves you some time. The client I did this for is pleased with the results and has had no issues with it since.

Assumptions

To benefit the most from this post, you should understand the following concepts:

  • Dynamics CRM Workflows
  • DocuSign Integration with Dynamics CRM
  • DocuSign merge fields and merge-back

The Problem

I recently experimented with integrating DocuSign with Dynamics 365, specifically merging data into a DocuSign form and then writing the data back into Dynamics. After reading the DocuSign for Dynamics 365 CRM – 6.1 documentation, I found that DocuSign Drop Down and Radio Button controls are not supported for Dynamics merges and write-backs. I started work on a solution that would use a Checkbox field in DocuSign and a Two Options field in Dynamics. I had all my text fields working correctly and assumed the checkbox would be straightforward since these were both Boolean fields.

I was disappointed to find that the checkbox value would not merge. After researching online and trying a few suggestions, I finally decided to add a temporary text field to my DocuSign form to see what Dynamics was putting into it, and found that the value was “Yes.” Then I looked at the form data in DocuSign…and it had the value “X.” I tried replacing the “Yes” and “No” values of the Dynamics Two Options field with “X” and “O”, but that didn’t work either.

The Solution

I finally decided to change the “Yes” and “No” values to “true” and “false.”

This time, when the data was merged, the checkbox was checked!

And once the client receives the email, fills out the form, and the .pdf files are sent, this is when the “X” we saw in the form data is used:

Finally, I verified it worked end-to-end by unchecking the box in Dynamics and saving the record:

After firing off the workflow to merge data into the DocuSign form, the box is now unchecked:

Send the email off to be filled out, check the box, and add a new allergy:

Now, wait for the Dynamics envelope status workflow to complete. Check the record in Dynamics, and you will see it has updated successfully.

Conclusion

Albeit a small issue, I was surprised I didn’t find it documented. So if you’ve come across this problem while working with DocuSign Checkbox fields and Dynamics 365, I hope this post saves you some time!