This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. It begins by explaining why we would want to create a home inventory system. It then walks the reader through the requirements of the system, including potential future enhancements. Next, it explains the database model and how it translates into tables in the Microsoft Dataverse. This post concludes with a review of the User Interface mockup.

This blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals

Why Create a Home Inventory?

Several years ago, I started considering the number and value of the technical assets I had acquired over the years. I wondered what would happen if I were robbed, had a fire, or suffered some other disaster, and whether my insurance would cover all of my technical assets. According to Consumer Reports:

“having a list or a visual reminder of your belongings can make a big difference in how much your homeowners’ insurance will pay, and thus how well you’ll recover financially.”

At the time, I purchased Quicken Home Inventory Manager, which was adequate; however, advancements in technology have since made it obsolete. I want to use my phone and its many capabilities for data entry instead of walking back and forth to the computer. I set out designing my home inventory system using the Power Platform with these things in mind.


The Requirements

The requirements of the initial system are straightforward:

  • The system should include a phone app for data entry and a web interface for administration
  • Phone and Admin Apps should provide the ability to create, update and delete Items
    • Columns: Name, Description, Item Type, Manufacturer, Serial Number, Model Number, Location, Date Purchased, Approx. Value, and Image
    • Required columns: Name, Model Number, Item Type, Manufacturer, Location, and Image
    • Item Type will look up from the Item Types table
    • Manufacturer will look up from the Manufacturers table
    • Location will look up from the Locations table
    • Users should be able to add an Item Image
  • Phone and Admin Apps should provide the ability to create Item Types
    • Columns: Name
    • Required columns: Name
  • Phone and Admin Apps should provide the ability to create Manufacturers
    • Columns: Name, Support URL, and Support Phone
    • Required columns: Name
    • Support URL should be validated
    • Support Phone should be validated
  • Phone and Admin Apps should provide the ability to create Locations
    • Columns: Name
    • Required columns: Name
  • Admin App should provide the ability to create, update and delete Item Types
  • Admin App should provide the ability to create, update and delete Locations
  • Admin App should provide the ability to create, update and delete Manufacturers
  • The system should allow Admins to export the inventory

Future Enhancements

In addition to the requirements, there are some future enhancements or “nice to haves” that I can investigate adding to the system later, for example:

  • Barcode Scanner – for items with a barcode, it would be helpful to scan them automatically. This could apply to serial numbers, model numbers, or even the UPC for the entire item.
  • Photo of Receipt – it would be a nice feature to include a picture of the receipt for proof of purchase and returns.
  • AI – While this is a tall order, it would be neat to take a picture of an item and have the system recognize what kind of item it is, perhaps even determine its Item Type.

The Database Model

Using the requirements above, I can derive a simple database model. There is an Item table that contains most of the data. Each Item record may have 0 or more Image records. Each Item will also have a lookup to a Manufacturer (ex. Microsoft, Sony, etc.), a Location (ex. Master Bedroom, Living Room, etc.), and an ItemType (ex. Appliance, Networking, Computer, etc.).
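To make the relationships concrete, here is a rough sketch of the model as plain objects. The field values and the camel-cased names are illustrative only, not the actual Dataverse schema:

```javascript
// Lookup tables: each holds little more than a Name.
const manufacturer = { name: "Sony", supportUrl: "https://www.sony.com/", supportPhone: "800-555-0100" };
const location = { name: "Living Room" };
const itemType = { name: "Appliance" };

// Each Item record looks up to exactly one Manufacturer, Location, and ItemType,
// and may have zero or more Image records.
const item = {
  name: "Television",
  description: "65-inch LED TV",
  serialNumber: "SN-0001",
  modelNumber: "XBR-65X900H",
  datePurchased: "2020-06-01",
  approxValue: 1299.99,
  manufacturer: manufacturer,
  location: location,
  itemType: itemType,
  images: []
};
```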

Database Model

The Dataverse Tables

Using the database model above, I can create custom tables for the system in the Microsoft Dataverse. Because the Dataverse requires each table to have a primary name column, I will need to add a Name column to the Image table; I will handle generating this name when I implement the Item Images part of the application. Below is the resulting data model summary.

Dataverse Table

User Interface Mockup

For the user interface, I will begin with the phone app and develop the administration app later; as data gets added to the system, I may discover challenges and new ideas. Additionally, the mockup below does not include the Item Image library, as I will cover it in a separate post. The user interface begins with the Items screen, where users can search, sort, and scroll through the items in the system. From the Items screen, a user can view an item or create a new one. When viewing an item, the user can edit or delete it. Creating and editing an item use the same screen, from which the user can also create new Item Types and Manufacturers. On all screens, cancel, save, and delete actions return to the previous screen. You may notice that the Item Image is not included; I will be adding this later in the series when I start using the phone. For now, I am focusing on the base implementation.

User Interface Mockup

Next Steps

With the requirements defined and tables created in the Microsoft Dataverse, I can now work on the User Interface. In part two of the series, I will create all the necessary screens for the phone app and begin data entry.

Executive Summary

This blog post is for anyone working with AWS CloudFormation templates who is looking for ways to improve template versatility by including mappings. It begins by describing how the scenario was discovered. It then describes the solution for generating mappings.

The Scenario

I was working with AWS CloudFormation and looking for a starting point for a Windows Server 2012 R2 with Active Directory template. The AWS CloudFormation Templates page has plenty to start with; I browsed the sample templates by AWS region, selected the US West (Oregon) region, and downloaded the Windows_Single_Server_Active_Directory.template to see if I could use it as a starting point.

While reviewing the template, I noticed that it used a Region to AMI mapping to determine the Amazon Machine Image (AMI) to be used and that Windows2012r2 was available. As of this writing, the AMI for US West (Oregon) (us-west-2) would be ami-07c77fe35092981e9.

AMI West Region

Unfortunately, the image did not appear to be valid, as looking it up returned null. According to the documentation, a valid ID should have produced a DescribeImagesResponse. I determined that the template had outdated/incorrect AMI IDs in it.

Outdated and Incorrect AMI ID image

I started wondering how I could build a mapping of my own. I used the Get-SSMLatestEC2Image command to get a list of images for the us-west-2 region.

List of Images for West Region

I located the Windows_Server-2012-R2_RTM-English-64Bit-Base image and decided to build a custom mapping for it.

The Solution

I determined the solution would be to write a PowerShell script that would iterate over an array of regions, locate the desired image, and then generate a JSON string that could be inserted into the template. First, you will need to install the AWS PowerShell Tools if you have not already.

Next, I put together the following script that would perform the steps above. I decided to use the region list used in the sample template.

$regions = @("us-east-1","us-west-2","us-west-1","eu-west-1","eu-west-2","eu-west-3") # add the remaining regions from the sample template as needed

$searchString = "Windows_Server-2012-R2_RTM-English-64Bit-Base"
$friendlyName = "Windows2012r2"
foreach ($region in $regions) {
    try {
        $ami = Get-SSMLatestEC2Image -Path ami-windows-latest -ImageName $searchString -Region $region
        Write-Host "`"$region`"        : {`"$friendlyName`" : `"$($ami)`"},"
    }
    catch {
        # do nothing; we likely do not have access to the region
    }
}
I decided to add a “Friendly Name” that you can use to make the JSON cleaner. Here are the results:

Friendly Name Results for Cleaner JSON

Copy/paste this text into the template mapping, and you have a custom mapping:

Custom Mapping Template

This allows you to use the FindInMap function to locate the AMI for the provided region when defining the ImageId.

FindInMap Function to Locate the AMI
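Put together, the generated mapping and the FindInMap reference look roughly like the fragment below. The mapping name, resource name, and AMI IDs here are placeholders; use the names from your template and the values your script printed:

```json
{
  "Mappings": {
    "AWSRegion2AMI": {
      "us-west-2": { "Windows2012r2": "ami-00000000000000000" },
      "us-east-1": { "Windows2012r2": "ami-11111111111111111" }
    }
  },
  "Resources": {
    "WindowsServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": { "Fn::FindInMap": [ "AWSRegion2AMI", { "Ref": "AWS::Region" }, "Windows2012r2" ] }
      }
    }
  }
}
```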


As you can see, creating a custom mapping is not that difficult. This approach demonstrates how to retrieve data from AWS using PowerShell and generate JSON that can be used in a CloudFormation template. It can help keep a CloudFormation template easy to read, organized, and more versatile.

Executive Summary

This blog post is for Power Apps developers with basic experience with Microsoft Excel and .NET development. It begins by defining a real-world example for data migration. Next, it describes the steps necessary to use Excel to generate repetitious code within a provided Visual Studio Console Application. Finally, it demonstrates the solution being used and the result data in a Power App.

The Scenario

In part one of this two-part series, we covered the basic techniques of how Excel can be used to generate code and improve efficiency. In this blog post, we continue by showing how these techniques can be used to design a data loader for Microsoft Power Apps and Dynamics.

For this scenario, let’s imagine that a client is looking to migrate to Microsoft Power Apps. The solution has been designed in Power Apps and you are ready to migrate the client’s data to the new system. The client provides you with an Excel workbook with all of their data. You open the workbook and see that the client has included four spreadsheets: Clients, Accounts, Products, and Invoices.

The Solution

Download and review the solution. The code within it is outside of the scope of this demonstration; however, let’s review a few key parts:

  • The Enums folder contains all of the enumerations used in the project. I like using enums for self-documenting code and they are especially useful when mapping column indexes. This is one place you will be placing Excel-generated code.
  • The Models folder contains all of the models that are populated from the Excel file and ultimately used to populate CRM entities. This is another place you will be placing Excel-generated code.
  • The Main method in the Program.cs is the starting point of the application.
  • The GetClients, GetAccounts, GetProducts, and GetInvoices methods will be used to get data from the Excel sheets.
    • Each method is the same; it creates an instance of an Excel Application, opens the workbook, opens the sheet, gets the rows, iterates over them populating the model, closes the workbook, and exits the Excel Application.
    • Each method has an area comment //ADD MAPPNG HERE. This is where you will be adding Excel-generated code later.
  • The MigrateClients, MigrateAccounts, MigrateProducts, and MigrateInvoices methods are used to create records within the Power App.
    • Each method is the same; it gets the records from the Excel sheet, builds an Entity record from it, and saves it to the Power App instance.
    • Each method has an area comment //ADD MAPPNG HERE. This is where you will be adding Excel-generated code later.
  • The Earlybound Entities folder contains the classes that are generated from the Power App entities. The entities used in this example are all out-of-the-box and were generated using the Xrm Toolbox Early Bound Generator tool.
  • The app.config contains the properties used by the console application; you will need to update some of these settings prior to running the project:
    • CrmConnectionOrgName – your organization name (ex. org1a234567)
    • CrmConenctionRegion – your region (ex. North America)
    • CrmConnectionUsername – your username
    • CrmConnectionPassword – your password
    • DataMigrationConnectionString – the path to the mock data Excel file
    • DryRunMode – when “true”, nothing will be written to the Power App
    • MigrateClients, MigrateAccounts, MigrateProducts, MigrateInvoices – use these to bypass individual migrations
    • ClientIdNumStart, AccountIdNumStart, ProductIdNumStart, InvoiceIdNumStart – use these to start migration at a particular record (helpful if an error occurred during migration)

Creating the Models

  1. Create a new Excel workbook named Mapping.xlsx and rename the first sheet to Models
  2. Open the MOCK_DATA.xlsx and select the headers for the Clients sheet and copy (CTRL+C)
    Client Header
  3. Return to the Models sheet, select cell A1
  4. Select the drop-down under the Paste and select Transpose
  5. Select cell B1 and enter the formula: ="public string "&A1&" { get; set; }"
  6. Drag the formula down to B8; your sheet should look like this:
    Dragging Formula
  7. Select cells B1:B8 and copy/paste into the client model in the Models folder
  8. Repeat these steps for the Account, Product, and Invoice models (NOTE: you can paste the values in step 5 below the previous model and drag the formula you already created)
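Under the hood, the formula in step 5 is plain string concatenation over the header column. If it helps to see the same generation outside Excel, here is a tiny JavaScript sketch; the header names are made up, not the actual mock-data columns:

```javascript
// Hypothetical headers pasted (transposed) into column A of the Models sheet.
const headers = ["FirstName", "LastName", "Email", "Phone"];

// Equivalent of ="public string "&A1&" { get; set; }" dragged down the column.
const properties = headers.map(h => `public string ${h} { get; set; }`);

console.log(properties.join("\n"));
```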

Creating the Column Enums

  1. In the Mapping.xlsx create a new sheet named Enums
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Enums sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Select B1 and enter the number 1
  6. Select B2 and enter the number 2
  7. Select cells B1:B2 and drag down to the last Client Value
  8. Repeat steps 5-7 for other models
  9. Select cell C1 and enter the formula: =" " & A1 & " = " & B1 & ","
  10. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Creating Column Enums
  11. Select the corresponding cells for each model and paste into its column enum

Completing the Record Loaders

  1. In the Mapping.xlsx create a new sheet named Loaders
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Loaders sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Select cell B1 and enter the formula: ="record."&A1&" = (string)xlRange.Cells[rowNumber, (int)ColumnEnum."&A1&"].Text;"
  6. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Completing the Record Loaders
  7. Select the applicable cells for each model and paste over the comment //ADD MAPPNG HERE in each Get method (ex. GetClients)
  8. Replace ColumnEnum with the appropriate enum (NOTE: you could fix this in the formula ahead of time as well, if you prefer)
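The Loaders formula is the same trick with a longer template string. As a JavaScript sketch, with illustrative column names and ClientColumns standing in for the model-specific enum from step 8:

```javascript
// Hypothetical model properties taken from the Models sheet.
const columns = ["FirstName", "LastName", "Email"];

// Equivalent of the step 5 formula, with ColumnEnum already swapped
// for a model-specific enum as described in step 8.
const loaderLines = columns.map(c =>
  `record.${c} = (string)xlRange.Cells[rowNumber, (int)ClientColumns.${c}].Text;`
);

console.log(loaderLines.join("\n"));
```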

Completing the Migrators

  1. In the Mapping.xlsx create a new sheet named Migrators
  2. Select cell A1, press “=”, select Models tab, select cell A1 and press Enter
  3. Cell A1 on Migrators sheet should now have the same value as Models sheet
  4. Drag the formula down far enough to include all values on the Models sheet
  5. Click cell B1 and type “contact”
  6. Drag down to the end of the Contact values
  7. Repeat for “account”, “product”, and “invoice”; your sheet should look like this:
    Completing the Migrators
  8. Select C1 and enter the following formula: = B1 & " = record." & A1 & ";"
  9. Drag the formula down to include all of the values in column A; your sheet should look like this:
    Add Value to Column A
  10. Select the applicable cells for each model and paste over the comment //ADD MAPPNG HERE in each of the Migrate methods. NOTE: there will be warnings in the code
  11. In the code, go through each of the errors and select the Power App field that maps to the record field. In some cases, you will need to use Guid.Parse, DateTime.Parse, etc. to map the fields.

Here is my mapping for the Contact:
Mapping Example

NOTE: new_migrationid is a custom Whole number field that I have added to the Contact entity to keep track of the corresponding record in the spreadsheet. I do this for the following reasons:

  • Keeping track of which record the code is on in case an error occurs.
  • Deleting records that were created by the migration

Running the Migration

Once the code is completed, make sure that the DryRunMode is true in the app.config and that you do not have the MOCK_DATA.xlsx file open. When you run the project, it will open the Excel workbook, get the records, and map them to an Entity. Nothing will be saved at this time. The output should look something like this:
Running the Migration

Review the output for errors and fix them as needed. Once there are no errors, switch DryRunMode to false and run the loader again. Once it completes, navigate to your Power App instance, and your newly migrated data will be available!
Active Contact List


This has been only one example of how you can leverage Excel to aid with generating repetitious code. Additionally, the data loader included in this project can be reused for your custom Power App; simply use the techniques we have covered to create Models, Column Enums, Loaders, and Migrators.

The example solution used in this walkthrough can be downloaded here.

The completed solution with the steps in this blog post complete can be downloaded here.

Executive Summary

This blog post is for developers of all levels who are looking for ways to improve efficiency and save time. It begins by providing some background on me and how my experience with Microsoft Excel has evolved and aided me as a developer. Next, we cover a scenario where Excel can be leveraged to save time. Finally, we go over a step-by-step example using Excel to solve the problem.


As a teenager growing up in the 80s, I was fortunate enough to have access to a computer. One of my favorite applications to use as a kid was Microsoft Excel. With Excel, I was able to create a budget and a paycheck calculator to determine my meager earnings from my fast food job. As my career grew into software development, leveraging all of the tools at my disposal against repetitive and mundane tasks made me more efficient. Over the years, colleagues have seen the solutions I have used and have asked me to share how I came up with and implemented them. In this two-part blog post, I will share the techniques I have used to generate C#, XML, JSON, and more. I will use data loading in Microsoft Power Apps and Dynamics as a real-world example; however, we will need to start with the basics.

The Basics

Before going into the data-loading example, I wanted to provide a very simple example. Keep in mind that there may be more effective solutions to this specific example that do not use Excel; however, I am using it to illustrate this simple example. Let’s say you had a data model and a contact model that, for the most part, were the same with the exception of some property names, and you needed to write methods to map them. You know the drill:

var contact = new Contact();
contact.FirstName = datamodel.firstName;
contact.LastName = datamodel.lastName;
contact.PhoneNumber = datamodel.phoneNumber;
contact.CellPhone = datamodel.mobileNumber;

Not a big deal, right? Now let’s say you have a hundred of these to do, and each model may have 50+ properties! This would quickly turn into a time-consuming and mundane task; not to mention you would likely make a typo along the way that another developer would be sure to point out in the next code review. Let us see how Excel can help in this situation.

In this scenario, the first thing you will need is the raw data for the contact and data models. One way to get it is from the property declarations. Consider the classes below:

Use Properties to Identify Classes

  1. Create 3 Excel worksheets called Primary, Secondary, and Generator
  2. Copy/paste the property statements from Contact into Primary worksheet and ContactDataModel into a Secondary worksheet.
  3. Select Column A in the Primary worksheet
    Create three Excel Worksheets
  4. In Excel, select the Data tab and then Text to Columns
  5. Choose Delimited, then Next
    Choose Delimited
  6. Uncheck all boxes and then check the Space checkbox, then Finish
    Uncheck All Boxes
  7. Your worksheet should look like the following:
    Sample of Worksheet
  8. Repeat 3-7 with the Secondary worksheet
  9. Select cell A1 and then press the = key
  10. Select the Primary worksheet and then cell D1
  11. Press the Enter key, you should return to the Generator worksheet and the text “FirstName” should be in cell A1
  12. Select cell B1 and then press the = key
  13. Select the Secondary worksheet and then cell D1
  14. Press the Enter key; you should return to the Generator worksheet and the text “firstName” should be in cell B1
  15. Drag and select A1:B1. Click the little square in the lower-right corner of your selection and drag it down to row 25 or so. (Note: you would need to keep dragging these cells down if you added more classes.)
    You will notice that by dragging the cells down, it incremented the rows in the formula.
    Incremented Rows in the Formula
    Press CTRL+~ to switch back to values.
  16. Select cell C1 and enter the following formula:
    =IF(A1=0,"",A1 & "=" & B1 & ";")
    As a developer, you probably already understand this, but the if statement is checking to see if A1 has a value of 0 and simply returns an empty string if so. Otherwise, string concatenation is built.
  17. Similar to an earlier step, select cell C1 and drag the formula down to row 25. Your worksheet should look like:
    Select and Drag Formula
  18. You can now copy/paste the values in column C into the code:
    Copy and Paste Values into Column C

As you continue on, Excel keeps track of the most recent Text to Columns settings used; so, if you pasted another set into the Primary and Secondary worksheets, you should be able to skip steps 1-5 for remaining classes. In the sample class file and workbook, I have included Address models as an illustration.
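For comparison, the whole Generator worksheet boils down to pairing the two property columns and concatenating an assignment for each pair. In code, the idea looks like this (property names taken from the mapping example above):

```javascript
// Column A (from the Primary/Contact sheet) and column B (from the
// Secondary/ContactDataModel sheet) of the Generator worksheet.
const primary = ["FirstName", "LastName", "PhoneNumber", "CellPhone"];
const secondary = ["firstName", "lastName", "phoneNumber", "mobileNumber"];

// Equivalent of the IF formula: skip empty rows, otherwise concatenate.
const assignments = primary.map((name, i) =>
  name ? `contact.${name} = datamodel.${secondary[i]};` : ""
);

console.log(assignments.join("\n"));
```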

Next Steps

This example has covered the basic concepts of code generation with Microsoft Excel: extracting your data and writing the formulas that generate the necessary code. Depending on what you are trying to accomplish, these requirements may grow in complexity. Be sure to consider the time investment and payoff of using code generation, and use it where it makes sense. One such investment that has paid off for me is data loading in Microsoft Power Apps, which we will cover in the next post: Code Generation with Microsoft Excel: A data-loading exercise in Microsoft Power Apps.

Download Example Workbook

Download Address Models

Executive Summary

This blog post is for Power Apps developers as well as end users who are looking for a wildcard-like solution for Lookup controls in Microsoft Power Apps and Dynamics. It begins by describing the problem a client had with finding records when only part of the name was known. Next, it covers the workarounds that were offered to the client, as well as the client’s response. The solution is then explained in detail, including the necessary code and instructions for setting it up within the Power Apps solution. Finally, a video of the final product demonstrates how the implementation behaves.

The Problem

In 2019, I worked with a client who wanted to migrate a CRM system that had been developed in FileMaker (an Apple subsidiary). The client was an Apple/Mac user through and through; however, he did like Microsoft Office products. He had researched Power Apps and wanted to migrate his system to it and integrate it with Microsoft Outlook. After spending a few months migrating the system to Power Apps, the client was pleased but was a little frustrated with some of the user interface elements. He did not like the way the Lookup controls functioned and claimed they were not finding all the records.

After meeting with the client, I figured out the problem. In the example below, we are using fictitious company names. Let us pretend that we are adding an Account for a Contact and are having trouble remembering the exact Account name. We remember that the word “Generic” is in the name, but cannot remember the rest.

If you start by typing “gen” into the Lookup control, notice that the lookup only returns items that begin with “gen”.

Lookup Control

This scenario is what the client was upset about, but I had some workarounds for him.

The Workarounds

I explained to the client that if he entered a wildcard before typing the search term, the Lookup would return the results he was expecting. I also informed him that he could configure and use Relevance Search to locate the name and then enter the full name into the Lookup control. The client found these workarounds frustrating and wished the control behaved like his FileMaker solution. He did not want to have to explain wildcards or Relevance Search to office staff, and wanted the lookup to behave in the way he considered correct.
If you start typing “*gen” into the Lookup control, it will return Accounts that contain “gen”, including “Advanced Generic Corporation”, the company we are looking for.

Relevance Search in Lookup Control

I put together a prototype using a Single Line of Text field with an Auto Complete control, like the one below. When you enter “gen” into the Auto Complete control, it behaves the way the client wanted, returning matches that start with or contain the term. The client also liked the fact that it highlighted the matching text. My initial prototype had the Lookup and the Auto Complete next to each other; the user could use the OOTB Lookup or the Auto Complete control. Once a user selected a record in the Auto Complete control, the Lookup would be resolved based on the selected record. Unfortunately, the client found this to be “clunky” and insisted that it work like user interfaces on Apple products.

Auto Complete in Lookup Control

The Solution

I started working on a solution to make these two controls work together to meet the client’s expectations. The implementation follows these rules:

  1. On load:
    1. If the Lookup has a value, show it and hide the Auto Complete field (note: the lookup is what is important). Otherwise, show the Auto Complete and hide the Lookup.
    2. Populate a list of all entity values to be used by the Auto Complete.
  2. When the Auto Complete changes, try to resolve it against a pre-populated list of Entity values. In this example, I use the Account name field, but it could be any entity.
    1. If there is a single match, set the Lookup field to that value, show the Lookup and hide the Auto Complete.
    2. If there is more than one match, alert the user and force them to use Lookup.
    3. If no match is found, do nothing.
  3. When the Lookup changes, if an item has been selected, hide the Auto Complete field. If not, then hide the Lookup and show the Auto Complete (i.e. if the user clears the lookup).

The JavaScript

There are a variety of ways the scenario could be achieved. I prefer to establish a single event on the form and wire up events within my JavaScript. I also broke this code up for cleanliness and reusability. For this blog post, I have all the JavaScript in a single file; however, some methods could be moved to a shared.js file and reused across multiple forms. Here is a breakdown of the code:

  • configureForm – a method called when the form is loaded. It sets up the on-change events for the fields and initializes the fields.
  • onAccountAutoCompleteChange – event handler for the on-change event of the Auto Complete textbox. It simply calls onAutoCompleteChanged with the necessary parameters.
  • onAccountLookupChange – calls onLookupChanged with the necessary parameters and is used to initialize the fields.

These methods could be moved to a shared file if desired:

  • populateAccountSet – populates the accountSet array. It calls populateSet with the necessary parameters.
  • populateSet – populates an array with the desired entity field values. These values are used to resolve against when the user selects a value in the Auto Complete control.
  • onAutoCompleteChanged – performs the logic established above for the Auto Complete field.
  • onLookupChanged – performs the logic established above for the Lookup field.
var accountSet = new Array();
var formContext = null;

function configureForm(executionContext) {
    formContext = executionContext.getFormContext();
    // Wire up the on-change events and initialize the fields.
    formContext.getAttribute("new_accounttext").addOnChange(onAccountAutoCompleteChange);
    formContext.getAttribute("new_account").addOnChange(onAccountLookupChange);
    populateAccountSet();
    onAccountLookupChange();
}

function onAccountAutoCompleteChange() {
    onAutoCompleteChanged("new_accounttext", "new_account", "account", accountSet);
}

function populateAccountSet() {
    populateSet("account", "?$select=accountid,name", accountSet, "accountid", "name");
}

function populateSet(entity, query, set, idField, nameField) {
    if (set !== null && set.length > 0) return;
    Xrm.WebApi.retrieveMultipleRecords(entity, query).then(
        function success(result) {
            for (var i = 0; i < result.entities.length; i++) {
                set.push(new Array(result.entities[i][idField], result.entities[i][nameField]));
            }
        },
        function (error) {
            Xrm.Utility.alertDialog(error.message, null);
        }
    );
}

function onAutoCompleteChanged(autoCompleteField, lookupField, entityType, dataSet) {
    var value = formContext.getAttribute(autoCompleteField).getValue();
    if (value !== null) {
        var valueLowerCase = value.toLowerCase();
        var matches = [];
        for (var i = 0; i < dataSet.length; i++) {
            if (dataSet[i][1] !== null && dataSet[i][1] !== undefined) {
                if (valueLowerCase === dataSet[i][1].toLowerCase()) {
                    matches.push(dataSet[i]);
                }
            }
        }
        if (matches.length > 0) {
            if (matches.length > 1) {
                // More than one match: alert the user and force them to use the Lookup.
                var friendlyEntityType = entityType.replace("new_", "");
                friendlyEntityType = friendlyEntityType.replace("_", " ");
                var alertStrings = {
                    text: "More than one record exists; please use default " + friendlyEntityType + " control."
                };
                Xrm.Navigation.openAlertDialog(alertStrings);
            } else {
                // Single match: resolve the Lookup to the matching record.
                var lookupVal = new Array();
                lookupVal[0] = new Object();
                lookupVal[0].id = matches[0][0];
                lookupVal[0].name = matches[0][1];
                lookupVal[0].entityType = entityType;
                formContext.getAttribute(lookupField).setValue(lookupVal);
                onLookupChanged(autoCompleteField, lookupField);
            }
        }
        // No match found: do nothing.
    }
}

function onLookupChanged(autoCompleteField, lookupField) {
    var lookupVal = formContext.getAttribute(lookupField).getValue();
    if (lookupVal !== null && lookupVal !== undefined) {
        formContext.getControl(lookupField).setVisible(true);
        formContext.getControl(autoCompleteField).setVisible(false);
    } else {
        formContext.getControl(lookupField).setVisible(false);
        formContext.getControl(autoCompleteField).setVisible(true);
    }
}

function onAccountLookupChange() {
    onLookupChanged("new_accounttext", "new_account");
}

Updating the Power Apps Solution

For this script to be used, you must first create a Web Resource for it. In this example, I created one called accountwildcard.

Create a Web Resource

Next, you need to add an event handler function to the OnLoad event; be sure to check the box Pass execution context as the first parameter. Add the library to the form and tell it to call the configureForm method for the OnLoad event.

Adding an Event Handler Function

Your form properties should look like this when you are done.

You will also want to make sure the labels for both fields are the same to make the transition look clean.

Make sure both labels are the same

The Final Product

As you can see below, the transition is smooth and seamless. When you type, the Auto Complete control serves up wild-carded results. After hitting tab or moving outside of the control, the Lookup is populated with the match and the Auto Complete is hidden. If you clear the Lookup, it hides itself and shows the Auto Complete again.

This is a specialized solution, and I would recommend steering your client towards using a wildcard in the Lookup control; however, if you have a client that is insistent on this functionality, I hope this saves you some time. The client I did this for is pleased with the results and has had no issues with it since.

To benefit the most from this post, you should understand the following concepts:

  • Dynamics CRM Workflows
  • DocuSign Integration with Dynamics CRM
  • DocuSign merge fields and merge-back

The Problem

I recently experimented with integrating DocuSign with Dynamics 365, specifically the merging of data into a DocuSign form and then writing the data back into Dynamics. After reading the DocuSign for Dynamics 365 CRM – 6.1 documentation, I found that DocuSign Drop Down and Radio Button controls are not supported for Dynamics merging and write-backs. I started work on a solution that would use a Checkbox field in DocuSign and a Two Options field in Dynamics. I had all my text fields working correctly and assumed this would be straightforward, as they were both Boolean fields.

I was disappointed to find out that the solution would not merge. After researching online and trying a few suggestions, I finally decided to add a temporary text field to my DocuSign form and see what Dynamics was putting into it, and found that the value was “Yes.” Then I looked at the form data in DocuSign…and it had the value “X.” I tried replacing the values for “Yes” and “No” in the Dynamics Two Options field with “X” and “O”, but that didn’t work either.

The Solution

I finally decided to change the “Yes” and “No” values to “true” and “false.”

This time, when the data was merged, the checkbox was checked!

And once the client receives the email, fills out the form, and the .pdf files are sent…this is when the ‘X’ we saw in the form data is used:

Finally, I verified it worked end-to-end by unchecking the box in Dynamics and saving the record:

After firing off the workflow to merge data into the DocuSign form, the box is unchecked now:

Send the email off to be filled, check the box and add a new allergy:

Now, wait for the Dynamics envelope status workflow to complete. Check the record, and you will see it has been updated successfully in Dynamics.


Albeit a small issue, I'm surprised I didn't find it documented anywhere. So if you've come across this issue working with DocuSign Checkbox fields and Dynamics 365, I hope this post saves you some time!

I recently encountered an issue when trying to create an Exact Age column for a contact in Microsoft Dynamics CRM. There were several solutions available on the internet, but none of them was a good match for my specific situation. Some ideas I explored included:

  1. Creating a calculated field using the formula DiffInDays(DOB, Now()) / 365 or DiffInYears(DOB, Now()) – I used this at first, but if the calculated field is a decimal type, you end up with a fractional value like 46.9 years old, which is not desirable. If the calculated field is a whole number type, the value is always rounded. So, if the DOB is 2/1/1972 and the current date is 1/1/2019, the Age will show as 47 when the contact is actually still 46 until 2/1/2019.
  2. Using JavaScript to calculate the Age – The problem with this approach is that if the record is not saved, the data becomes stale. It also does not work in a view (i.e., if you want to see a list of client ages). The JavaScript solution seems geared towards the form UI experience only.
  3. Using Workflows with Timeouts – This approach seemed a bit complicated and cumbersome to update values daily across so many records.
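To see why option 1 misbehaves, the rounding can be reproduced outside CRM. Below is a plain-JavaScript approximation of the whole-number calculated field (the DiffInDays-style math is simulated with millisecond arithmetic; this is not CRM formula syntax):

```javascript
// Simulate a whole-number calculated field: DiffInDays(DOB, Now()) / 365.
// CRM rounds the result to the nearest integer, which overshoots the real age.
var msPerDay = 24 * 60 * 60 * 1000;
var dob = new Date(1972, 1, 1); // 2/1/1972 (JavaScript months are zero-based)
var now = new Date(2019, 0, 1); // 1/1/2019
var diffInDays = Math.round((now - dob) / msPerDay);
var age = Math.round(diffInDays / 365);

console.log(age); // 47, even though the contact is still 46 until 2/1/2019
```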

Determining Exact Age

Instead, I decided to plug some of the age scenarios into Microsoft Excel and simulate Dynamics CRM's calculations to see if I could come up with any ideas.

Note: 365.25 is used to account for leap years. I originally used 365, but the data was incorrect. After reading about leap years, I decided to plug 365.25 in, and everything lined up.

Excel Formulas

Setting up the formulas above, I was able to calculate the values below. I found that subtracting the DATEDIF Rounded value from the DATEDIF Actual value produced a negative value when the month/day was after the current date (2/16/2019 at the time). This allowed me to introduce a factor of -1 when the Difference was less than or equal to 0.  Using this finding, I set up the solution in CRM.

Excel Calculations

The Solution

  1. Create the necessary fields.
    Field           Data Type       Field Type  Other                 Formula
    DOB             Date and Time   Simple      Behavior: User Local
    Age Actual      Decimal Number  Calculated  Precision: 10         DiffInDays(new_dob, Now()) / 365.25
    Age Rounded     Whole Number    Calculated                        DiffInDays(new_dob, Now()) / 365.25
    Age Difference  Decimal Number  Calculated  Precision: 10         new_ageactual - new_agerounded
    Age             Whole Number    Calculated                        See below
  2. Create a business rule for DOB, setting it equal to birthdate when birthdate contains data. This way, when birthdate is set, the DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  3. Set up the Age calculated field as follows:
    Age Calculated Field
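The steps above can be sketched in plain JavaScript to sanity-check the logic. This mirrors the Age Actual, Age Rounded, and Age Difference fields from the table; it is an approximation using millisecond arithmetic, not CRM formula syntax:

```javascript
// Plain-JavaScript analogue of the Age Actual / Age Rounded / Age Difference fields.
function exactAge(dob, asOf) {
    var msPerDay = 24 * 60 * 60 * 1000;
    var actual = ((asOf - dob) / msPerDay) / 365.25; // Age Actual
    var rounded = Math.round(actual);                // Age Rounded
    var difference = actual - rounded;               // Age Difference
    // A non-positive difference means rounding overshot the real age,
    // so apply the factor of -1 described above.
    return difference <= 0 ? rounded - 1 : rounded;
}

// The DOB from the earlier example: 2/1/1972 (months are zero-based).
console.log(exactAge(new Date(1972, 1, 1), new Date(2019, 0, 1)));  // 46
console.log(exactAge(new Date(1972, 1, 1), new Date(2019, 1, 16))); // 47
```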

Once these three steps have been completed, your new Age field should be ready to use. I created a view to verify the calculations. I happened to be writing this post very late on the night of 2/16/2019. I wrote the first part before 12:00 a.m., then I refreshed the view before taking the screenshot below. I was happy to see Age Test 3 record flip from 46 to 47 when I refreshed after 12:00 a.m.

Age Solution Results

Determining Exact Age at Some Date in the Future

The requirement that drove my research for this solution was the need to determine the exact age in the future. Our client needed to know the age of a traveler on the date of travel. Depending on the country being visited and the age of the traveler on the date of departure, different forms would need to be sent in order to prevent problems when the traveler arrived at his or her destination. The solution was very similar to the Age example above:

The Solution

  1. Here is an overview of the entity hierarchy:
    Age at Travel Entities
  2. Create the necessary fields.
    Entity        Field                     Data Type       Field Type  Other                 Formula
    Trip          Start Date                Date and Time   Simple      Behavior: User Local
    Contact       DOB                       Date and Time   Simple      Behavior: User Local
    Trip Contact  Age at Travel Actual      Decimal Number  Calculated  Precision: 10         DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact  Age at Travel Rounded     Whole Number    Calculated  n/a                   DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact  Age at Travel Difference  Decimal Number  Calculated  Precision: 10         new_ageattravelactual - new_ageattravelrounded
    Trip Contact  Age at Travel             Whole Number    Calculated  n/a                   See below
  3. Create a business rule for Contact DOB, setting it equal to birthdate when birthdate contains data. This way, when birthdate is set, the DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  4. Set up the Trip Contact's Age at Travel calculated field as follows:
    Age at Travel Calculated Field

Once these steps have been completed, your new Age at Travel field should be ready to use. I created a view to verify the calculations.

You’ll notice that in the red example, the trip starts on 8/14/2020. The contact was born on 9/29/2003 and is 16 on the date of travel but turns 17 a month or so later. In the green example, the trip is also on 8/14/2020. The contact was born 4/12/2008 and will turn 12 before the date of travel.

Age at Travel Solution Results
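Those two examples can be sanity-checked with a self-contained plain-JavaScript analogue of the calculated fields (an approximation, not CRM formula syntax):

```javascript
// Plain-JavaScript analogue of the Age at Travel calculated fields.
function ageAtTravel(dob, tripStart) {
    var msPerDay = 24 * 60 * 60 * 1000;
    var actual = ((tripStart - dob) / msPerDay) / 365.25; // Age at Travel Actual
    var rounded = Math.round(actual);                     // Age at Travel Rounded
    // Apply the -1 factor when the difference is non-positive.
    return (actual - rounded) <= 0 ? rounded - 1 : rounded;
}

// Red example: born 9/29/2003, traveling 8/14/2020 (months are zero-based).
console.log(ageAtTravel(new Date(2003, 8, 29), new Date(2020, 7, 14))); // 16
// Green example: born 4/12/2008, traveling 8/14/2020.
console.log(ageAtTravel(new Date(2008, 3, 12), new Date(2020, 7, 14))); // 12
```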


While there are several approaches to the Age issue in Dynamics CRM, this is a great alternative that requires no code and works in real time. I hope you find it useful!

I came across an interesting bug while trying to add a user to the Administrators of a Search Service Application in SharePoint 2013. When I tried adding the user and clicking OK, an error was returned: “User does not have permission to perform this action” along with a correlation ID. Further investigation in the ULS logs revealed that the problem was SQL permission related: “The EXECUTE permission was denied on the object ‘proc_MSS_GetConfigurationProperty’, database ‘SPSearch’, schema ‘dbo’.” Additionally, performing a search failed and logged the error: “There was an exception in the Database. Please retry your operation and if the problem presists, contact an administrator.” (Note the typo in the error message.)
Recently, I encountered an issue with SharePoint 2013 search crawls where .pdf files smaller than 1 MB reported a warning: “The item has been truncated in the index because it exceeds the maximum size”. The default MaxDownloadSize for documents in SharePoint is 64 MB, which should have been more than enough to handle these relatively small .pdf files.

After I reached out to some co-workers, one suggested that the warning might be a false positive and that the entire document had been crawled. I tested this by first searching for words at the end of the document; no matches were found, which would be expected if it were truncated. Next, I tried searching for text in the middle of the document; no matches were found either. I thought it must have truncated a lot of text, so I tried searching for text at the very beginning of the document. No results were found! So when the warning said it truncated the item, it had actually truncated the whole document.

Introduction

One of the many challenges that SharePoint developers face is returning meaningful search results that allow users to access information efficiently. Oftentimes, data retrieved from search could be more effective if we could modify it slightly. Other times, search results would be enhanced if we could include related information that does not reside within SharePoint. FAST for SharePoint 2010 provided pipeline extensibility which allowed us to modify content on the “pipeline” using a PowerShell script or a compiled application. SharePoint 2013 introduced Content Enrichment which allows us to enrich content during the content processing phase using a WCF Web Service as seen in Figure 1. In this 3-part series, we will examine Content Enrichment being leveraged to enhance data three different ways. In part one, we will develop a simple Content Enrichment Service that combines two existing SharePoint managed properties into a single managed property. In part two, we will enhance data by taking a single managed property and querying a database to obtain related details. Finally, in part three, we will enhance content by taking a single managed property and obtaining details from a web service.
