Application Performance Management (APM) is the monitoring and management of the performance and availability of a software application. A proactive approach to monitoring an application can reduce business escalations and increase the application's availability to greater than 99.9%. It helps detect and diagnose complex application issues with minimal effort. The customer experience quotient is a powerful metric for understanding whether the end users of an application are pleased or disappointed with how it is performing. APM tools can be used by IT and business teams to better understand customer pain points and identify opportunities to continually improve an application's features. These tools monitor multiple aspects of an application, such as the application layer, network servers, integration points to other systems, the database, and workflow processes.

Dynatrace is a software-intelligence monitoring platform that simplifies enterprise cloud complexity and accelerates digital transformation. It can be used for Real User Monitoring, Mobile App Monitoring, Server-side service monitoring, Network monitoring, process, and host monitoring, Cloud and Virtual Machine Monitoring, Container Monitoring, and Root Cause Analysis.

In this article I will explain the process of setting up a Dynatrace Synthetic Browser Type Monitor, which helps monitor the availability and performance of web applications as experienced by an end user. It provides 24/7 visibility into how an application is performing, which helps application teams make critical decisions. Dynatrace offers two types of Synthetic monitoring.

  1. Browser Type Monitors
  2. HTTP Type Monitors

Google Chrome is the supported browser for building synthetic monitors.

Creating a Browser Monitor

After logging into Dynatrace with the appropriate credentials, click on the Synthetic option.


Step 1: Click on Create a browser monitor.

Step 2: Install the Dynatrace Synthetic Recorder Chrome extension. This extension makes it easy to record click paths.

Step 3: Provide a name for the monitor and the URL of the application or endpoint you plan to monitor.


Step 4: Click on the Record click path option so that the Dynatrace Synthetic Recorder starts recording your actions. In the recording browser instance that pops up, perform the necessary actions to simulate the use case you want to monitor.


After performing the necessary actions, click on the Cancel option.
The events in the recorded click path will be displayed.


Step 5: Select the frequency and the locations.

Step 6: The summary screen provides the details of the monitor so you can review them before creating it.


After a few minutes, you can see the monitoring data for the monitor.

For more details on how the application is performing, click on the monitor.


Click on Analyze Availability.

To analyze a specific run, click on a location and you can drill down on the details.

You can enable, disable, delete, or edit a monitor by clicking on the three dots at the top.


Synthetic Alerting Process

Dynatrace can create Problems and send alerts whenever a monitor fails due to availability or performance issues, based on the thresholds that we set. The thresholds for a monitor are specified in its settings.



A local problem is created when the monitor is unavailable for one or more consecutive runs at any location.
A global problem is created when all the monitored locations are unavailable.


Performance Thresholds

For example, if we expect the first event, the loading of the page, to complete in less than 5 seconds, we can set the threshold accordingly. A problem is created when any location exceeds the configured threshold.
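To make the threshold concrete, here is a hedged sketch of a browser-monitor definition with a 5-second load-time threshold, loosely following the JSON shape of the Dynatrace Synthetic Monitors API. Treat the field names (`frequencyMin`, `anomalyDetection.loadingTimeThresholds`, the location ID) as assumptions to verify against your environment's API documentation, not as a definitive payload.

```python
# Sketch only: assemble a browser-monitor payload with a load-time
# threshold. Field names approximate the Dynatrace Synthetic Monitors
# API and should be verified against the official API reference.

def build_monitor_payload(name, url, frequency_min, locations, load_threshold_ms):
    """Build a browser-monitor definition with a performance threshold."""
    return {
        "name": name,
        "type": "BROWSER",
        "frequencyMin": frequency_min,   # how often the monitor runs
        "enabled": True,
        "locations": locations,          # synthetic location IDs (hypothetical here)
        "script": {
            "type": "clickpath",
            "events": [{"type": "navigate", "url": url}],
        },
        "anomalyDetection": {
            "loadingTimeThresholds": {
                "enabled": True,
                # Raise a problem when total load time exceeds the threshold.
                "thresholds": [{"type": "TOTAL", "valueMs": load_threshold_ms}],
            }
        },
    }

payload = build_monitor_payload(
    "Homepage check", "https://example.com", 15, ["GEOLOCATION-1234"], 5000
)
print(payload["anomalyDetection"]["loadingTimeThresholds"]["thresholds"][0]["valueMs"])
```

The payload would then be POSTed to the Synthetic Monitors endpoint with an API token; only the threshold-building logic is shown here.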



A problem is created when a monitor violates the thresholds. To identify the list of Active Problems, click on the Problems section.

For example, there is an ongoing Problem for a monitor. For more details on the failure, click on the Problem.

Availability Check
Click on Analyze availability to get more details on the reason for the failure. We see that the URL is not valid, so Dynatrace has reported a DNS Lookup Failure error. Please note that the URL was deliberately modified to produce a failure.


Dynatrace Email Notification

Dynatrace allows you to integrate with multiple third-party systems, such as email, to send notifications when a Problem is created. Let's walk through the setup process.
Navigate to Settings > Integrations > Problem notifications > Set up notifications


Click on Email.


Provide the recipients for the alerts.
Alerting Profile allows you to control exactly which metrics can result in Problem Creation.
Navigate to Settings > Alerting > Alerting Profiles


Let’s create a Profile with the name TestProfile. By default, the profile will be created with the below settings.


The settings can be modified based on your requirements.
By default, the system alerting rules trigger notifications immediately for availability and after 30 minutes for slowdown performance problems.
Availability and Slowdown are the metrics that pertain to Synthetic Monitoring.
We can adjust the recipient configuration using these settings available in the Alerting Profile. For example, you can notify or escalate an issue if a Problem remains open for a longer duration.

To ensure this alerting profile applies to a single monitor, create a Tag and associate the alerting profile accordingly.
For example, I created a Tag for the monitor.


Now, in the Alerting Profile section, I can filter monitors based on Tags, so Dynatrace will apply this rule only to this specific monitor.
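The tag-filtering behavior can be pictured with a small sketch. This is purely illustrative logic, not a Dynatrace API: a profile "matches" a monitor only when the monitor carries every tag the profile requires. The monitor names and tags below are hypothetical.

```python
# Illustrative sketch (not the Dynatrace API): a tag filter narrows an
# alerting profile down to the monitors that carry the required tags.

def monitors_matching_profile(monitors, profile_tags):
    """Return the monitors that carry every tag required by the profile."""
    required = set(profile_tags)
    return [m for m in monitors if required <= set(m["tags"])]

monitors = [
    {"name": "Homepage check", "tags": ["synthetic-demo", "prod"]},
    {"name": "Login check", "tags": ["prod"]},
]
# Only monitors tagged "synthetic-demo" fall under this profile's rules.
print([m["name"] for m in monitors_matching_profile(monitors, ["synthetic-demo"])])
# → ['Homepage check']
```

An untagged profile, by contrast, would match every monitor, which is why tagging is the way to scope alerting to a single monitor.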


After we associate the Tag with the alerting profile, the next step is to associate the email integration with the Alerting Profile.


After the TestProfile is selected, click on Send test notification.
Dynatrace will trigger a test notification email to the recipients.


So, in this blog we have seen how Dynatrace Browser monitors can help identify problems and determine whether a website is experiencing slowness or downtime before the problem impacts end users or customers. They help monitor critical business flows and enable application teams to take proactive action.

I hope you found this interesting. Thank you for reading!

The pandemic has changed the way Microsoft has had to deliver new product enhancements, but it hasn't stopped the product teams from unveiling significant changes to Microsoft 365. Last week, the Microsoft Build conference became the showcase for several Microsoft 365 announcements, and now that it is complete, we can summarize and reflect on how these announcements will change the way we use the platform.

In this post we will look at the highlight announcements and discuss how these changes can impact your usage of Microsoft 365, whether you’re an administrator, user, or implementer.

Microsoft Lists

There is no doubt that one of the biggest announcements last week was Microsoft Lists. What this effectively continues is the trend of Microsoft taking the pieces of SharePoint and building them out across Microsoft 365.

The biggest change is that Microsoft Lists is now its own application inside of Microsoft 365 with its own landing page. It takes what we already had in modern SharePoint lists and makes it available outside of a SharePoint context. Now these lists, which are really small applications, can live outside of SharePoint or be created inside a Group-connected SharePoint Team Site (unfortunately it doesn't seem possible to create them in Communication sites, although you can still get much of the functionality as a SharePoint list in that site design).


These lists have the functionality we are used to like custom formatting, integration with Power Apps/Power Automate, rich filtering, and editing experiences, and more. There are some good enhancements such as a gallery (or “card”) view, a modern monthly calendar view, conditional metadata show/hide based on criteria, a conversational notification creation interface, and a lot more. Also, there are now prebuilt templates for various list types, and all of this is seamlessly available to be surfaced inside of Microsoft Teams.

The richness of Microsoft Lists will allow users to build rather complex applications with a straightforward yet powerful interface, and when you want to do something more complex, the Power Platform will allow you to enhance them even further.

Here are Microsoft resources explaining the announcement in greater detail:

Enhancements to Microsoft Teams

While Microsoft Lists may have been the biggest single addition to Microsoft 365 last week, there remains no mystery that Microsoft Teams continues to be the darling of Microsoft 365. To that end, there are several changes that make Teams an ever more compelling product, and that is especially true as the pandemic pushes more organizations to embrace distributed work.

AIS' Accelerated Teams solution deploys Microsoft Teams within days to support your remote workforce and keep them productive.

There have been recent changes such as a new 3×3 video grid in calls, "raise a hand" to ask a question, and changes to the pre-join experience that make it easier to configure settings. These weren't announced directly at Build, but they are important changes worth mentioning. For an overview, see this video on Microsoft Mechanics: Microsoft Teams Updates | May 2020 and Beyond. One seemingly small but important change is that the search box in Teams can now default to your current context, such as a chat, which greatly improves discoverability.

Regarding developer announcements at Build, several new changes were announced:

  • New interface inside of tenant administration to build Teams templates where you can set pre-defined channels and tabs/apps.
  • New Visual Studio and Visual Studio Code extensions to build apps for Teams.
  • Single-button deployment of Power Apps applications into Teams.
  • New Power Automate triggers for Teams.
  • Customizable application notifications using the Microsoft Graph.

The biggest takeaway from all these announcements is that Microsoft wants to provide as many avenues to quickly extend Teams whether that’s a more traditional programmatic solution using the Visual Studio family of products or using the Power Platform to enable a new class of power users that are familiar with those products.

Read more about these announcements at the Microsoft Teams blog: What’s New in Microsoft Teams | Build Edition 2020.

Project Cortex Release Date and Taxonomy APIs

While Project Cortex was announced at the Ignite Conference last year, we now know that it will enter general availability in early summer this year, which may be no more than a month or two away. While the impact Project Cortex will have on our Microsoft 365 implementations remains to be seen, it certainly has the promise to change the dynamic of how we do information management in Microsoft 365.

The interesting announcement for developers was a set of new APIs for performing CRUD operations on the Term Store through the Microsoft Graph. This has never been possible before, and it will be interesting to see how customers integrate this functionality. What is clear is that if you have been ignoring either the Microsoft Graph or Managed Metadata, now is the time to investigate how these opportunities can maximize your Microsoft 365 investment.

Microsoft Graph Connectors Entering Targeted Release

Like Project Cortex, this is not a new announcement, but the fact that these are now going to be more broadly available in the targeted release channel in the near future is an exciting development. Essentially, these connectors allow your organization to surface external data sources into search using the Microsoft Graph. If you’re interested in seeing the range of connectors available, check out the Microsoft Graph Connectors gallery.

Implement Today

If you are interested in more Microsoft 365 Announcements, Microsoft has released its Build conference book of news that summarizes all the announcements across all their product lines.

There were great announcements last week, but digesting them can be daunting. Let AIS help you understand their impact on your organization and ensure your investment in Microsoft 365 is being maximized. Contact us today to start the conversation.

In this post let’s see how Microsoft Teams’ capability to embed tools and applications on Teams channels can be leveraged to enhance collaboration and visibility within a team.

We will create a Power App to record Objectives and Key Results (OKR) of employees and teams. We then create a Power BI report to better visualize the data and metrics. The Power App and the Report will be added as tabs to the Teams channel so that users can collaborate with their team members, keep updating the OKR, and view the progress charts.

Let’s Get Started!

Data Model

One of the critical aspects of the app is identifying the attributes needed to capture the relevant data. As the name suggests, Objectives and Key Results are the primary attributes. Along with those, we will also need the Owner (user), the Team to which the owner belongs, and a Progress Bar to track the completion of the Key Results for each Objective. For this app, we will use a SharePoint list as the data source and implement the above data model.
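The data model above can be sketched in a few lines. This is an illustrative model only: the article's actual store is a SharePoint list, and the field names here simply mirror the attributes just described (objective, key results, owner, team, progress).

```python
# Minimal sketch of the OKR data model described above, using plain
# dataclasses. In the real app the store is a SharePoint list; these
# fields mirror that list's columns.

from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyResult:
    title: str
    progress: int  # 0-100, drives the progress bar

@dataclass
class Objective:
    title: str
    owner: str   # the user who owns the objective
    team: str    # the team the owner belongs to
    key_results: List[KeyResult] = field(default_factory=list)

    def completion(self) -> float:
        """Objective progress = average progress of its key results."""
        if not self.key_results:
            return 0.0
        return sum(kr.progress for kr in self.key_results) / len(self.key_results)

obj = Objective("Improve onboarding", "Alice", "HR")
obj.key_results += [KeyResult("Publish guide", 100), KeyResult("Run workshop", 50)]
print(obj.completion())  # 75.0
```

The `completion()` rollup is the same calculation the Power BI report performs when charting objective completion by team or by user.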

Power Apps

  1. The canvas app is created in a tablet mode and the SharePoint list is added as a data source. Relevant styling and functionality are built in the app.
  2. This is a screen on which data from SharePoint is displayed in the context of the currently logged-in user. Users can perform CRUD operations on the data from this screen.
  3. This tab displays basic information such as the current date and the currently logged-in user.
  4. This is a refresh button that refreshes the data source and brings the latest data to the app.
  5. This icon creates a new item in SharePoint with a parent title to store the Objectives.
  6. This is a gallery control that displays data from the SharePoint list. Each objective can have multiple key results that can be created by hitting the ‘+’ icon in the gallery.

Power BI

The data stored on the SharePoint list is analyzed via charts on a Power BI report. Charts are created to visualize metrics as below:

  1. Percentage objective completion
  2. Objective completion by team
  3. Objective completion by individual users


Relevant metrics can be similarly visualized using charts in Power BI and the data set can be set to refresh on a schedule.

MS Teams (Setup in Action)

Team members can collaborate on a single channel and do not have to switch between devices, platforms, screens, applications, etc. The canvas app is added to the Teams channel as a tab as shown in the image below.


The Power BI report is added to the channel as a tab as shown in the image below.


In this post we saw how day-to-day processes within groups in an organization can be digitally transformed into apps using Power Apps, and how metrics analyzed through Power BI visuals can be surfaced in MS Teams channels to enhance collaboration and teamwork.
Teammates can view their respective objectives, responsibilities, and track their productivity in a single place without having to launch or switch between different applications or reports.

Processes like these bring transparency in evaluating productivity with respect to the tasks, teamwork, and individual standpoints. Microsoft Teams is a great way to enable your teams to better collaborate, and plugging in Power Apps, Power BI, and other tooling can snowball the impact it has on your organization’s digital transformation efforts.

I hope you found this interesting and it helped you. If you’re looking to empower your team with a similar solution, check out our Accelerated Teams Enablement or Teams Governance Quick Start offerings, as well as our Power Platform Quick Start! Thank you for reading!

I was first introduced to Azure Cognitive Search at the Microsoft AI Dev Intersection conference in 2019, and I thought I would write a quick blog post to help others understand its features and benefits. This blog is not only for developers; if you are a Business Analyst, SharePoint Analyst, Project Manager, or Ops Engineer, you will still find the information useful.


Azure Cognitive Search (ACS) is a cloud search service that can use artificial intelligence (AI) to extract additional metadata from images, blobs, and other unstructured data. It works well for both structured and unstructured data. In the past, we needed to set up a separate search farm to fulfill the search requirements of a web application. Since ACS is a Microsoft cloud service, we do not need to set up any servers or be search experts. You can prove these concepts in front of your customer in minutes.

When can we use it?

Most businesses have many handwritten documents, forms, emails, PowerPoint presentations, and Word documents full of unstructured data. For handwritten documents, even after scanning and digitizing them, how can we make the content searchable? If you have images, drawings, and picture data, how do we extract the text content and make it searchable? If you have many handwritten documents, you can scan them and upload them to Azure Blob Storage containers in an organized fashion, and Azure Cognitive Search can import the documents from the blob containers and create search indexes. The diagram below shows the paper document flow.

Paper Documents Flow:


Below are a few cases where ACS can really come handy:

  • A local file share has many documents and is running out of space. Example: If your organization stores documents on a file server, you can index those documents using ACS and provide a good search experience, so users do not have to search through Windows File Explorer. You can design a nice web application UI that searches using the ACS indexes.
  • The customer already has data in the cloud, such as data stored in Azure Blob Storage, Azure SQL Database, or Azure Cosmos DB. ACS can easily connect to and create indexes on all three.
  • International businesses have documents in many languages. Out of the box, ACS can translate indexed results into many different languages, so you can show your search results in a different language as well.
  • The client needs to apply AI to business documents.
  • Documents are lacking metadata. Example: documents that have only a Title as metadata, so all you can search by is the Title! ACS can extract many key phrases from documents, and we can search on those key phrases as well.

We will next learn how to quickly prove this concept.

Creating Service and Indexes from Azure Portal

The below diagram shows the simple flow from the Azure portal. You can prove the ACS concepts in front of clients in minutes.


Log in to the Azure portal and create the Azure Cognitive Search service. You can find the steps for creating ACS here.

Once your service has been created, follow the below steps to quickly prove the concept.

  • Step 1: Start with documents (unstructured text) such as PDF, HTML, DOCX, emails, and PPTX in Azure Blob Storage. Upload your content to Azure Blob Storage, and in ACS, import your data from Azure Blob Storage.
  • Step 2: Select this option if you would like to apply cognitive skills (see the next section for understanding the cognitive skills)
  • Step 3: Define an index (structure) to store the output (raw content, Step 2-generated name-value pairs).
  • Step 4: Create an indexer; the indexer fills the data into your index fields.

(See the next section for understanding the Index and Indexer)

  • Step 5: You can quickly search on indexes by using Azure Search Explorer.

Understanding Index and Indexer

A search index is like an empty table with fields. If you want to search your data, first figure out which fields should be searchable. Once the fields are decided, how do we populate data into them? The search indexer pulls the data from your source and fills your search indexes so you can search on them. It is very quick to define search indexes and create an indexer from the Azure portal in ACS. In ACS, a search index is just a set of JSON objects.
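Since an ACS index is just JSON, a small sketch makes the table-with-fields analogy concrete. The index below follows the general shape of the ACS REST API's index definition, but the specific field names and attributes are illustrative assumptions for this example, not a definition to copy as-is.

```python
# Sketch of an ACS index definition in roughly the JSON shape the
# service uses. Field names and attributes here are illustrative.

index_definition = {
    "name": "documents-index",
    "fields": [
        # The key field uniquely identifies each document in the index.
        {"name": "id", "type": "Edm.String", "key": True, "searchable": False},
        {"name": "title", "type": "Edm.String", "searchable": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
        # A field the enrichment pipeline (cognitive skills) can populate.
        {"name": "keyPhrases", "type": "Collection(Edm.String)", "searchable": True},
    ],
}

# The indexer's job is to fill these fields from the data source.
searchable = [f["name"] for f in index_definition["fields"] if f.get("searchable")]
print(searchable)
```

The indexer then maps source columns (blob metadata, extracted text, skill outputs) onto these fields on a schedule or on demand.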


Understanding Text Cognitive Skills and Image Skills 

Out of the box, the text cognitive skills in ACS can extract people's names, organization names, location names, and key phrases from your data or documents. Text cognitive skills can also translate results into different languages and detect the source language.

See below an example of results translated into the Hindi language.


Image skills can generate tags and captions from images and can also identify celebrities.
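To show how text and image skills sit together, here is a hedged sketch of a skillset in the general JSON shape the ACS REST API uses. The two `@odata.type` values are real skill types, but the skillset name, inputs, and outputs below are illustrative assumptions for this example.

```python
# Sketch of an ACS skillset combining a text skill (key phrases) and an
# image skill (tags/descriptions). The wiring of inputs/outputs depends
# on your data source and enrichment pipeline; this is illustrative.

skillset = {
    "name": "demo-skillset",
    "skills": [
        {
            # Extracts key phrases from each document's text content.
            "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "keyPhrases", "targetName": "keyPhrases"}],
        },
        {
            # Generates tags and captions from images found in documents.
            "@odata.type": "#Microsoft.Skills.Vision.ImageAnalysisSkill",
            "visualFeatures": ["tags", "description"],
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "tags", "targetName": "imageTags"}],
        },
    ],
}

skill_names = [s["@odata.type"].rsplit(".", 1)[-1] for s in skillset["skills"]]
print(skill_names)
```

The skill outputs (`keyPhrases`, `imageTags`) would then be mapped into searchable index fields by the indexer.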

See below a JSON search index as an example of an image cognitive skill.



Since Azure Cognitive Search is a cloud service, it is very quick to use if you already have data in the cloud or on-premises. If you have data in your own data center, you can push it into Azure Cognitive Search indexes. The two demo sites below are my favorites; they used ACS to extract content from paper documents and images.


On a previous project I was a part of, the client wanted to explore a chatbot application for its employees. The goal was for the chatbot to help increase the office's productivity. Certain skills would be developed to enable swift access to popular actions such as opening a timesheet, password reset help, etc. The client also expressed a need for seamlessly adding new features to the chatbot when necessary. It was also decided that the chatbot would communicate with external services to fetch data. Taking what was discussed, we went to the drawing board to devise a plan for developing a scalable solution.

On top of the application having to be scalable, we decided to make it as maintainable as possible too. Since this application will grow over time, it was key for us to lay down a foundation for how the chatbot would interact with classes and other services. As the architecture was finalized, it became apparent that there were critical dependencies on several Azure cognitive services, so it was important to ensure that the chatbot application would remain maintainable as it accommodated those services. To accomplish this, we used a cascading approach to calling our dependencies.

Before I delve into the cascading approach, I want to spend some time talking about bots and the services used alongside them. Ultimately, the main goal of a bot is to accept a request from a user and process it based on what they ask for. For example, a user can ask a bot a question about company policies, the weather, recent documents they worked on or to open web pages.


Now, in order to process those types of requests, Azure provides a couple of cognitive services to assist. One of these services is called LUIS (Language Understanding Intelligent Service). At a high level, LUIS determines an "intent" from statements (often called utterances) that you define in custom models that you build and train. For example, LUIS can receive an utterance of "What's the weather". When an intent is found, there will be a confidence score (a value ranging from 0 to 1 inclusive) associated with it. This score shows how confident the service was in determining the intent: the closer the value is to 1, the more confident the service was, and the closer it is to 0, the less confident it was. In this example, the intent could be something like "GetWeather" with a 0.96 confidence score.
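The intent-plus-score idea can be sketched in a few lines. The response dictionary below is a simplified, hypothetical stand-in for a LUIS prediction response, not the service's actual wire format; the point is only how a top intent is selected and gated by a threshold.

```python
# Illustrative sketch: pick the top intent from a simplified,
# LUIS-style response and gate it on a confidence threshold.

def top_intent(response, threshold=0.70):
    """Return (intent, score) if the best intent clears the threshold,
    otherwise (None, score) so the caller can fall back."""
    intent, score = max(response["intents"].items(), key=lambda kv: kv[1])
    return (intent, score) if score >= threshold else (None, score)

response = {
    "query": "What's the weather",
    "intents": {"GetWeather": 0.96, "None": 0.04},
}
print(top_intent(response))  # ('GetWeather', 0.96)
```

The `(None, score)` fallback is what later enables the cascading behavior: a low-confidence result simply hands the request to the next handler.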

QnA Maker

Another cognitive service that is used with bot apps is QnA Maker. This service excels at housing data that is best suited to question-and-answer pairs. The pairs are stored in what's called a knowledge base, which typically encapsulates data pertaining to a specific business domain (i.e., Payroll, HR, etc.). Like LUIS, QnA Maker utilizes machine learning, cognitive models, and confidence scores. When a QnA Maker knowledge base receives a question, it uses machine learning to determine whether there is an answer associated with the question, and a confidence score (ranging from 0 to 1 inclusive) is associated with the results. If you would like to learn more about bot development and the different cognitive services offered in Azure, check out the links at the bottom of this post.

The Initial Approach

The chatbot solution included 1 LUIS service along with 3 separate QnA Maker knowledgebases. In our initial approach, we created intent definitions in LUIS that corresponded with each of our QnA Maker knowledgebases. We then trained LUIS to recognize if the user’s message was a question that could be answered by one of the knowledgebases. When messages came to the bot from the user, we would always send them to LUIS first. If it returned an intent that corresponded with one of our QnA Maker knowledgebases, we would then redirect the request to the identified knowledgebase. Then the knowledgebase would hopefully recognize the question and return an answer. That said, each call to a knowledgebase was dependent on the LUIS service correctly recognizing intents. This was not an ideal approach.

Having the QnA Maker knowledge bases dependent on the LUIS service was an issue. For a knowledge base to get a hit, the LUIS model would need to be properly trained and up to date, built with data closely matching each QnA Maker knowledge base. If the LUIS model were updated in a way that impacted a given knowledge base, that knowledge base would have to be updated and retrained to contain the new data from the LUIS model, just to keep the two models in sync. As you can probably see, this poses a maintenance concern.

Cascading Solution

So, in order to alleviate this concern, a different approach was taken. The LUIS model would have no knowledge of any data from the QnA Maker knowledgebases and vice versa. That meant updating the LUIS model to remove data that corresponded to any of the QnA Maker knowledgebases. The same approach was done within each QnA knowledge base. This made it so both LUIS and QnA Maker were completely independent of each other. This led to having a cascading approach to calling each of our dependencies. As a result, this would resolve the imposing maintenance issue. (See image below)

Cascading Approach

It is worth noting that we used Microsoft’s Bot Framework SDK for this solution, but the strategies you will see in this post can be used for any chatbot technology.

If the LUIS request handler was unable to handle the request, no problem! The next request handler would attempt to handle the request. This flow would proceed until one of the request handlers successfully handled a request. If none were successful, then the chatbot would tell our telemetry client, in our case Azure App Insights, to log the unrecognized user message. This would provide insight into model training opportunities. Finally, the chatbot would return a default message back to the client. (See image below)

Chatbot return a default message back to the client

Cascading Solution: Confidence Score Thresholds

Each result returned by a cognitive service holds a confidence score. This data proved to be very useful for the chatbot. In the LUIS and QnA Maker request handler classes, there was logic to determine if the returned confidence score met a given threshold. If the score was high enough, meaning the service was confident that it found the right data, then the given request handler can proceed with handling the request. If the score was found to be lower than the threshold, then the request handler does not continue with handling the request. (See image below of a sample method to handle an intent request)

Cascading Solution: Confidence Score Thresholds

Instead, the next request handler will be told to execute. Having this implementation in place helps us be explicit about what counts as an acceptable confidence score. That said, choosing a threshold depends on how extensive your cognitive models are. If your models account for various phrasings and spellings of keywords, your cognitive services will have an easier time identifying intents and answers. In practice, I found 0.70 and 0.75 to be satisfactory threshold values.

Cascading Solution: Dialog Turn Status

The final piece to the cascading solution was handling the progression or conclusion of a dialog turn. Think of a dialog turn as a face-to-face conversation. You might initiate the conversation with a question, which is a turn. Then, the other person would answer your question. That is also a turn. This can continue until the conversation ends. Conversations with a bot follow a similar flow. When it’s the bot’s “turn” to reply to the client, it performs its logic then responds. Below is a diagram, provided by Microsoft, illustrating the high-level nature of a conversation with a bot.

Cascading Solution: Dialog Turn Status

In the cascading solution, we were explicit when the bot’s turn was over or when it should continue processing the request. Ultimately, when the chatbot found an answer to the user’s question or request, we would tell the chatbot that its turn is complete. On the contrary, we had several scenarios where we told the chatbot to keep its turn going. One scenario was if LUIS did not return an intent or if the confidence score was below our threshold. Another one was if a QnA Maker knowledge base did not find an answer to the given question passed to it. After each request handler executes, there is a check to verify if the turn is complete or not.

Closing Thoughts

The cascading approach for handling calls to the different services/knowledgebases was a huge win for this bot application. It offers clear, concise, and manageable code. Every LUIS and QnA Maker cognitive model is independent of each other and each request handler is independent of each other as well. In addition, the implementation of confidence score thresholds ensured that we were explicit with how we further processed client requests. Finally, adding progression and termination logic for a dialog turn certified that it would be appropriately processed. This whole approach helped paint a clear picture of what our chatbot was doing.


Adjusting to remote work has allowed me to uncover some extra time to record a Power Platform Administration Foundation Pluralsight course to complement the “dev side” course released in 2018, Low-Code Development with Power Apps.

This course is designed for developers, both citizen and professional, who are interested in a low-code approach to building mobile applications. In it, I use aspects of Azure PowerShell/CLI to demonstrate Power Platform management, as I see a commonality between PowerApps and Azure administration.

You can find the Pluralsight course here.

My daughter drew the following sketch to capture the principal motivation for creating this course: helping Power Platform admins attain equilibrium. Today, the balance seems to tilt toward the citizen developers, making IT departments uncomfortable with moving forward with Microsoft Power Platform adoption.

Illustration of Power Platform Course

The Power Platform Adoption Framework, created by Andrew Welch and team, speaks to the “adoption at scale” theme and lays out an approach to enterprise management and governance that resonates with customers and puts IT and security departments at ease.

This course teaches admins the basic skills needed to effectively administer the Power Platform. My hope is that by making the Power Platform admin tools somewhat match those of Azure, acceptance of the platform will come a bit easier.

Congratulations to the AIS USA and India teams who represented AIS this past weekend in the global Hack4Good Hackathon and took home first and second place! The two teams of AIS employees went up against some of the best in the world – 390 participants across 40 global teams – and we are very humbled to be able to share their success. Not only did they place globally, but each group took home first place in their regions. Check out the details of their solutions below.

Team BTD: Winner of India Region and Global Winner

The hackathon was focused on building solutions based on one’s location. With lockdowns and strict quarantining happening in India, we came up with a solution to centralize donations. State and central governments have been sanctioning rations and funds, but these are not entirely reaching those actually in need. Many people have the capacity and the desire to donate rations and general-use items but are unable to because of the lockdown. Meanwhile, NGOs and other organizations have the appropriate Personal Protective Equipment for their staff, who can pick up items from donors and deliver them to those in need, but they do not have enough material to distribute.

The solution comprises four parts:

  1. Front end for donors to add their donations: We created three mediums for people to list the items they want to donate. These forms collect information such as the items, their best-before dates, a preferred date and time slot for pickup, and the donor’s address. Donors can use a Power Apps Portal app and register to add multiple donations (perhaps helping neighbors who are less tech-enabled), fill out an MS Forms form from a phone, or talk to a Power Virtual Agents bot. The information is all stored in the CDS. Through the chatbot, donors can also track or cancel their donation requests (one of the innovative parts of the solution is generating an OTP code for the user to authenticate in the chatbot).
  2. Canvas app for the pickup personnel: These folks are on the ground, equipped with PPE and permission from the local authorities to pick up items from donors. We created a canvas app for the pickup executives. Once the app is loaded on a mobile phone, the current location of the pickup executive is used to find donor addresses within a 5 km radius (donor locations are calculated from their addresses via the Bing Maps API). The executive can then select a donor request, which renders a Bing map with a route from their current location to the donor’s address. Selecting “Going for pickup” notifies the donor of the pickup executive’s information. Once the executive is within 100 meters of the donor’s address, they can mark the items as picked up in the app and take a picture to be sent to the back end.
  3. Power BI reports: These reports are visible to the NGO team. The main component is a map that displays the donor addresses, so the team can focus on and get active in the areas where most donations are coming from. The reports also show the number of items that will perish in the next 7 days so they can be distributed immediately.
  4. Model-driven app: Once the items are picked up and the picture is sent, the back-end team marks the donation as validated and creates inventory that can be used for distribution management.
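The 5 km radius check in part 2 can be sketched with a haversine great-circle calculation. This is an illustrative stand-in only: the actual app relies on the Bing Maps API inside a canvas app, and the field names below are assumptions.

```python
# Hypothetical sketch of filtering donors to a 5 km pickup radius.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def donors_within_radius(executive_pos, donors, radius_km=5.0):
    """Return donors whose geocoded address lies within radius_km of the executive."""
    lat, lon = executive_pos
    return [d for d in donors
            if haversine_km(lat, lon, d["lat"], d["lon"]) <= radius_km]
```

In the app itself, the donor latitude/longitude pairs would come from geocoding the stored addresses, and the executive position from the phone’s location signal.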

We chose this idea, and built it the way we did, so that it would not die once the pandemic ends: donations can be made at any time, and NGOs and organizations can better track them with the above solution (after some more tweaks and completion).

Check out the first place winner, TeamBTD’s full solution demo:

Team BTD:

Yash Agarwal – Microsoft Business Apps MVP, Software Developer (@yashagarwal1651)
Ritika Agarwal – Cloud Specialist (@agarwal_ritika)
Shivi Agarwal – studying Computer Science
Vansh Agarwal – studying Business Administration for Big Data and Analytics

Team Fighting Faucis: Winner of Americas Region and Global Runner Up

Unlike past hackathons, this global Hack4Good didn’t provide teams with problems to solve. Instead, it encouraged us to think of COVID-19-related problems we had witnessed or experienced firsthand and to solve them with the Power Platform, in 24 hours. As a team, we discussed many of the ways COVID-19 has affected us directly or indirectly, but one image resonated with us the most: a truck dumping potatoes in a Walmart parking lot because the farmer couldn’t sell them to restaurants. We had also, like so many, seen stories of people needing help from local food banks due to sudden unemployment and the closure of schools that often provided free meals to low-income families. We decided to create Farm to Spoon, a solution to connect that farmer with local food banks and get his excess inventory to the people who need it before it is wasted.

The solution has 5 parts:

  1. Farmer’s Administration App: We used CDS to store the farmer’s data about what types of produce he grows, how much is available for donation, and when it expires. This data is visible in a model-driven app that also lets him review all reservations made by local food banks.
  2. Farm to Spoon Website: We used Power Apps Portals to create a website where local food banks can check what food the farmer has available and make reservations to pick it up.
  3. Notifications and Reminders: We used Power Automate to confirm the food bank’s reservation, to let the farmer know when a new reservation has been made, and to send reminder notifications so the food bank picks up its food before the expiration date.
  4. Reservation Checker App: Finally, we created a Power Apps canvas app designed for the farmer’s phone. The farmer asks the person picking up the food for their phone number; the app looks up all reservations associated with that number, provides a signature field for them to sign for the pickup, and then updates CDS so the farmer has a record of all transactions.
  5. Farmer’s Metrics Report: We used Power BI to give the farmer insight into the supply of and demand for his food: what food is currently available for donation and what has been reserved (and by whom).
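The lookup-and-sign flow in part 4 can be sketched as two small functions. This is a hypothetical illustration, not the team’s code: the real app reads and writes CDS entities through Power Apps, and the record fields below are assumptions.

```python
# Hypothetical sketch of the Reservation Checker flow: look up open
# reservations by phone number, then record a signed pickup.
from datetime import datetime, timezone

def reservations_for_phone(reservations, phone):
    """Return open reservations matching the phone number given at pickup."""
    return [r for r in reservations
            if r["phone"] == phone and r["status"] == "reserved"]

def sign_for_pickup(reservation, signature):
    """Mark a reservation picked up, storing the signature and a timestamp."""
    reservation["status"] = "picked_up"
    reservation["signature"] = signature
    reservation["picked_up_at"] = datetime.now(timezone.utc).isoformat()
    return reservation
```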

Check out the solution demo for Fighting Faucis here:

Team Fighting Faucis:
Maxwell Bloch – Senior Software Engineer
Jonathan Eckman – Solutions Architect
Benyamin Famili – Solutions Specialist (OCTO)
Nick Gill – Power Platform Developer
Andrea Pinillos – Solutions Specialist
Uday Virk – Software Engineer
Jamie Willis-Rose – Power Platform Developer