Part one of this series focused on several methods and patterns to make the most of your Terraform code, specifically on keeping your Terraform code and configurations organized and easy to maintain.

Utilizing registries to modularize your infrastructure is only a small part of the improvements you can make to your Terraform code.

One of the central concepts of Terraform is how it tracks the state of your infrastructure with a state file. In Terraform, you need to define a “remote state” for each grouping of infrastructure you are trying to deploy or provision. This remote state could be stored in an S3 bucket, an Azure Storage account, Terraform Cloud, or another applicable service. This is how Terraform knows where to track the state of your infrastructure to determine if any changes need to be applied or if the configuration has drifted away from the baseline defined in your source code. The organization of your state files is essential, especially if you are managing them yourself and not using Terraform Cloud to perform your infrastructure runs.
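As a sketch, a remote state configuration for an Azure Storage account backend might look like the following (all names here are hypothetical, not the client's actual configuration):

```hcl
# Hypothetical backend block: Terraform stores this project's state
# in an Azure Storage account container instead of on local disk.
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state" # assumed resource group
    storage_account_name = "sttfstate"          # assumed storage account
    container_name       = "tfstate"
    key                  = "webui/dev.terraform.tfstate" # one state file per project/tier
  }
}
```

Each distinct grouping of infrastructure gets its own `key`, so separate projects and tiers never contend for the same state file.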

Keep in mind that as your development teams and applications grow, you will need to manage multiple development, testing, quality assurance, and production environments. Several projects and components will need to be managed simultaneously, and the number of configurations, variable files, CLI arguments, and provider settings will become untenable over time.

How do you maintain all this infrastructure reliably? Do you track all the applications for one tier in one state file? Or do you break them up and track them separately?

How do you efficiently share variables across environments and applications without defining them multiple times? How do you successfully apply these terraform projects in a continuous deployment pipeline that is consistent and repeatable for different types of Terraform projects?

At one of our current clients, we were involved with onboarding Terraform as an Infrastructure as Code (IaC) tool. However, we ran into many challenges when trying to deploy multiple tiers (dev, test, stage, production, etc.) across several workstreams, specifically in a continuous manner within a deployment pipeline.


The client I work for has the following requirements for the web UI portion of the services they offer (consider Azure Cloud provider for context):

  • Each tier has *six applications* for different areas of the United States
  • *Each application* has a web server, a Redis cache, an app service plan, a storage account, and a key vault access policy to access a central key store
  • The development and test tiers are deployed to a single region
  • Applications in the development and test tiers share a Redis cache
  • Applications in the stage and production tiers have individual Redis caches
  • The stage and production tiers are deployed to two regions, east and central

Application Development

The stage and production tiers each comprise up to 48 resources; the diagram above represents only three applications and excludes some services. Our client also had several other IT services that needed similar architectural setups; most projects involved deploying six application instances (one for each service area of the United States), each configured differently through application settings.

Initially, our team decided to use the Terraform CLI and track our state files using an Azure Storage Account. Within the application repository, we would store several backend configuration files alongside our Terraform code, one for each tier, and pass them dynamically to terraform init --backend-config=<PATH> when we wanted to initialize a specific environment. We also passed variable files to terraform [plan|apply|destroy] --var-file=<PATH> to combine common and tier-specific application setting template files. We adopted this process in our continuous deployment pipeline by ensuring the necessary authentication principals and Terraform CLI packages were available on our deployment agents and then running the appropriately constructed terraform command in the appropriate directory.
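Concretely, the per-tier files and commands looked something like this (paths and file names are illustrative, not the actual repository layout):

```shell
# Illustrative layout:
#   environments/dev.backend.tfvars  -> storage account, container, state key for dev
#   environments/common.tfvars       -> variables shared by every tier
#   environments/dev.tfvars          -> dev-specific overrides

# Initialize the dev environment against its remote state...
terraform init -backend-config=environments/dev.backend.tfvars

# ...then plan with common variables first and tier-specific ones layered on top.
terraform plan \
  -var-file=environments/common.tfvars \
  -var-file=environments/dev.tfvars
```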

This worked but presented a few problems when scaling our process. The process we used initially allowed developers to create their own Terraform modules specific to their application, utilizing either local modules or shared modules in our private registry. One of the significant problems came when trying to apply these modules in a continuous integration pipeline: each project had its own unique Terraform layout and configurations that our continuous deployment pipeline needed to adhere to.

Naming Conventions

Let’s also consider something relatively simple, like the naming convention of all the resources in a particular project. Usually, you would want the same name prefix on your resources (or apply it as a resource tag) to visually identify which project the resources belong to. Since we had to maintain multiple tiers of environments for these projects (dev, test, stage, prod), we wanted to share this variable across environments, declaring it only once. We also wanted to declare other variables and optionally override them in specific environments. With the Terraform CLI, there is no way to merge inputs and share variables across environments. In addition, you cannot use expressions, functions, or variables in Terraform backend configuration blocks, forcing you to either hardcode your configuration or supply it dynamically through the CLI; see this issue.

We began to wonder if there was a better way to organize ourselves. This is where Terragrunt comes into play. Instead of keeping track of the various terraform commands, remote state configurations, variable files, and input parameters we needed to consolidate to provision our terraform projects, what if we had a declarative way of defining the configuration of our Terraform?

Terragrunt is a minimal wrapper around Terraform that allows you to dynamically assign and inherit remote state configurations, provider information, local variables, and module inputs for your terraform projects through a hierarchal folder structure with declarative configuration files. It also gives you a flexible and unopinionated way of consolidating everything Terraform does before it runs a command. Every option you pass to terraform can be specifically configured through a configuration file that inherits, merges, or overrides components from other higher-level configuration files.

Terragrunt allowed us to do these essential things:

  • Define a configuration file that tells Terraform where to store the remote state for each application and tier (derived from the folder structure)
  • Run multiple Terraform projects at once with a single command
  • Pass outputs from one Terraform project to another using a dependency chain
  • Define a configuration file that tells Terraform which application setting template file to apply to an application (we used .tpl files to apply application settings to our Azure compute resources)
  • Define a configuration file that tells Terraform which variable files to include in your Terraform commands
  • Merge common input variables with tier-specific input variables, with the desired precedence
  • Consistently name and create state files
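For illustration, a minimal Terragrunt setup along these lines might look like the following (the module source and resource names are hypothetical, not our client's actual configuration):

```hcl
# Root terragrunt.hcl: every child project inherits this remote state block.
# The state key is derived from each project's position in the folder tree,
# which is what gives consistent state file naming and creation.
remote_state {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-terraform-state" # assumed
    storage_account_name = "sttfstate"          # assumed
    container_name       = "tfstate"
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}

# live/dev/app1/terragrunt.hcl: pulls in the root config and layers dev inputs on top.
include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::https://example.com/modules/web-app.git//app?ref=v1.0.0" # hypothetical module
}

inputs = {
  tier = "dev" # merged with (and able to override) common inputs defined higher up
}
```

Running `terragrunt run-all apply` from a parent folder then applies every project underneath it, resolving declared dependencies between projects.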


Custom pages give developers full flexibility to customize a page and connect to multiple data sources. Moreover, they let us use low-code logic, using the canvas designer to create and edit pages.


  • Previously in a Model-Driven App, we could only add existing views, charts, and dashboards, which all correspond to one data source. On a custom page, you can add one or more data sources using the built-in connectors or by creating a custom one.
  • Instead of using the predefined layouts of forms and views, you can create your page’s own design and layout, which can be responsive too, using containers.
  • From one custom page, you can also navigate to a form, a view, or another custom page on the click event of a button or by selecting an item in a gallery.


We have created a use case using an existing table named Report Cycles. This table contains the month and year for which the reports are created. Each record contains four modules, and each module has four statuses: “Not Started,” “In Progress,” “Exporting,” and “Completed.” Each module goes through these four statuses, and when it reaches the last status, “Completed,” it is marked as done.

The classic view may look like this:

Active Report Cycles

But with a custom page we can create this:

Custom Page

Major Components

Vertical Gallery

The Data source for the Gallery is the Report Cycles table in Dataverse. The Gallery contains two labels and two shapes, Rectangle and Arrow. To show the selected item in a different color than the default one, I replaced the default value for the Template Fill property with:

If(ThisItem.IsSelected,RGBA(246, 88, 16, 1),RGBA(0, 0, 0, 0))

The function returns the color RGBA(246, 88, 16, 1) if the current item in the gallery is selected; otherwise it returns RGBA(0, 0, 0, 0), which is fully transparent.
Similarly, the arrow shape’s color and the labels’ text change to white when the item is selected.
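A matching formula for the arrow’s Color property might look like the following sketch (the exact control names and colors come from the original app’s screenshots, so treat this as illustrative):

```powerfx
// Color property of the arrow shape: white when selected, orange otherwise.
If(ThisItem.IsSelected, Color.White, RGBA(246, 88, 16, 1))
```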

Change Color of Arrow

Colored Status

For each module, there is one label showing its status. The text color changes to red when it is “Not Started,” blue when it is “In Progress,” orange when it is “Exporting,” and green when it is “Completed.”

Colored Status

This scenario has been achieved by applying the below formula for the Color property of the Label.

IsMatch Function

The IsMatch function takes two parameters, a text value and a pattern, and checks whether the text matches the pattern. It returns true on a match and false otherwise.
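Put together, the label’s Color property can be a chain of IsMatch checks like this sketch (the Status column name is an assumption; the actual formula was shown in a screenshot):

```powerfx
// Color property of the module status label; "Status" is an assumed column name.
If(
    IsMatch(ThisItem.Status, "Not Started"), Color.Red,
    IsMatch(ThisItem.Status, "In Progress"), Color.Blue,
    IsMatch(ThisItem.Status, "Exporting"), Color.Orange,
    IsMatch(ThisItem.Status, "Completed"), Color.Green,
    Color.Black // fallback for any unexpected value
)
```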

Edit Button

The edit button helps navigate the user to the edit form for the selected item. This again is a handy feature of custom pages. In this use case, it opens the edit form for the chosen item in the Gallery.

Navigate Functionality

The Navigate function is used, which here accepts the selected item from the gallery as its target. The formula is set on the OnSelect property, which executes when the button is clicked.
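In a custom page, passing the gallery’s selected record to Navigate opens that record’s main form, so the OnSelect formula can be as short as this (the gallery name is an assumption):

```powerfx
// OnSelect of the Edit button: open the main form for the selected record.
Navigate(Gallery1.Selected)
```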

Status Indicator

This shape informs the user what percentage of the task has been completed. If we look closer, these are four different shapes, which become visible at different statuses. For example, the quarter-circle shape has the given formula on its Visible property: when the label equals “Not Started,” the shape will be visible.

Similarly, the other shapes (semicircle, three-quarter circle, and circle) become visible when the status is “In Progress,” “Exporting,” and “Completed,” respectively.
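Each shape’s Visible property simply compares the status text, along these lines (the column name is an assumption):

```powerfx
// Visible property of the quarter-circle shape:
// show it only while the module has not been started.
ThisItem.Status = "Not Started"
```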


Sometimes you may encounter this error which can be resolved by enabling third-party cookies for the site.

Resolve Error Enabling third party cookies

Dynamics 365 Error


This blog has explained the use of custom pages in Model-Driven App and its additional features, which were not there earlier. It also describes the pre-known canvas designer and its low code platform for custom pages. The blog covers a use case and its major components. Each component is explained with the formulas used and screenshots highlighting the details.


My last post explored how to build a Teams app using the SharePoint Framework (SPFx). To continue exploring Teams app development, in this post I will build an app using Power Platform for comparison. Many walk-throughs exist; I used this one. My steps appear below, along with my thoughts, to compare the two approaches.

First steps:

  1. Install Power Apps in your Teams instance.
  2. Open MS Teams (either the desktop or the web version) and then click the ellipsis at the bottom of the left column and enter “power” in the search bar to filter results.
  3. Click the Power Apps item to install it.

Installing Power Apps

A new icon appears in the left column of Teams: Power Apps. Click on it to open Power Apps. We will start building an app that will appear as a new tab in your chosen team.
Click the button New App.

Add New App

NOTE: If you don’t see the New App button, click the Start now button. The New app button will appear after you have created at least one.

Select the team for your app and then click Create. If you do not have any teams created yet, follow these instructions to add one: Create your first teams and channels – Microsoft Teams | Microsoft Docs.

Create the App and Select Team

Once your app has been initialized, you will be prompted to name it. For example, to match my earlier SPFx example app, I decided to create a Company Scheduler app, so I called it “CompanySchedule.” (I often create names with no spaces and then edit to add a space for readability.)

Name the App

You’ll be asked to choose whether to start with the layout or the data; pick either one because you can follow up with the other later. For today I decided to start with data.

Pick your start screen

Adding Data

This is an enormous difference from building with SPFx! Adding data to Power Apps for Teams is incredibly easy. Data can be added from numerous sources, such as SharePoint and spreadsheets, but the option to “Create A Table” is a seamless invitation to use Dataverse. Usually a costly data option in Power Apps, Dataverse is currently free in the Teams version. The implications are significant. For now, let us add a small table to get a sense of how it works.

Add a Simple Table to test

Create a Table

With our WorkShift table created, the screen to configure it appears. Click the plus to add columns. For example, we will add a column for the date of the employee’s shift.

Workshifts Column Customization

You can rename the columns if you would like; I go back and add spaces for readability. To edit the column name, right-click on the name to see this dialog:

Edit columns in the table

The same view where we edited the column names also allows us to immediately enter data.

Enter Data into table

In the Tree view on the left, “Screens” can be considered pages. The default screen is a convenient data edit view. Let’s add another one for a view-only display of the data. Click New screen:

New Screen Tree View

Any layout will work. I chose the “Header, main section, footer” screen.

Choose your layout

Here is our new screen. First, let us display our schedule table. To do this, choose Insert beside the plus icon in the left column, then type “table” in the search box to filter to the Table control.

Insert the table

Once you click on the Table control, you are prompted to choose a data source for it. I selected the WorkShifts table, readily displayed in the Select a data source dialog.

Select a Data Source

The data and column headings appear immediately. Next, we can decorate the screen a little. I chose colors to match my earlier Teams app sample: big purple. Select each control in the Tree view, then edit its colors and fonts using the properties panel. Add more controls such as labels or text boxes by selecting Insert in the left column under Tree view.

Edit colors and fonts

The Tree view shows the order in which the screens will appear in your team tab. Right-click Screen2 and move it above Screen1, so our display table will be the primary view after publishing the app. Time to publish! Click Publish to Teams in the upper right.

Publishing an App to Teams

Teams Publishing Prompt

Choose a team to publish to. Highlight the team, and then click the plus beside it to display the channels under it. Select the channel and then click Save and close.

Save and Close to Channel

End Result

Once publish is complete, navigate to the team you chose, and find the tab Company Schedule. It should display your table.

Company Schedule Created

Concluding Thoughts

How should we think when deciding between building Teams apps with SPFx and with Power Apps? Ultimately, one is not immediately better than the other. They are so different that they apply to very different development needs. Below, I’ve outlined a quick summary of the limitations and advantages of building with Power Apps:


  • Using Power Apps, an app created for one team can only display in a tab for that team. Using SPFx, we created an app that could be installed by the end user in any team and can appear in the left column as well as in a team tab.
  • Using Power Platform, the layout is restricted to predefined layouts, and formatting is limited. In SPFx, any HTML, decorated with custom CSS and driven by your favorite JavaScript framework, can be used to build anything.
  • The SPFx demo used only mock data because it’s decidedly more effort to connect to data from the code, and the work it takes to create the app alone was enough content for a blog post. This is the most valuable advantage of building with Power Apps: the overall effort is lower, including adding data.


  • Using Power Apps is vastly faster and easier.
  • With Power Apps, more people can jump in and create business apps without being developers.
  • In Power Apps, powerful data capability is available incredibly easily.

These are great choices to have, and understanding both will help you know which one to choose.

Establishing a Clear Cloud Adoption Strategy

As we hear a lot of buzz about cloud adoption, in this blog I want to share my thoughts on approaches and strategies for migrating on-premises workloads to public clouds. This blog predominantly covers the migration of infrastructure assets alongside applications. While cloud adoption can bring significant OpEx and CapEx savings, it requires meticulous planning, design, and architecture. Various aspects need careful consideration: network traffic inflow and outflow; Business Continuity Planning (BCP), including High Availability and Disaster Recovery (HA/DR); data replication within regions or geographies; and the one-time transfer of data to the cloud during migration.

There are many ways you can migrate your applications to the cloud. However, before the migration phase, it is essential to determine the current environment, dependencies, servers and applications, licenses, and much more.

Three Popular Migration Approaches

  • Rehost: Generally referred to as lift and shift or like-for-like migration, this approach lifts servers or applications from the current hosting environment and shifts them to infrastructure in the public cloud. It carries low risk in the cloud journey but may not leverage the full benefits of the cloud.
  • Replatform: Minor changes to the application and/or Infrastructure architecture during the migration. This approach leverages cloud benefits such as App Pools, Availability Sets, AZs, etc.
  • Refactor: Re-architect your existing application to leverage the features and services offered by the cloud environments. Refactor can become an expensive option but can avail the best possible benefits of the cloud. One can re-architect the application to become highly available and scalable. It is the most time-consuming among all options since it requires complete awareness of current architecture and building a new solution by leveraging cloud services.


Critical Factors to Consider

Vendor Lock-in

  • Vendor lock-in is the common term for a situation where an organization wishes to transfer its business away from one of its current vendors but cannot do so due to the projected cost, duration, or complexity of switching.
  • Considerations when choosing a cloud partner:
    • Create and Negotiate Your Exit Strategy
      • Always remember that the best time to create an exit strategy is before signing an initial service agreement.
    • Develop or design portable vendor-agnostic applications
      • Building portable applications can also help organizations avoid cloud vendor lock-in. Suppose you develop a business-critical application whose core functionality depends on a platform-specific feature like Azure’s data lake analytics or Amazon Lex. In that case, you’ll end up locked into that cloud service provider.
  • Multi-Cloud
    • This cloud computing model encompasses the usage of multiple cloud providers, choosing specific services from each, as per their requirements. This approach reduces the dependency on single cloud technology or cloud service provider. And thus, it offers avoidance of cloud vendor Lock-in and provides the best of multiple cloud service providers.

Business Continuity and Disaster Recovery

  • Business Continuity and Disaster Recovery (BCDR or BC/DR) is a set of processes and techniques used to help an organization recover from a disaster and continue or resume routine business operations. It is a broad term that combines the roles and functions of IT and business in the aftermath of a disaster.
  • High Availability Design: Recovery Point Objective (RPO) is the maximum amount of data loss, measured in time, that a business can tolerate after a system outage (in practice, a function of how frequently you take backups). Recovery Time Objective (RTO) is the amount of downtime a business can tolerate.
  • Fault Domains: A set of hardware components that share a single point of failure.
  • Update Domains:  A logical group of the underlying hardware that can undergo maintenance or be rebooted at the same time.
  • Availability Set: A logical grouping of VMs that allows Azure to understand how your application is built to provide for redundancy and availability.
  • Availability Zones: A logical data center in a region available for use by any AWS customer.

Network Costs

  • While on-premises, data transfer doesn’t cost a dime; data transit in the cloud does. Various cloud providers have multiple charges. This is one of the significant components that need to be factored in while moving to the cloud.
  • Ingress, or inflow traffic to the cloud, is not chargeable. Egress, or outflow traffic, is chargeable and applies to data flowing from the cloud to on-premises or from the cloud to the Internet.
  • While data transfer within the same availability zone is free for some cloud providers, it is chargeable across regions or geographies.

Security Risks and Compliance

Shared Resources in Public Cloud

  • For some clients, their applications or systems (VMs) shouldn’t be hosted on the same physical host as their competitors’. When opting for VMs, one doesn’t control which physical system the VMs are created on or migrated to.
    For this kind of compliance requirement, the client should opt for dedicated hosts.

Data Security During Transit and Rest

  • Most CSPs offer data encryption by default: at rest, in transit over the internet from the user to the cloud, and internally when data moves within the provider network, for example, between data centers, zones, and regions.
  • Data at rest – data stored in buckets/disks can also be secured by encrypting, and cloud providers offer various services such as managed keys and client-sourced keys, etc.

Backup and Restoration

  • While backing up data to a cloud location doesn’t incur network traffic costs, many CSPs charge for storage during the backup stage. During the recovery process, however, network costs do get charged.
  • While CSPs have many offerings for storing data at cloud locations, things to consider when deciding on a storage solution include the frequency of data access (hot/cold/archive, etc.)
  • Restoration of backed up data
  • Data archival solution for compliance requirements

Four Stages of Cloud Adoption: A Quick Glance

  1. Assessment and Design: The first stage has Asset Discovery, Asset Consolidation, Scope Definitions, Costing.
  2. Planning: Wave group creation, cutover planning, defining exit criteria, stakeholder identification, and notification channel creation.
  3. Cutover and Execution: Migration rollout, sign-off approvals, and rollback (if any).
  4. Analysis and Optimization: Baselining and performance analysis, Optimize the resource utilization by leveraging cloud-native tools.

Asset Discovery

  • The first and most crucial step in cloud adoption is tracing out the inventory of hardware and software assets and how they are interconnected: servers (physical or virtual, Linux or Windows); network gear like switches, routers, wireless devices, and voice devices; storage arrays; backup devices; and security devices such as firewalls, VPN gateways, etc.
  • Validate identified assets with available CMDB of the customer and create application affinity mapping by having workshops, surveys, interviews with application teams
  • Discover infrastructure workload footprint
  • Application to server grouping
  • High-level application dependency mapping
  • Gather as-is resource parameters

Application Views

Define Migration Scenarios

  • Assess various approaches for migration and the best fits for multiple scenarios
  • Proof of concept and benchmarking of cutover timelines
  • Migration strategy finalization for various scenarios, including 2-tier architecture, 3-tier architecture, and INFRA utility servers like AD, print servers, collaboration servers, etc.

Wave Group Creation

  • Finalize wave groups in line with Application dependency
  • Create cutover plans and finalize downtimes; rollback time also needs to be included
    • The cutover plan also needs to have test cases from the INFRA and App/Biz teams
  • Identification of stakeholders and communication modes

Migration Execution

  • Communication channels initiation and notification to all stakeholders
  • Initiate pre-migration activities:
    • Target environment building, NW connectivity establishment, and necessary firewall rules enablement and data replication
    • Migration tool setup
  • Cutover initiation and progress monitoring
  • Cutover to Target environment on public cloud
  • UAT by application team and sign off
  • Cutover and go-live
  • Rollback, in case of failures

Hand Over to Support Team

  • Heightened support for 48 hours by the Migration team
  • Handover of successful workloads on cloud environment to the operations team
  • Request support team to decommission source workloads at On-Premises after one week (As per accepted retention policy by customer)

Bringing Organizations Cost-Effective Solutions

Cloud adoption has become one of the critical strategies for many organizations to bring cost-effective solutions to their business community. Nowadays, many Cloud Service Providers (CSPs) and System Integrators (SIs) offer professional consulting services to facilitate the cloud adoption journey. It is recommended to decide which cloud strategy works for your organization, validate the abovementioned items, and monitor and optimize resource utilization.


AIS recently held an internal hackathon for Microsoft Power Platform to expose our team to the platform’s concepts and approaches through hands-on experience and to demonstrate the role Power Platform plays in modernizing legacy applications in the cloud.

As a premier Microsoft partner, AIS has quickly made Power Platform a core part of our holistic enterprise cloud transformation strategy. Over the last three years, our teams have helped enterprises across financial services, insurance, health and life sciences, and government leverage Power Platform for enterprise-grade application modernization.

We’re integrating Power Platform with Azure and Microsoft 365 for powerful legacy modernization capabilities, and throughout the project, we uncovered new lessons learned to share across project teams.

The Power Platform Market

There’s been a surge in demand for Low Code Modernization solutions in the last few years. Many organizations are looking to enable Line of Business (LOB) owners and teams to build productivity and collaboration solutions, offsetting the continued strain on IT teams.

Microsoft answered the call with Power Platform. Since its release, we’ve been working with product teams and industry leaders to develop adoption frameworks, accelerators, and solutions to help organizations get the most out of their Power Platform investments. The demand has grown such that we’re continuing to introduce new ways to build our Power Platform skills. Enter our hackathon team.

“Power platform is really a ‘work less do more,’ as it would even get the integrations done with legacy systems without writing a single line of code. This I have never experienced in my 13 years of experience, just amazing. The perspective of low-code-no-code is completely changed with this experience, it is an enterprise development tool that can connect your teams’ apps to outlook app to data verse to SQL. You name the technology; it has a connector to fit in.”
– Technology Lead at AIS, Kranthi Kiran Gullapalli

Meet the Hackathon Team

There was a lot of initial interest in learning the platform from our India-based team out of the AIS Hyderabad HQ. Over a dozen people committed weeks of their time inside and outside of regular working hours in the pursuit of the Power Platform experience.

Some of the participants included employees from our HUB team. The HUB is a group of employees fully dedicated to helping AIS project teams deliver successful cloud modernization projects.​ This team consolidates knowledge and experience to provide rapid research and guidance for cloud migration and modernization projects­­­. As app modernization using Power Platform work grows, we wanted to allow individuals to skill up on the platform.

But the HUB team members weren’t the only participants. These groups comprised learners and teachers across account teams. Teams were broken down into several pods with a leader to guide that team’s focus area for the project, including Dataverse data migration and integration, authentication, Power Apps Portals development, and more.

“MPP Hackathon” is where the idea of converting a monolithic application to a Power Apps Portal became a reality. Initially, some ideas flowed on reimagining a typical legacy app to Power Apps Portal, but the hackathon allowed us to experiment & try it. Hackathon helped me & my team to have hands-on exposure to the MPP environment. We worked with data modeling and got to know better on Dataverse (In-house storage in MPP). We took our first of many steps towards App Modernization with this hackathon as we are new to MPP. Microsoft Power Platform is a suite of business services that unifies our data. It provides a powerful way to analyze data, act on newly generated insights, deliver personalization, and automate business processes.”
– Jagrati Modi, Software Developer

A Mission to Innovate, Passion for Learning

AIS strives to provide ongoing learning opportunities for our employees. And we’re lucky to have a team that is so passionate about learning, innovation, and the continual pursuit of new skills.

We’ve developed programs like aisU to nurture a growing community of continued education, knowledge sharing, and technical excellence. We offer various opportunities to gain exposure to new technology and concepts, such as training boot camps, technology deep dives, and level-up sessions presented by consultants based on project experience.

Access to the methodologies, approaches, and lessons learned from these project teams helps employees gain the know-how and resources to consult on and deliver similar projects within their accounts successfully.

These opportunities aim to help AIS improve overall project delivery, presales support, and envisioning capabilities for better customer outcomes.

Up Next: Our Approach

Stay tuned for part two of this series as we dive deeper into the approach and execution of an end-to-end application modernization project leveraging Power Platform, API Integration, Dataverse, Azure DevOps, and more.

Until then, check out some of our open job opportunities. We’re always looking to add passionate technologists to our growing team. Introduce yourself and let’s talk about a future career at AIS.

This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. This blog begins by covering how to create a base Canvas App from an existing table in the Microsoft Dataverse. It reviews all the screens, forms, and changes necessary to make the user interface work as defined in the requirements. By the end of this blog, a fully functioning app will be ready for data entry.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals
  • You have completed Part One of this series

Creating the Canvas App

In Part One of this series, I defined our home inventory application, its database, and how its user interface (UI) would look. I also created the underlying tables for our application. With a data source in place, I can create a Canvas App connected to the Microsoft Dataverse. Creating a table-based canvas app saves time by generating the base screens. It is also valuable for reverse engineering to understand how to customize the app on your own.

When prompted, use the Items table as this is the focal point of the application.
Focus point of application

This will create a basic Canvas App; the main screen can search for items (there are none at this time, of course).
Main screen creating basic canvas app

Clicking the + opens a form to create a new record; at this time, only the Name column is provided.
Creating new record

Expand the EditScreen1 and Select the EditForm1.
Next, add the custom columns that were created in the Microsoft Dataverse to EditForm1.
Adding custom columns created in Dataverse to EditForm1

Pressing the Play button and then clicking + opens the edit form below. The only problem is there are no Item Types, Locations, or Manufacturers yet.

Editing Form with Item Types, Locations, and Manufacturers

Creating the Lookup Forms

Near the top left of the window, click on New screen. Choose Form as the type.
Power Apps Home to create Lookup Forms

On the newly created screen, click on the EditForm element created. On the right side of the screen, select Item Types for the Data source. Click on the LblAppName created and change the Title Property from [Title] to Item Types. Repeat this with Manufacturers and Locations.

Take a few minutes to rename the screen and its elements following the Power Apps Canvas App Accessibility Guidelines and the PowerApps Canvas App Coding Standards and Guidelines. This is important for many reasons; for example, screen readers will read these values aloud, and “EditScreen1” would not be valuable to the end-user. Following these guidelines, I have renamed all screens, forms, and controls within the app. For example, these are the names of the screens that I will be using:

Rename the Screen and Elements

Be sure to save if you have not already; click File and then Save (or CTRL+S).

Manufacturers Lookup Form

Expand the Manufacturer Edit Screen, select the frmManufacturerEdit, click Edit Fields, and then add the columns Support Phone and Support URL. Delete the Created-On column if desired.
Click the Advanced tab and, for the OnSuccess action, enter:

Navigate('Item Edit Screen', ScreenTransition.Fade)

Note: This tells the form to navigate back to the Item Edit Screen when this form is complete. For more information, see Back and Navigate functions in Power Apps.
Insert and edit the Manufacturer

Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmManufacturerEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Note: This tells the form to clear its columns and navigate back to the Item Edit Screen. For more information, see Reset function in Power Apps and Back and Navigate functions in Power Apps.

Creating columns to navigate back to the Item Edit Screen

If necessary, rearrange elements; the form should look something like this:
Rearrange Elements to customize form

Locations Lookup Form

Since the Location table only has the Name column, there is no need to add any columns to it; however, I need to repeat the same steps from the Manufacturers form.
Select the frmLocationEdit and update the OnSuccess action to:

      Navigate('Item Edit Screen',ScreenTransition.Fade)

Note: This tells the form to navigate back to the Item Edit Screen when this form is complete.
Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmLocationEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Item Type Lookup Form

Since the Item Type table only has the Name column, there is no need to add any columns to it; however, I need to repeat the same steps from the Manufacturers form.
Select the frmItemTypeEdit and update the OnSuccess action to:

      Navigate('Item Edit Screen',ScreenTransition.Fade)

Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmItemTypeEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Completing the Form

Select the frmItemEdit and update the OnSuccess action to:

      Navigate('Items Screen',ScreenTransition.Fade)

Select the cancel icon (X) and change the OnSelect function to:

     ResetForm(frmItemEdit);Navigate('Items Screen',ScreenTransition.Fade)

Select the Item Type field (make sure the field is selected, not the Card). Change the Width function to:

     (Parent.Width - 60)*.8

Repeat this for Manufacturer and Location fields.

Creating the Add Buttons

Now I will add the buttons that open the forms necessary to create a new Item Type, Location, or Manufacturer from the Item form. During this process, I will select the data card for Item Type, Location, and Manufacturer and add a button to each. Any property not mentioned will use the default value. Each of the three buttons shares these properties:

  • Width: (Parent.Width - 60)*.1
  • Text: “+”
  • Size: 48
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

Select the Item Type card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

 NewForm(frmItemTypeEdit);Navigate('Item Type Edit Screen', ScreenTransition.None)

For more information, see EditForm, NewForm, SubmitForm, ResetForm, and ViewForm functions in Power Apps.
Creating Popular navigation
Customize Padding Elements

Select the Manufacturer card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

NewForm(frmManufacturerEdit);Navigate('Manufacturer Edit Screen', ScreenTransition.None)

Select the Location card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

NewForm(frmLocationEdit);Navigate('Location Edit Screen', ScreenTransition.None)

The form should look something like this:
Creating custom forms

Items Screen

Expand the Items Screen and then select the galItems. Notice that the Layout property is Title, subtitle, and body. Now, click on Edit next to the Fields property and change the fields so that lblSerialNumber uses Serial Number and lblModel uses Model.
Expanding the Items Screen

Item Details Screen

The last thing that I need to do is complete the Item Details Screen to show a read-only version of the Item record.
Expand the Item Details Screen and select the frmItemDetails. Click on Edit Fields and add the custom fields created in Microsoft Dataverse.
Detailed Item Screen

Next, rearrange the fields to match our edit form. Delete the Created-On field if desired.
Customize Field to match Edit form

Click on the Date Purchased value, click Advanced, and Unlock. Change the Text value to:


This will hide the time, as it is not relevant. For more information, see DateValue, TimeValue, and DateTimeValue functions in Power Apps.
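The original formula is not preserved in this post. A hypothetical reconstruction that shows only the date portion, assuming the data card's Default property holds the date/time value, could be:

```
Text(Parent.Default, DateTimeFormat.ShortDate)
```

Text with the DateTimeFormat enumeration formats the value as a date only, dropping the time component.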
Adjusting Date Purchased capability

Testing the User Interface

Now, I can test out the implementation:

Next Steps

With our base UI in place, I can enter data; however, there are a couple of final touches that need to be completed. In part three of the series, I will be adding the ability to add item images and barcode scanning.

This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. It begins by explaining why we would want to create a home inventory system. It then walks the reader through the requirements of the system, including potential future enhancements, and explains the database model and how it translates into tables in the Microsoft Dataverse. This post concludes with a review of the user interface mockup.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals

Why Create a Home Inventory?

Several years ago, I started considering the number and value of the technical assets I had obtained over the years. I wondered what would happen if I were robbed, had a fire, or suffered some other disaster, and whether my insurance would cover all my technical assets. According to Consumer Reports:

“having a list or a visual reminder of your belongings can make a big difference in how much your homeowners’ insurance will pay, and thus how well you’ll recover financially.”

At the time, I purchased Quicken Home Inventory Manager, which was adequate; however, advancements in technology have since made it obsolete. I want to use my phone and its many capabilities for data entry instead of walking back and forth to the computer. With these things in mind, I set out to design my home inventory system using the Power Platform.


The requirements of the initial system are straightforward:

  • The system should include a phone app for data entry and a web interface for administration
  • Phone and Admin Apps should provide the ability to create, update and delete Items
    • Columns: Name, Description, Item Type, Manufacturer, Serial Number, Model Number, Location, Date Purchased, Approx. Value, and Image
    • Required columns: Name, Model Number, Item Type, Manufacturer, Location, and Image
    • Item Type will look up from the Item Types table
    • Manufacturer will look up from the Manufacturers table
    • Location will look up from the Locations table
    • Users should be able to add an Item Image
  • Phone and Admin Apps should provide the ability to create Item Types
    • Columns: Name
    • Required columns: Name
  • Phone and Admin Apps should provide the ability to create Manufacturers
    • Columns: Name, Support URL, and Support Phone
    • Required columns: Name
    • Support URL should be validated
    • Support Phone should be validated
  • Phone and Admin Apps should provide the ability to create Locations
    • Columns: Name
    • Required columns: Name
  • Admin App should provide the ability to create, update and delete Item Types
  • Admin App should provide the ability to create, update and delete Locations
  • Admin App should provide the ability to create, update and delete Manufacturers
  • The system should allow Admins to export the inventory

Future Enhancements

In addition to the requirements, there are some future enhancements or “nice to haves” that I can investigate adding to the system later, for example:

  • Bar code Scanner – for items with a bar code, it would be helpful to scan these automatically. This could be for serial numbers, model numbers, or even UPC for the entire item.
  • Photo of Receipt – it would be a nice feature to include a picture of the receipt for proof of purchase and returns.
  • AI – While this is a tall order, it would be neat to take a picture of an item and have the system recognize what kind of item it is, perhaps even determining its Item Type.

The Database Model

Using the requirements above, I can derive a simple database model. There is an Item table that contains most of the data. Each Item record may have 0 or more Image records. Each Item will also have a lookup to a Manufacturer (ex. Microsoft, Sony, etc.), a Location (ex. Master Bedroom, Living Room, etc.), and an ItemType (ex. Appliance, Networking, Computer, etc.).

Database Model

The Dataverse Tables

Using the database model above, I can create custom tables for the system in the Microsoft Dataverse. Due to requirements in the Microsoft Dataverse, I will need to add a Name column to the Image table; I will want to handle this name’s generation when I implement the Item Images part of the application. Below is the resulting data model summary.

Dataverse Table

User Interface Mockup

For the user interface, I will begin with the phone app and develop the administration app later; I will do this because I may discover challenges and new ideas as data gets added to the system. Additionally, the mockup below does not include the Item Image library, as I will cover it in a separate post. The user interface below begins with the Items screen. Here, users can search, sort, and scroll through items in the system. From the Items screen, a user can view or create a new Item. When a user views an item, they can delete or edit it. If the user creates or edits an item, they use the same screen; they can also create new Item Types and Manufacturers from this screen. On all screens, cancel, save, and delete actions return to the previous screen. You may notice that the Item Image is not included; this is because I will be adding it later in the series when I start using the phone. For now, I am focusing on the base implementation.

User Interface Mockup

Next Steps

With the requirements defined and tables created in the Microsoft Dataverse, I can now work on the User Interface. In part two of the series, I will create all the necessary screens for the phone app and begin data entry.

I am currently working as a development operation engineer at a client that maintains a large and complex infrastructure with an even more complicated development workflow. Cloud infrastructure had already been provisioned, and several development cycles had been completed when I joined the team a couple of years ago. As a result, I came into an environment that had already established a development workflow, with barely any infrastructure as code (IaC) practices in place.

What is unique about this client is that it maintains six separate web applications, each designated for a different area of the United States; the backends vary from application to application, but the front ends are mostly similar. To avoid a maintenance nightmare, we use code from one repo to deploy the main UI of all six applications; the same code is deployed to each application but configured to run differently through a master configuration file. Each web application is composed of several sub-applications, each with its own codebase.

One of my responsibilities is to spin up and maintain application stacks for new development teams that are working on a specific feature for this client. That feature may or may not be utilized by all six web applications. For this team to develop rapidly, they need a stable testing environment, equipped with continuous delivery, to do their work.

This “application stack” is comprised of:

  • 1 Azure App Service Plan
  • 1 App Service (up to 6)
  • 1 Redis Cache
  • 1 Storage Account

These are Microsoft Azure constructs and are not essential to understand in the context of this post. Keep in mind that 6 separate applications utilize a set of shared resources.

Since the DevOps team at this particular client is small, we were only able to create a script to deploy an application stack for a specific development effort; we did not get the chance to implement any strategies for maintaining these stacks once they were deployed. As a result, applying configurations across the 25-30 application stacks that we maintain has turned out to be a logistical nightmare.

Model 1 Main App Repo

The diagram above represents a high-level overview of a single application stack that we need to maintain; the diagram is scaled down from 6 applications to 3 applications for clarity.

The current script we use to provision application stacks does the following:

  1. Creates all Azure resources
  2. Performs KeyVault configuration
    1. Enables managed identity on all app services
    2. Adds access policy to the key vault for all app services
    3. Adds key vault references for secret app settings on all app services
  3. Creates 1 Azure DevOps release pipeline to deploy code to all app services; this is done by cloning a base template for a release pipeline that exists before the script is run.

The Problem

Although the deployment script that is currently in use saves time, it falls short in a few ways:

  1. If we want to make configuration changes across all the application stacks, we have to make edits to the deployment script and rerun it on each application stack. This process is entirely too manual and can be improved.
  2. If we want to change the mechanics of how the underlying deployment pipeline functions across all the application stacks, we have to make edits on each deployment pipeline that is tied to a given application stack. This is too tedious and leaves lots of room for error; we initially mitigated some of this by utilizing Azure DevOps task groups.
  3. Configuration drift is widely prevalent; because we do not have an easy way to maintain all of the application environments across the board, minor configuration changes during development are difficult to track and often fail to propagate to our main application stacks.

The Solution: TEMPLATE YOUR LIFE

Azure YAML Templates

This client is relatively young in terms of their cloud journey, especially with Azure and Azure DevOps. They currently rely heavily on the Release Pipeline user interface within Azure DevOps to deploy code to their applications. In recent years, Azure has been subtly urging teams to utilize multi-stage YAML templates instead of the Release Pipeline user interface, bringing an “infrastructure as code” mindset to the continuous delivery process. With this mindset, there is no difference between “Build” pipelines and “Release” pipelines; the only construct is a pipeline, within which you can perform any task (build, test, deploy, etc.).
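To illustrate this mindset, here is a minimal multi-stage pipeline sketch (the stage names and echo steps are illustrative, not from the client's pipelines) in which build and deploy are simply stages of the same pipeline:

```yaml
# one pipeline, two stages: no separate "build" vs "release" constructs
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "build and test the application here"

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: deploy
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "deploy the build artifacts here"
```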

I encourage you to read more about Azure DevOps YAML templates. I’ve included some relevant links below:

Azure DevOps YAML Templates
Azure DevOps Multistage YAML UI Experience
YAML Schema

Given the problem statement above, there is a clear need to develop a framework and process for maintaining the cloud infrastructure and its respective continuous delivery process, one that is easy to manage and through which changes propagate easily.

This is where Azure DevOps YAML templates become useful.

All you need to create a pipeline in Azure DevOps is a YAML file, which can exist in any code repository. Once this file is imported into a code repository, you can use the Azure DevOps UI to create a pipeline from it. While you can run the pipeline from the Azure DevOps UI, the pipeline's changes are maintained and versioned within the YAML file itself, just as you would do with any application code.

This YAML file can inherit and reference another YAML file that exists in other code repositories.
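As a sketch of this cross-repository referencing (the project, repository, and file names below are illustrative assumptions, not the client's actual names), a pipeline in one repository can declare a second repository as a resource and pull templates from it:

```yaml
# azure-pipelines.yml in an application repository
resources:
  repositories:
    - repository: templates          # alias used in template references below
      type: git
      name: MyProject/template-repo  # a different repository in the same organization

steps:
# this step template lives in template-repo, not in this repository
- template: steps/build.yaml@templates
```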

Template Repository

I am currently developing a code repository that will contain:

  1. Azure ARM templates to version app service configurations (including virtual/sub applications)
  2. Deployment script to deploy ARM templates mentioned above
  3. Azure DevOps YAML files that will function as “release pipelines”:
    1. Create / Refresh shared resources
    2. Create / Refresh app services
    3. Deploy code to app services

When a new workstream or development effort begins, all they need to do is create a simple YAML file that extends from the main (release.yaml) template file in the repo mentioned above (see azurepipeline.yaml below). Once that is done, the first time the development team pushes code to a specified branch, they will be equipped with a new environment with their code changes.

Hierarchy of the Templates

The diagram above represents the hierarchy of the templates in the code repository mentioned above. You can see there are only 7 YAML template files to maintain. The azurepipeline.yaml file inherits from these template files. This helps address the daunting task of maintaining 25+ release pipelines; changes made to any of the template files will propagate to all azurepipeline.yaml files that inherit from the release.yaml file.

The idea is that the azurepipeline.yaml file can be imported into any repo or branch. This file is relatively simple:


# run on a schedule instead of utilizing ci triggers
name: $(Date:yyyyMMdd)$(Rev:.r)
trigger: none

extends:
  # all stages, and the pipeline artifact list (previously source
  # artifacts in the release pipeline), come from the base template
  template: release.yaml@templates

The release.yaml file that the azurepipeline.yaml file extends from looks similar to the one below:


resources:
  # script and template references
  repositories:
    - repository: templates
      type: git
      name: template-repo
  # pipeline artifacts (previously source artifacts in a release pipeline)
  pipelines:
    - pipeline: main
      project: ExampleProject
      source: YAML\main
      branch: stage
    - pipeline: subapp1
      project: ExampleProject
      source: YAML\subapp1
      branch: stage
    - pipeline: subapp2
      project: ExampleProject
      source: YAML\subapp2
      branch: stage

stages:
- stage: SHARED
  displayName: refresh shared resources
  dependsOn: []
  jobs:
  - job: refresh_shared_resources
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: templates/update-infrastructure.yaml@templates
      parameters:
        sharedOnly: true

- stage: APP1
  displayName: app1
  dependsOn: ['SHARED']
  variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app1'
  jobs:
  - job: REFRESH_APP1
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: templates/update-infrastructure.yaml@templates
      parameters:
        sharedOnly: false
        appName: app1
  - template: templates/app1.yaml@templates

App stages are created for each app (not shown).

The app template files referenced by the release.yaml file looks similar to the file below:


jobs:
- deployment: app1
  dependsOn: REFRESH_APP1
  displayName: "DEPLOY APP1"
  environment: stage
  pool:
    vmImage: windows-latest
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - template: configure-app.yaml@templates
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: false
            appName: 'ui'
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: true
            appName: 'subapp1'
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: true
            appName: 'subapp2'

Take note of the different steps used to deploy an application: a configuration step, followed by a deployment step for each sub-application, with each deployment step utilizing the same Azure YAML template file with different parameters.

The release.yaml file results in a multi-stage YAML pipeline that looks like the one below:

Release.yaml file results


Resources

The resource section of the template defines what resources your pipeline will use; this is analogous to the artifact section in the release pipeline UI view. You can only have the resources section declared once in your series of templates for a given pipeline. In the example above, it is defined in the release.yaml file, the template that other templates extend from.

In this example, the resource section references the template repository itself and other build pipelines that produce code artifacts that we will need to deploy to our applications.

Defining the resources section in the base template (release.yaml) allows us to abstract the choice of artifact sources away from the end-user. This is advantageous if we want to add more sub-applications to our release pipeline; we would only need to change the resources section of the release.yaml file (the changes will propagate to all inherited/extended pipelines).

At the client I work with, this is problematic. In the solution above, all pipelines that extend from release.yaml (which contains the resources section) are limited to the specific artifacts, AND the branches they are set to pull from, defined in the base template (release.yaml). We consistently have project teams that need sub-application artifacts from the specific branches their development work exists on. To solve this, we moved the resources section into the extended template (azurepipeline.yaml). However, this isn't optimal because we would still need to update every extended template if we wanted to add to the resources section across all application stacks.

As far as I know, there is no way to use pipeline variables or template expressions to dynamically determine what resource artifacts are needed; you must either keep the resources section in the base template or define it within each extended template.

Dynamic Insertion

Currently, we must maintain a manifest that describes the relationship between each application and its respective sub-applications. For example, app1 could have sub-applications subapp1 and subapp2, and app2 could have sub-applications subapp1 and subapp3 (or any combination of sub-applications). We utilize a JSON file that defines the mappings between application and sub-application. This JSON file is parsed in the deployment script to ensure the correct sub-applications exist before code deployment; in Azure, the sub-application must exist before you push code to it. As a result, we also need to maintain the sub-applications in each of the different YAML step templates for each application. At this point, I am unaware of an elegant way to iterate over an Azure DevOps pipeline variable to create steps dynamically.
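The manifest's schema is not shown in this post. A hypothetical version of such a JSON mapping, and the kind of iteration the deployment script performs over it, might look like this (the app and subapp names are illustrative):

```python
import json

# hypothetical manifest: each application mapped to its sub-applications
manifest_json = """
{
    "app1": ["subapp1", "subapp2"],
    "app2": ["subapp1", "subapp3"]
}
"""

manifest = json.loads(manifest_json)

def subapps_for(app: str) -> list:
    """Return the sub-applications that must exist before deploying code to `app`."""
    return manifest.get(app, [])

# before deployment, the script would ensure each sub-application exists
for app, subapps in manifest.items():
    print(f"{app}: ensure sub-applications exist -> {', '.join(subapps)}")
```

A structure like this keeps the app-to-subapp relationships in one place, even though (as noted above) the YAML step templates still have to repeat them.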

Variable Templates

Template expressions can be applied to variables, which was extremely useful for the problem I was trying to solve. Each stage in the pipeline described above is for a particular application with a distinct name, and this name determines several properties of its respective pipeline stage. For example, the app1 stage uses a site name built from the convention $(tier)-$(region)-$(app)-app1, and the app2 stage uses the same convention with app2. Since both site names follow the same naming convention, this is a great use case for a variable template.

Here is the variable template I used:


parameters:
  appName: ""
  deployToSlot: false

variables:
  siteName: "$(tier)-$(region)-$(app)-${{ parameters.appName }}"
  ${{ if eq(parameters.deployToSlot, true) }}:
    # variables for slot deployments (not shown in this post)
  ${{ if eq(parameters.deployToSlot, false) }}:
    corsOrigin: "https://${{ parameters.appName }}"
  appName: ${{ parameters.appName }}
  template: "$(System.DefaultWorkingDirectory)/arm_templates/azuredeploy.json"
  virtualApps: "$(System.DefaultWorkingDirectory)/manifest.json"

You can see that I’ve included a parameters section that takes in appName as a parameter. In the release.yaml file, this parameter is applied with:

  variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app1'

This allows us to cut down on repeated code by using an extendable template for the variables needed in each deployment stage. It is essential to understand how variables and parameters work in Azure Pipelines and the different ways they can be expressed in your YAML.

Check out these resources:
YAML Variables Documentation
YAML Parameters Documentation
Developer Community Forum Post


Development operations decisions should be considered in architecture design. In the scenario described above, the architecture I inherited did not foster the most efficient development and deployment workflow. For example, the decision to have sub-applications exist on a single app service, instead of assigning each to its own app service, limited us to deploying the individual applications in series, as you can only run one deployment at a time to a given app service.

Infrastructure as code implemented in the continuous integration and delivery process can be equally as important as implementing it for your existing infrastructure.

Suppose you are working on a project that requires a complicated development lifecycle with multiple teams needing distinct environments. In this case, it becomes increasingly important to formulate a strategy around maintaining the various environments. Environment creation and code deployment should be as automated as possible. Applying configurations across all environments should be an easy task.

The Dockerfile development workflow is fantastic if you are familiar with the documentation and have spent a lot of time troubleshooting. However, if you are writing your first Dockerfile, it may be challenging to identify and debug certain cases.

During my first foray into the world of containers, I was containerizing a React application. As I got comfortable copying the correct files into the image and making sure I was running things correctly, I ran the “docker build .” command multiple times. Suddenly, I noticed that my computer was warning me that it was running out of storage! In this article, I will discuss the concept of dangling images created while iterating on your Dockerfile, so we can finally find out what is taking up so much space on your machine!

What is a Dockerfile?

To quickly recap, a Dockerfile allows you to customize any image to fit your needs. You can add packages, run servers; the possibilities are endless! You start with a base image, e.g., Microsoft Container Registry's official .NET Core 3.1 image, then go through various steps described in a text document that is parsed by the Docker Engine. Eventually, you have a container image that runs your application. The Docker daemon reads the Dockerfile and performs whatever tasks you have described in it. Each step in the Dockerfile is run in an intermediate container. Intermediate containers allow Docker to cache very efficiently, as the layers they produce make up your final image.
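As an illustration of these layered steps, here is a generic Dockerfile sketch for a React app; the base image, ports, and commands are common conventions assumed for illustration, not the exact file shown below:

```dockerfile
# each instruction below runs in an intermediate container,
# and its result becomes a cached layer of the final image
FROM node:14

WORKDIR /app

# copy the dependency manifests first, so this layer stays
# cached until package.json changes
COPY package*.json ./
RUN npm install

# copy the application source and build it
COPY . .
RUN npm run build

EXPOSE 3000
CMD ["npm", "start"]
```

Ordering the COPY instructions this way means a source-only change does not invalidate the cached `npm install` layer.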

Here is the Dockerfile I will be running, and the application I am using is the simple example found on the reactjs site.

Dockerfile example

Inside my console, I can run “docker build .” and everything will run completely fine, as shown here:

removing intermediate containers

You will notice that the image I have successfully built is identified only by a GUID, because I did not provide the docker build step with a tag (the -t flag after the docker build command). I ran “docker build .” one more time, and now I have two images, each 1.15 GB in size.


However, this isn’t the full story: the intermediate images are also saved along with the images themselves. You can see them with the “docker images -a” command.

Docker images -a

As you can see here, I have occupied roughly 8GB of storage building these images within approximately the past fifteen minutes of development. Over a long project, for example a microservices architecture running on containers, it is possible to consume a significant amount of storage. The dangling images themselves can be listed with the command “docker images -f dangling=true”.

Docker images prune

In this case, since I skipped the -t flag initially, my images are dangling images: they are not associated with any repository or tag.

Finally, to clean up my dangling images, all I need to run is “docker image prune”.
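Putting it all together, a typical inspection-and-cleanup session (assuming Docker is installed) looks like this:

```shell
# List every image, including intermediate layers
docker images -a

# List only the dangling images (untagged, not associated with a repository)
docker images -f dangling=true

# Remove all dangling images; add -f to skip the confirmation prompt
docker image prune

# Broader cleanup: also removes stopped containers and unused networks
docker system prune
```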

Docker images

Developing with Dockerfiles can be difficult, and it can be daunting to step into the world of containers. Remember to prune often, and make sure you tag your images!

Microsoft Power Platform: Application Development Platform for General Purpose Business Apps

In recent years, with the transition to the cloud, the SharePoint team’s recommendation has been to move custom functionality “down” to the client computer or “over” to a host outside of SharePoint. This change has had a direct impact on enterprises that have long viewed SharePoint as an application development platform.

Today’s best practices show that a grouping of applications, such as case management apps, inventory or issue tracking, and fleet management (depicted in the dotted area of the diagram below) is no longer best suited to be built on top of SharePoint. This is where Microsoft Power Platform comes in for cross-platform app development.

Microsoft Power Platform as an app development platform

What is the Microsoft Power Platform (MPP)?

Microsoft Power Platform (or MPP) can be seen as the aPaaS layer that nicely complements the mature Azure IaaS and PaaS offerings. Here are a few reasons we’re excited about MPP:

  • It offers a low-code/no code solution for rapid application development
  • The Power Apps component supports cross-platform app development for mobile and responsive web application solutions
  • Flow within MPP also offers a workflow and rules capability for implementing business processes
  • The Common Data Service (CDS) offers a service for storing business objects

SharePoint as an App Development Platform

On January 30, 2007, we released a paper called “SharePoint as an Application Development Platform” to coincide with the release of Microsoft Office SharePoint Server 2007 or MOSS. We didn’t have the slightest expectation that this paper would be downloaded over half a million times from our website.

Based on the broad themes outlined in this paper, we went on to develop several enterprise-grade applications on the SharePoint platform for commercial as well as public sector enterprises. Of course, anyone who participated in SharePoint development in 2007 and onwards can vouch for the excitement around the amazing adoption of SharePoint.

Features, Add-Ins or “Apps,” and More

A few years later, SharePoint 2010 was announced with even more features that aligned with the application development platform theme. In 2013, the SharePoint team went on to add the notion of apps (now called add-ins) – we even wrote a book on SharePoint Apps. The most recent version of SharePoint is 2019 and it continues the tradition of adding new development capabilities.

All the Ingredients for Unprecedented Success

If you look back, it is easy to see why SharePoint was so successful. SharePoint was probably the first platform that balanced self-service capabilities with IT governance. Even though the underlying SharePoint infrastructure was governed by IT, business users could provision websites, document libraries, lists, and more in a self-service manner.

Compare this to the alternative at that time, which was to develop an ASP.NET application from scratch and then worry about operationalizing it, including backup, recovery, high availability, etc. Not to mention the content and data silo that may result from yet another application being added to the portfolio.

Furthermore, the “Out of the Box” (OOTB) SharePoint applications constructs including lists, libraries, sites, web parts, structured and unstructured content, granular permissions, and workflows allowed developers to build applications in a productive manner.

With all these ingredients for success, SharePoint went on to become an enterprise platform of choice with close to 200 million licenses sold and, in the process, creating a ten-billion-dollar economy.

By 2013, Signs of Strain Emerged for SharePoint as an App Development Platform

With the transition to the cloud and lessons learned from early design choices, weaknesses in SharePoint as an app development platform started to show. Here are some examples:

  • Limitations around structured data – Storing large lists has always been challenging, despite improvements over the years to increase scalability targets for lists. Combine the scalability challenges with the query limitations, and it makes for a less-than-ideal construct for general-purpose business application development.
  • Isolation limitations – SharePoint was not designed with isolation models for custom code. As a result, SharePoint farm solutions that run as full-trust solutions on the server side have fallen out of favor. The introduction of sandbox mode didn’t help, since isolation/multi-tenancy was not baked into the original SharePoint design. The current recommendation is to build customizations as add-ins that run on the client side or on an external system.
  • External data integration challenges – The Business Data Catalog (BDC) was designed to bring data from external systems into SharePoint, and its successor, the Business Connectivity Services (BCS) extensibility model, was designed to allow an ecosystem of third-party and custom connectors to be built. But BCS never gained significant adoption and had limited third-party support. Furthermore, BCS is restricted in the online model.
  • Workflow limitations – Workflows inside SharePoint are based on Windows Workflow Foundation (WF). WF was designed before REST and HTTP became the lingua franca of integration over the web. As a result, even though SharePoint-based workflows work well for document-centric workflows like approvals, they are limited when it comes to integrating with external systems. Additionally, the combination of WF and SharePoint has not been the easiest thing to debug.
  • Lack of native mobile support – SharePoint was not designed for mobile experiences from the ground up. Over the years, SharePoint has improved the mobile experience through the support for device channels. Device channels allow for selection of different master pages and style sheets based on the target device. But device support is limited to publishing pages and even those require non-trivial work to get right. In a mobile-first world, a platform built with mobile in mind is going to be the ideal cross-platform app development tool for enterprises.
  • Access Services (and other similar services) never took off – Access databases are the quintessential example of DIY Line of Business (LOB) apps. So when SharePoint 2010 introduced a capability to publish Access databases to SharePoint, it sought to offer the best of both worlds: self-service combined with governance (the latter being the bane of Access databases). However, that goal proved too good to be true, and Access Services never caught on; it is now being deprecated.
  • Development “cliffs” – SharePoint was supposed to enable business users to build their own customizations through tools like Visio and SharePoint Designer. The idea was that business users would build customizations themselves using SharePoint Designer, and if they ran into a limitation (the “cliff”), they could export their artifacts into a professional development tool like Visual Studio. In reality, this dichotomy of tools never worked, and you almost always had to start over.
  • State of the art in low-code / no-code development – If you look at leading high-productivity application development platforms, the state of the art is a declarative, model-driven application approach. In other words, using a drag-and-drop UI, a user generates a metadata-based configuration that describes the application, the flow of application pages, etc. At runtime, the underlying platform interprets this configuration and binds the actions to the built-in database. SharePoint obviously has a rich history of offering no-code solutions, but it is not based on a consistent and common data model and scripting language.
  • Monolith versus micro-services – In many ways, SharePoint has become a “monolith” with tons of features packed into one product — content management, records management, business process, media streaming, app pages — you name it. Like all monoliths, it may make sense to break the functionality into “micro” services.

Note: Despite all the challenges listed above, SharePoint as collaboration software continues to grow. In fact, it’s stronger than ever, especially when it comes to building collaboration-centric apps and solutions using the SharePoint Framework and add-ins.

Just visit the thriving open source-based development centered around SharePoint Development to see for yourself.

Enter the Microsoft Power Platform (MPP)

The below diagram depicts a high-level view of the Microsoft Power Platform (MPP).

Microsoft Power Platform Infrastructure Overview

The “UI” Layer: Power Apps, Power BI, and Flow

At the highest level, you have the Power Apps, Power BI, and Flow tools.

Power Apps is a low-code platform as a service (PaaS) solution that allows for business app development with very little code. These apps can be built with drag-and-drop UI elements that work across mobile, tablet, and web form factors. In addition to the visual elements, there’s an Excel-like language called the Power Apps expression language that’s designed for implementing lightweight business logic and binding visual controls to data. Since Power Apps comes with a player for Android and iOS devices, your apps don’t need to be published to or downloaded from app stores.

Additionally, admin functions like publication, versioning, and deployment environments are baked into the Power Apps service. The PaaS solution can be used to build two types of apps:

  1. Canvas apps – as the name suggests, these allow you to start with a canvas and build a highly-tailored app.
  2. Model-driven apps – also as the name suggests, these allow you to auto-generate an app based on a “model” – business and core processes.

You’re likely already familiar with Power BI. It’s a business analytics as a service offering that allows you to create rich, interactive visualizations of your data sources.

Flow is a Platform as a Service (PaaS) capability that allows you to quickly implement business workflows that connect various apps and services.

Working together, Power Apps, Flow, and Power BI allow business users to easily and seamlessly build the UI, business process, and BI visualizations of a cross-platform, responsive business application. These services are integrated to make the experience even better. As an example, you can embed a canvas Power App inside Power BI or vice versa.

The “Datastore and Business Rules” Layer

The Common Data Service (CDS) allows you to securely store and manage data used by business applications. In contrast to a database as a service offering like Azure SQL Database, think of CDS as “business objects as a service”. Azure SQL Database removes the need to manage the physical aspects of the database, but as a consumer of Azure SQL, you’re still required to own the logical aspects of the data, such as schema, indexing and query optimization, and permissions.

In CDS all you do is define your business entities and their relationships. All logical and physical aspects of the database are managed for you. In addition, you get auditing, field level security, OData API, and rich metadata for free.

Finally, CDS offers a place to host “server-side” business rules in the form of actions and workflows. It’s important to note that CDS is powered by Dynamics CRM under the covers (see [1] for more information on this). This means that any skills and assets that your team has around Dynamics will seamlessly transition to CDS.

“External Systems Connector” Layer

Power Apps comes with a large collection of connectors that allow you to connect to a wide array of third-party applications and services (like Salesforce and Workday) and bind the data to Power Apps visual controls. In addition, you can also connect to a custom app or service via Azure API Management and Azure Functions.

How Can MPP Alleviate SharePoint Development Challenges?

MPP is a platform designed from the ground up for building business apps. Here’s how it can help alleviate the challenges users may experience with SharePoint as an app development platform.

| Challenge | SharePoint as a dev platform | Power Platform |
| --- | --- | --- |
| Structured data | Limits on items, throttling, and queries | Backed by a fully relational model |
| Isolation | No server-side isolation model; server-based farm solutions are discouraged | Each tenant is isolated and can run custom code |
| External data integration | BCS, with limited third-party support | 230+ connectors |
| Workflow | Document-centric workflows | HTTP/REST-based, scalable workflow constructs that can leverage OOTB connectors |
| Mobile support | Limited support in the form of device channels | Designed from the ground up for mobile support |
| Development “cliffs” | Hard to transition from citizen-developer to pro-developer tools | A single tool for citizen developers and pro developers; pro developers can use the extensibility to call Azure Functions, Logic Apps, etc. for richer, more complex functionality |
| Low-code development | No common data model or scripting language | Designed from the ground up as a low-code development environment |
| Monolith vs. microservices | Monolith | A collection of “micro” services: Power Apps, Flow, Power BI, CDS |

Comparing Reference Architectures

The following diagram compares the SharePoint and MPP reference architectures.

SharePoint and MPP Reference Architectures Compared

Power Platform Components Inside SharePoint?

It’s noteworthy that advancements in MPP are also made available in SharePoint. For example, “standalone” canvas Power Apps and Flow integrate directly into SharePoint, as shown in the examples below.

In many ways, this development represents a realignment of SharePoint’s place in the application development space. We believe that MPP is the new “hub”, while SharePoint (and other Office 365 components) represent “spokes” in this model.

Power Apps as a “custom form” for a SharePoint list item.

PowerApps as a custom form for a SharePoint list item example

Flow powering a document-centric workflow

Flow powering a document-centric workflow example

Consider MPP for Your Next Business App Development Project

The Microsoft Power Platform is an aPaaS offering that’s designed from the ground up as a general-purpose business application development platform, with native mobile support, a first-class low-code environment, business data as a service, a lightweight workflow engine, and rich business analytics all in one. We believe that a category of applications that were previously built on SharePoint will benefit from moving to the Microsoft Power Platform.

[1] xRM with Dynamics CRM

xRM-style applications have been built on Dynamics CRM for years. Unfortunately, the licensing story to support this model was not ideal. For example, customers could not license a “base” version without paying for the sales and marketing functionality built out of the box (OOB).

That’s all changing with CDS. Power Apps licenses, such as P1 and P2, give customers access to what’s referred to as “Application Common” or Base instance of CDS.

CDS Apps Instance vs Dynamics Instance

Learn More About MPP & Your App Development Platform Options

The road to general-purpose business application development can be challenging to navigate with all of the tools and platforms available, as well as the unique pros and cons of each option. Not sure where to start? Start with a conversation!

See if MPP is a good fit for your org’s app development needs. Contact your partner at AIS today!