Microsoft Azure Government DC is a group created for anyone in the IT world modernizing government, with the goal of bringing real-world lessons to innovators in government. AIS has supported and presented at these events since the group had just 5 members; it is now nearing 4,000 members. In March, we presented on Authority to Operate (ATO) and Compliance in Azure Gov. Check out the recording and overview below.

Here’s What You Missed

AIS Cloud Security and Compliance Solutions Architect Bryan McGill presented at the latest Azure Gov Meetup, demonstrating our repeatable ATO processes and secure, compliant cloud solutions in Azure and Azure Government for faster time to value.

Watch the Full Session:

ATO Session Recap

Bryan begins the session by explaining ATO and the six steps of the Risk Management Framework (RMF) process:

  • Categorize Information System
  • Select Security Controls
  • Implement Security Solution
  • Assess Security Controls
  • Authorize System
  • Monitor Security Controls

Challenges when Adopting ATO

As the session continues, Bryan talks about one of the most significant advantages of a cloud solution: shared responsibility with Azure and Cloud Service Providers (CSPs). The more you leverage Microsoft tooling and your CSP relationship, the better the outcomes you can expect. These include:

  • Microsoft Azure Inheritance (between 20% and 50% of all security controls could be inherited).
  • Documentation templates, pre-mapped to control implementation statements, so you spend less time building required documentation.
  • Pre-crafted security control responses mapped to documentation.
  • Azure services like Sentinel, Security Center, Log Analytics, Monitoring, and Azure Active Directory.
  • Tools like Azure Blueprints and Azure Policy, written as Infrastructure as Code before an environment is set up, to ensure compliant, repeatable, and secure cloud solutions.

Our Approach

To round out the presentation, Bryan presents the AIS approach to ATO as a Service (ATOaaS). ATOaaS provides standardized ATO documentation and Blueprints to government customers. At AIS, we help you take a step back, look at the requirements you need to meet, and determine which Azure services or other tools can meet them, offering templates and support for documentation.

Our ATO services deliver efficiency gains without sacrificing security and compliance, increasing your speed of deployment so you can start using cloud-native features and services. AIS can help you drive outcomes to include mission effectiveness, better security, agility and flexibility, operational efficiencies, and faster time to value.

Our ATOaaS Approach contains three engagements:

  • Kickstart Workshop – Targeted workshops focused on achieving audit compliance in Azure.
  • Consulting Services – Azure Compliance Advisory, Security Gap Analysis, and recommendations for Audit Readiness.
  • ATOaaS: Consult and Implement – Fully managed Control Implementation, Testing, and Compliance Documentation.

Struggling with the ATO process? Reach out to AIS to figure out which engagement option is best for your team to get cloud accreditation and begin migrating your workloads to a secure, compliant cloud environment.

AIS: Your ATO & Cloud Transformation Partner

We help government organizations create cohesive multi-cloud strategies, bringing the expertise you require for successful cloud adoption, modernization, data intelligence, and beyond. As the first company to achieve an Authority to Operate (ATO) at Impact Levels (IL) 5 and 6, and the first to establish cloud environments at those levels, you can be confident you're in good hands. We've been working with Azure for 12+ years and will have you well on your way to realizing the best the cloud can offer.

Join us for future virtual meetings of the Microsoft Azure Government User Community: https://www.meetup.com/dcazuregov/

This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. This post builds on the first two parts by adding smartphone camera capabilities to the interface. It starts by covering how to add an Image field to the Item form. Next, it illustrates how to implement barcode scanners in the app. Finally, the capabilities are demonstrated with screen captures from the smartphone.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You know software development fundamentals
  • You have completed Part One of this series
  • You have completed Part Two of this series

Adding the Item Image to the User Interface

When we created the Item table in the Microsoft Dataverse, we included an Image column. Now that we are ready to start using the phone, we can add this to the interface.

Expand the Item Edit Screen and Select the frmItemEdit. Click Edit Fields and select the Image field.

Item Edit Screen for frmItemEdit

Reposition the Image field on the form if necessary.
Reposition the Image field

Adding the Bar Code Scanner to the User Interface

A few updates to the edit form are needed to add the barcode scanners to the interface.

Model Number field

Select the Model Number field (make sure the field is selected, not the Card). Change the Width function to:

(Parent.Width - 60)*.8

Set the Default to:

If(IsBlank(modelScanValue),Parent.Default,modelScanValue)

Note: this sets the field's value to modelScanValue if it is not blank; otherwise, it uses the field's default value. The formula will show an error until the steps below are complete. For more information, see If and Switch functions in Power Apps and Blank, Coalesce, IsBlank, and IsEmpty functions in Power Apps.

Serial Number field

Select the Serial Number field (make sure the field is selected, not the Card). Change the Width function to:

(Parent.Width - 60)*.8

Set the Default to:

If(IsBlank(serialScanValue),Parent.Default,serialScanValue)

Cancel Button

Select the cancel icon (X) and change the OnSelect function to:

ResetForm(ItemEditForm);Navigate(ItemsScreen,ScreenTransition.Fade);Set(modelScanValue,"");Set(serialScanValue,""); 

Barcode Scanner Options

Select the Model Number Card, click on the + button on the far left, expand Media, and select Barcode Scanner.
Expand Media and select Barcode Scanner

Select the new button and set the following properties:

  • Text: “|||” (Note: I chose three pipes because it kind of resembles a barcode and is small enough to fit)
  • Width: (Parent.Width - 60)*.1
  • Size: 20
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

OnScan:

Set(modelScanValue, barcodeModelNumber.Value)

Note: this sets the variable modelScanValue to the scanned value. For more information, see the Set function in Power Apps.

Rename the barcode scanner to barcodeModelNumber.
Select the Serial Number Card, click on the + button on the far left, expand Media and select Barcode Scanner.

Select the new button and set the following properties:

  • Text: “|||”
  • Width: (Parent.Width - 60)*.1
  • Size: 20
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

OnScan:

Set(serialScanValue, barcodeSerialNumber.Value)

Note: this sets the variable serialScanValue to the scanned value.
Rename the barcode scanner to barcodeSerialNumber.

Review

Several changes have been made to the app, so let us review what has changed:

  1. We added the Image field to the Item interface to add a photo of the item to the record.
  2. We added two new barcode scanner buttons and positioned them to match other interface elements.
  3. Each barcode button stores its scanned value in its corresponding variable.
  4. The Model Number and Serial Number fields use their corresponding variables as their values if available; otherwise, they use the default.
Reviewing Changes

Camera Enhancements in Action

In this walkthrough, I will be using my phone for data entry. We must first File > Save and then Publish the app. I have downloaded the Power Apps mobile app to my phone and logged in with my credentials. For this example, I will be using a stick of Samsung memory. While no one would likely record a single stick of memory in a home inventory app, it is a useful example.

Next Steps

With our user interface complete, we can start adding real data into the system. In part four of this series, we will begin working on the Admin web interface, with the ability to edit all data and export the inventory.


This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. This blog begins by covering how to create a base Canvas App from an existing table in the Microsoft Dataverse. It reviews all the screens, forms, and changes necessary to make the user interface work as defined in the requirements. By the end of this blog, a fully functioning app will be ready for data entry.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals
  • You have completed Part One of this series

Creating the Canvas App

In Part One of this series, I defined our home inventory application, its database, and how its user interface (UI) would look. I also created the underlying tables for our application. With a data source in place, I can create a Canvas App connected to the Microsoft Dataverse. Creating a table-based canvas app saves time by generating the base screens. It is also valuable for reverse engineering to understand how to customize the app on your own.

When prompted, use the Items table as this is the focal point of the application.
Focus point of application

This will create a basic Canvas App; the main screen can search for items (there are none at this time, of course).
Main screen creating basic canvas app

Clicking the + opens a form to create a new record; at this time, only the Name column is provided.
Creating new record

Expand the EditScreen1 and Select the EditForm1.
Next, add the custom columns that were created in the Microsoft Dataverse to EditForm1.
Adding custom columns created in Dataverse to EditForm1

Pressing the Play button and then clicking + opens the edit form below. The only problem is there are no Item Types, Locations, or Manufacturers yet.

Editing Form with Item Types, Locations, and Manufacturers

Creating the Lookup Forms

Near the top left of the window, click on New screen. Choose Form as the type.
Power Apps Home to create Lookup Forms

On the newly created screen, click on the EditForm element created. On the right side of the screen, select Item Types for the Data source. Click on the LblAppName created and change the Title Property from [Title] to Item Types. Repeat this with Manufacturers and Locations.

Take a few minutes to rename the screen and its elements following the Power Apps Canvas App Accessibility Guidelines and the PowerApps Canvas App Coding Standards and Guidelines. This is important for many reasons; for example, screen readers will read these values aloud, and “EditScreen1” would not be valuable to the end-user. Following these guidelines, I have renamed all screens, forms, and controls within the app. For example, these are the names of the screens that I will be using:

Rename the Screen and Elements

Be sure to save if you have not already; click File and then Save (or CTRL+S).

Manufacturers Lookup Form

Expand the Manufacturer Edit Screen, select the frmManufacturer, click Edit Fields, and then add the columns Support Phone and Support URL. Delete the Created-On column if desired.
Click the Advanced tab and, for the OnSuccess action, enter:

Navigate('Item Edit Screen', ScreenTransition.Fade)

Note: This tells the form to navigate back to the Item Edit Screen when this form is complete. For more information, see Back and Navigate functions in Power Apps.
Insert and edit the Manufacturer

Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmManufacturerEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Note: This tells the form to clear its columns and navigate back to the Item Edit Screen. For more information, see Reset function in Power Apps and Back and Navigate functions in Power Apps.

Creating columns to navigate back to the Item Edit Screen

If necessary, rearrange elements; the form should look something like this:
Rearrange Elements to customize form

Locations Lookup Form

Since the Location table only has the Name column, there is no need to add any columns to it; however, I need to repeat the same steps from the Manufacturers form.
Select the frmLocationEdit and update the OnSuccess action to:

      Navigate('Item Edit Screen',ScreenTransition.Fade)

Note: This tells the form to navigate back to the Item Edit Screen when this form is complete.
Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmLocationEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Item Type Lookup Form

Since the Item Type table only has the Name column, there is no need to add any columns to it; however, I need to repeat the same steps from the Manufacturers form.
Select the frmItemTypeEdit and update the OnSuccess action to:

      Navigate('Item Edit Screen',ScreenTransition.Fade)

Select the cancel icon (X) and change the OnSelect function to:

      ResetForm(frmItemTypeEdit);Navigate('Item Edit Screen',ScreenTransition.Fade)

Completing the Form

Select the frmItemEdit and update the OnSuccess action to:

      Navigate('Items Screen',ScreenTransition.Fade)

Select the cancel icon (X) and change the OnSelect function to:

     ResetForm(frmItemEdit);Navigate('Items Screen',ScreenTransition.Fade)

Select the Item Type field (make sure the field is selected, not the Card). Change the Width function to:

 
     (Parent.Width - 60)*.8

Repeat this for Manufacturer and Location fields.

Creating the Add Buttons

Now I will add the buttons that open the forms needed to create a new Item Type, Location, and Manufacturer from the Item form. I will be selecting the data card for Item Type, Location, and Manufacturer and adding a button to it during this process. Any property not mentioned will use the default value. Each of the three buttons shares these properties:

  • Width: (Parent.Width - 60)*.1
  • Text: “+”
  • Size: 48
  • All Padding Properties: 1
  • All Radius Properties: 0
  • Position: 528,63

Select the Item Type card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

 NewForm(frmItemTypeEdit);Navigate('Item Type Edit Screen', ScreenTransition.None)

For more information, see EditForm, NewForm, SubmitForm, ResetForm, and ViewForm functions in Power Apps.
Creating Popular navigation
Customize Padding Elements

Select the Manufacturer card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

NewForm(frmManufacturerEdit);Navigate('Manufacturer Edit Screen', ScreenTransition.None)

Select the Location card, click on the + button on the far left, expand Popular and select Button. Select the new button, set the shared properties, and then set OnSelect:

NewForm(frmLocationEdit);Navigate('Location Edit Screen', ScreenTransition.None)

The form should look something like this:
Creating custom forms

Items Screen

Expand the Items Screen and then select galItems. Notice that the Layout property is Title, subtitle, and body. Now, click on Edit next to the Fields property. I want to change ours so that lblSerialNumber shows the Serial Number and lblModel shows the Model Number.
Expanding the Items Screen
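If you prefer to set the gallery labels' Text formulas directly instead of using the Fields pane, they would look roughly like this (a sketch that assumes the Dataverse display names Serial Number and Model Number):

lblSerialNumber.Text:

    ThisItem.'Serial Number'

lblModel.Text:

    ThisItem.'Model Number'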

Item Details Screen

The last thing that I need to do is complete the Item Details Screen to show a read-only version of the Item record.
Expand the Item Details Screen and select the frmItemDetails. Click on Edit Fields and add the custom fields created in Microsoft Dataverse.
Detailed Item Screen

Next, rearrange the fields to match our edit form. Delete the Created-On field if desired.
Customize Field to match Edit form

Click on the Date Purchased value, click Advanced, and Unlock. Change the Text value to:

      DateValue(Parent.Default)
 

This will hide the time, as it is not relevant. For more information, see DateValue, TimeValue, and DateTimeValue functions in Power Apps.
Adjusting Date Purchased capability

Testing the User Interface

Now, I can test out the implementation:

Next Steps

With our base UI in place, I can enter data; however, there are a couple of final touches that need to be completed. In part three of the series, I will be adding the ability to add item images and barcode scanning.

This blog series is for anyone trying to learn how to use the Power Platform. Readers should know software development fundamentals. It begins by explaining why we would want to create a home inventory system. It then takes the reader over the requirements of the system, including potential future enhancements. It takes time to explain the database model and how it translates into tables in the Microsoft Dataverse. This post concludes with a review of the User Interface mockup.

The blog series assumes the following:

  • You have a Power Apps tenant with administrative privileges
  • You have knowledge of software development fundamentals

Why Create a Home Inventory?

Several years ago, I started considering the number and value of the technical assets I had acquired over the years. I wondered what would happen if I were robbed, had a fire, or suffered some other disaster, and whether my insurance would cover all of my technical assets. According to Consumer Reports:

“having a list or a visual reminder of your belongings can make a big difference in how much your homeowners’ insurance will pay, and thus how well you’ll recover financially.”

At the time, I purchased Quicken Home Inventory Manager, which was adequate; however, advancements in technology have made it obsolete. I want to use my phone and its many capabilities for data entry instead of walking back and forth to the computer. I set out designing my home inventory system using the Power Platform with these things in mind.

Requirements

The requirements of the initial system are straightforward:

  • The system should include a phone app for data entry and a web interface for administration
  • Phone and Admin Apps should provide the ability to create, update and delete Items
    • Columns: Name, Description, Item Type, Manufacturer, Serial Number, Model Number, Location, Date Purchased, Approx. Value, and Image
    • Required columns: Name, Model Number, Item Type, Manufacturer, Location, and Image
    • Item Type will look up from the Item Types table
    • Manufacturer will look up from the Manufacturers table
    • Location will look up from the Locations table
    • Users should be able to add an Item Image
  • Phone and Admin Apps should provide the ability to create Item Types
    • Columns: Name
    • Required columns: Name
  • Phone and Admin Apps should provide the ability to create Manufacturers
    • Columns: Name, Support URL, and Support Phone
    • Required columns: Name
    • Support URL should be validated
    • Support Phone should be validated
  • Phone and Admin Apps should provide the ability to create Locations
    • Columns: Name
    • Required columns: Name
  • Admin App should provide the ability to create, update and delete Item Types
  • Admin App should provide the ability to create, update and delete Locations
  • Admin App should provide the ability to create, update and delete Manufacturers
  • The system should allow Admins to export the inventory

Future Enhancements

In addition to the requirements, there are some future enhancements or “nice to haves” that I can investigate adding to the system later, for example:

  • Bar code Scanner – for items with a bar code, it would be helpful to scan these automatically. This could be for serial numbers, model numbers, or even UPC for the entire item.
  • Photo of Receipt – it would be a nice feature to include a picture of the receipt for proof-of-purchase and returns.
  • AI – While this is a tall order, it would be neat to take a picture of an item and have the system recognize what kind of item it is, perhaps even determining its Item Type.

The Database Model

Using the requirements above, I can derive a simple database model. There is an Item table that contains most of the data. Each Item record may have 0 or more Image records. Each Item will also have a lookup to a Manufacturer (ex. Microsoft, Sony, etc.), a Location (ex. Master Bedroom, Living Room, etc.), and an ItemType (ex. Appliance, Networking, Computer, etc.).

Database Model

The Dataverse Tables

Using the database model above, I can create custom tables for the system in the Microsoft Dataverse. Due to requirements in the Microsoft Dataverse, I will need to add a Name column to the Image table; I will want to handle this name’s generation when I implement the Item Images part of the application. Below is the resulting data model summary.

Dataverse Table

User Interface Mockup

For the user interface, I will begin with the phone app and develop the administration app later; I will do this because I may discover challenges and new ideas as data gets added to the system. Additionally, the mockup below does not include the Item Image library, as I will cover it in a separate post. The user interface below begins with the Items screen. Here, users can search, sort, and scroll through items in the system. From the Items screen, a user can view or create a new Item. When a user views an item, they can delete it or edit it. If the user creates or edits an item, they use the same screen; they can also create new Item Types and Manufacturers from this screen. On all screens, cancel, save, and delete actions return to the previous screen. You may notice that the Item Image is not included; this is because I will be adding this later in the series when I start using the phone. For now, I am focusing on the base implementation.

User Interface Mockup

Next Steps

With the requirements defined and tables created in the Microsoft Dataverse, I can now work on the User Interface. In part two of the series, I will create all the necessary screens for the phone app and begin data entry.

Testing is, and has always been, a necessary part of writing code or creating templates. Throughout the history of writing code, tools have made it easier and more foolproof for developers to complete tasks and get increasingly better at test-driven development. Recently, I researched and was asked to add content to the Knowledge Center about two testing tools: Pester and the Azure Resource Manager (ARM) Template Test Toolkit. Without knowing too much about writing code or developing ARM Templates, I was able to see how powerful and resourceful these tools are when writing PowerShell commands and creating ARM Templates.

ARM TTK, or the Azure Resource Manager Template Test Toolkit, tests your ARM templates by checking them against a set of requirements that either pass or fail. When the tool is run, the tests check a template or set of templates against coding best practices. If your template isn't compliant with the pre-set list of requirements, it returns an easy-to-read list of warnings with the suggested changes. By using the test toolkit, you can learn how to avoid common problems in template development.

Although it is also a tool for testing a written piece of work, Pester is a little more complicated than ARM TTK. Pester is a framework whose purpose is to ensure that the PowerShell commands you write, whether functions, modules, or scripts, do what is expected. Pester can also be used to validate changes made to an already written script.

ARM TTK Deeper Dive

The ARM TTK uses a preset list of unit tests that an ARM template either passes or fails. The tool is an open-source PowerShell library that you can use to validate your templates against a series of test cases. These test cases are generic and designed to validate that your templates are following best practices.

Some examples of the unit tests:

  • VM-Size-Should-Be-A-Parameter
  • Virtual-Machines-Should-Not-Be-Preview
  • Resources-Should-Have-Location

The philosophy behind ARM TTK is to validate the author's intent when writing the templates, such as checking for unused parameters or variables. The tool draws on security practices for the language and checks that the appropriate language construct is used for the task at hand; in other words, it checks for environment functions instead of hard-coded values. Yet, as we know, not everything is appropriate for a universal set of tests, as every test will not apply to every scenario. For flexibility, the framework allows for easy expansion and a unique selection of tests by simply skipping whatever test does not apply to your particular scenario.

For information on results and how to download the tool for Linux, Windows, or macOS, click here.
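If you want to try the toolkit locally before wiring it into a pipeline, a minimal PowerShell sketch looks like the following; it assumes you have cloned the arm-ttk repository and have a template named azuredeploy.json (adjust the paths to your clone and template):

    # Import the module from the cloned https://github.com/Azure/arm-ttk repository
    Import-Module .\arm-ttk\arm-ttk\arm-ttk.psd1

    # Run every applicable test against a template
    Test-AzTemplate -TemplatePath .\azuredeploy.json

    # Skip a test that does not apply to your scenario
    Test-AzTemplate -TemplatePath .\azuredeploy.json -Skip 'Location Should Not Be Hardcoded'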

ARM TTK Extension for Pipeline

For my Knowledge Center research, I focused on how to test ARM Templates that are part of a repository for a release pipeline. I created a test Azure DevOps Organization and a test Azure DevOps Project to try the tool. Once I created the project, I cloned the ARM TTK repo provided by Azure (https://github.com/Azure/arm-ttk). The repo provided me with the unit tests and other files to help me understand the tool better. I also installed the “Run ARM TTK Tests” extension (https://marketplace.visualstudio.com/items?itemName=Sam-Cogan.ARMTTKExtension) to be able to run the RunARMTTKTests@1 task.

Below is the YAML file to run the tool. It is a very vanilla file, but its primary purpose is to test the repo's ARM Templates by specifying the path to the JSON files.

name: armTTKbuiltINtool

trigger:
  batch: true
  branches:
    include:
    - 'master'
  paths:
    include:
    - '.azure'

pool:
  vmImage: 'windows-latest'

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building!
- stage: Test
  jobs:
  - job: TestOnWindows
    steps:
    - task: RunARMTTKTests@1
      displayName: TTK Built In Tools Test
      inputs:
        templatelocation: '$(System.DefaultWorkingDirectory)\.azure'
        resultLocation: '$(System.DefaultWorkingDirectory)\results'
        skipTests: 'Location Should Not Be Hardcoded'
    - task: PublishTestResults@2
      inputs:
        testResultsFormat: 'NUnit'
        testResultsFiles: '$(System.DefaultWorkingDirectory)\results\*-armttk.xml'
        searchFolder: '$(System.DefaultWorkingDirectory)'
        mergeTestResults: false
        failTaskOnFailedTests: false
        testRunTitle: ARMTTKtitle1
      condition: always()

- stage: Deploy
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying the code!

The main focus is the section that calls the task and provides its basic parameters:

    - task: RunARMTTKTests@1
      inputs:
        templatelocation: '$(System.DefaultWorkingDirectory)\templates'
        resultLocation: '$(System.DefaultWorkingDirectory)\results'
        includeTests: ''
        skipTests: ''

ARM TTK Results

Once your build is finished, you can click through to view your test results, as shown below:
ARM TTK Results View

In the “Test” stage, there is now a percentage of passed tests which, when clicked, shows more details, as shown below:
Test Stage percentage passed

This shows you the number of tests run and the percentage of passing tests.

Pester Deeper Dive

Pester, as mentioned earlier, is a framework whose purpose is to ensure that the PowerShell commands you write, whether functions, modules, or scripts, meet expectations. As of now, Pester is at version 5.0. This version came with a new way of writing your code, which we will touch on later in the article.

To install Pester, first run “Install-Module Pester -Force” and then “Import-Module Pester.” After that, you will see output that details the name (Pester) and version (5.0 as of right now). Pester identifies which PowerShell (.ps1) scripts to test by file name. This is important, as Pester uses a naming convention: “*.Tests.ps1” files will be inspected for tests. For example, a file can be named “ExampleScript.Tests.ps1” or “ExampleScript2.Tests.ps1”. Pester will then look for those files to test.
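Collected into a runnable snippet, that setup looks like this (Get-Module is just a quick way to confirm the name and version):

    # Install and import Pester, then confirm the name and version
    Install-Module Pester -Force
    Import-Module Pester
    Get-Module Pester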

Now you may ask how all this is set up so that Pester can identify those scripts. Picture a folder with ten scripts in it. Any script whose name ends in “.Tests.ps1” will be tested by Pester, but its path must be supplied. To get a better understanding, look at the example below.

Pester Example

  • Write a script and make sure the file ends in “.Tests.ps1”
    • For this example, let’s call it “ExamplePester.Tests.ps1”
    BeforeAll { 
        function Get-Planet ([string]$Name = '*') {
            $planets = @(
                @{ Name = 'Mercury' }
                @{ Name = 'Venus'   }
                @{ Name = 'Earth'   }
                @{ Name = 'Mars'    }
                @{ Name = 'Jupiter' }
                @{ Name = 'Saturn'  }
                @{ Name = 'Uranus'  }
                @{ Name = 'Neptune' }
            ) | ForEach-Object { [PSCustomObject] $_ }
    
            $planets | Where-Object { $_.Name -like $Name }
        }
    }
    
    Describe 'Get-Planet' {
        It 'Given no parameters, it lists all 8 planets' {
            $allPlanets = Get-Planet
            $allPlanets.Count | Should -Be 8
        }
    }
    
  • Run “Invoke-Pester -Output Detailed”
Starting discovery in 1 files.
Discovering in <file path on computer/ ExamplePester.Tests.ps1>.
Found 1 tests. 41ms
Discovery finished in 77ms.

Running tests from <file path on computer/ ExamplePester.Tests.ps1>
Describing Get-Planet
  [+] Given no parameters, it lists all 8 planets 20ms (18ms|2ms)
Tests completed in 179ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0

Pester 5.0 Changes

The information below is from the Pester repository Readme file, found at https://github.com/pester/Pester.

The fundamental change in this release is that Pester now runs in two phases: Discovery and Run. During Discovery, it quickly scans your test files and finds all the Describes, Contexts, Its, and other Pester blocks. This powers many of the features in this release and enables many others to be implemented in the future.

For best results, there are new rules to follow:

  • Put all your code into It, BeforeAll, BeforeEach, AfterAll, or AfterEach.
  • Put no code directly into Describe, Context, or on the top of your file, without wrapping it in one of these blocks, unless you have a good reason to do so.
  • All misplaced code will run during discovery, and its results won’t be available during Run.

This will allow Pester to control when all of your code is executed and scope it correctly. This will also keep the amount of code executed during discovery to a minimum, keeping it fast and responsive. See discovery and script setup article for detailed information.
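To make those rules concrete, here is a minimal sketch of a Pester 5 test file that follows them, with setup code in BeforeAll and assertions inside It:

    BeforeAll {
        # Setup code runs during the Run phase, not during Discovery
        function Get-Greeting { 'Hello, Pester 5' }
    }

    Describe 'Get-Greeting' {
        # Only Pester blocks at this level; no loose code
        It 'returns the expected greeting' {
            Get-Greeting | Should -Be 'Hello, Pester 5'
        }
    }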

Lee Baker: FastTrack Solutions Architect

Microsoft FastTrack is a service provided by Microsoft that helps customers onboard cloud solutions and drive user adoption. The Microsoft Power Platform product engineering team recognizes individuals for consistently producing high-quality solutions using their experience architecting solutions during customer engagements. Lee Baker, a Solutions Architect for AIS, has been recognized for his hard work and impressive client management.

“Lee is one of the most extraordinarily knowledgeable Power Platform technical leaders on the planet, particularly in the realms of data and integration across the whole Microsoft cloud. He’s also a great colleague and friend. This award is beyond well deserved.”

– Andrew Welch, Microsoft MVP | Director, Cloud Applications at AIS

Lee began his career in technology nine years ago as a developer. Since then, Lee has developed skills across Microsoft Azure, Dynamics 365, and Power Platform and now serves as an architect and technical advisor to several senior business stakeholders in the enterprise. Lee focuses on leading architecture at global financial services and non-governmental organizations. He holds himself to high standards and is committed to quality and customer success.

“I’m proud of Lee and our entire Power Platform team for what they’ve accomplished for our clients in such a short time. Lee’s technical skills and passion for this platform make him an ideal FastTrack Solution Architect. I’m excited that the FastTrack program will benefit from Lee’s exceptional capabilities.”

– Larry Katzman | President & CEO at AIS

Congratulations, Lee! Learn more about Microsoft FastTrack and Lee’s achievement here.

Spring Framework is one of the most popular frameworks for application development using Java. Spring Boot is one of the top modules of Spring Framework and is widely used to build enterprise-grade, highly scalable backend applications. Spring Boot is very popular for developing microservices using open-source technologies, and it is developed by the Pivotal team.

Microsoft and Pivotal/VMware build and operate Azure Spring Cloud, a native Azure service offered as Platform as a Service (PaaS) and available on Azure Marketplace.

Why Use Azure Spring Cloud?

There are many benefits to deploying an application to Azure Spring Cloud:

  • Spring Boot framework helps to reduce the overall application development time.
  • Azure Spring Cloud further accelerates and simplifies infrastructure management needed for hosting Spring Boot applications.
  • It’s easy to deploy existing Spring Boot applications without code changes.
  • For new development, the managed environment provides infrastructure services such as Eureka, Config Server, Service Registry Server, and blue-green deployment. There is no need to create, deploy or manage services that can provide these functionalities.
  • Develop and deploy rapidly without containerization dependencies.
  • It is easy to bind applications with other Azure services such as Azure Cosmos DB, Azure Database for MySQL, and Azure Cache for Redis.
  • Monitor production workload efficiently and effortlessly.
  • Provides security features to manage secrets.
  • Supports hybrid deployment.
  • Easily control ingress and egress to the app.
  • Scalable global infrastructure
  • Automatically wire apps with Spring Cloud infrastructure

Azure Service Overview

Azure Spring Cloud offers first-class support for Spring tools, and it quickly binds to other Azure services, including storage, database, monitoring, etc.

Azure Service Overview
Picture 1 – Azure Spring Cloud-Service Overview

Reference – https://docs.microsoft.com/en-us/azure/spring-cloud/media/spring-cloud-principles/azure-spring-cloud-overview.png

Service Tiers and Limits
Azure Spring Cloud offers Basic and Standard tiers and sets default limits and quotas. Some default limits can be increased via a support ticket.

Tiers for Azure Spring Cloud
Table 1 – Quotas and limits

Java, Spring Boot, and Spring Cloud Versions

  • Azure Spring Cloud hosting environment contains the latest Azul Zulu OpenJDK version for Azure, and it supports Java 8 and Java 11.
  • The minimum Spring Boot version needed for Azure Spring Cloud is either Spring Boot version 2.1 or version 2.2.

The following table lists the supported Spring Boot and Spring Cloud combinations:

Spring Boot vs. Spring Cloud
Table 2 – Java and Spring supported versions

Azure Spring Cloud Setup

In this review, the application to get weather details of cities is decomposed into three microservices to explain the capabilities of Azure Spring Cloud. It includes a gateway service and two microservices for city and weather details, respectively. Microservices talk to each other to retrieve weather details of cities. Various Azure infrastructure services used in the sample app are mentioned in respective sections of this document. Below are the details of microservices:

  • A gateway microservice based on Spring Cloud Gateway: gateway application
  • A reactive microservice that stores its data on Cosmos DB: city-service application
  • A microservice that stores its data on MySQL: weather-service application
Azure Spring Cloud Instance with Microservices
Picture 2 – Azure Spring Cloud Instance with microservices deployed on Azure Portal

Deploying Spring Boot applications into Azure Spring Cloud includes the general steps below:

  • Create an Azure Spring Cloud instance through Azure Portal or using CLI, if not already created.
  • Add the Azure Spring Cloud extension by running the following command: az extension add -n spring-cloud -y
  • Build Spring Boot application locally; mvn clean package
  • Create a new Azure Spring Cloud app within your Azure Spring Cloud instance: az spring-cloud app create -n {my-app}
  • Deploy the app to Azure Spring Cloud: az spring-cloud app deploy -n {my-app} --jar-path target/my-app.jar
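Putting those steps together, a consolidated Azure CLI sketch might look like the following; the app, service instance, and resource group names are placeholders, and the Azure Spring Cloud instance is assumed to already exist:

    # Add the Azure Spring Cloud CLI extension
    az extension add -n spring-cloud -y

    # Build the Spring Boot application locally
    mvn clean package

    # Create the app inside an existing Azure Spring Cloud instance
    az spring-cloud app create -n my-app -s my-spring-cloud -g my-resource-group

    # Deploy the packaged jar to the app
    az spring-cloud app deploy -n my-app -s my-spring-cloud -g my-resource-group --jar-path target/my-app.jar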

Deploy a sample application to Azure Spring Cloud

Below are high-level steps to create and deploy a sample application on Azure Spring Cloud using Azure portal:

  • Look for your Azure Spring Cloud instance in your resource group
  • Click on the “Apps” link under “Settings” on the navigation sidebar.
  • Click on the “Create App” link at the top of the Apps page.
  • Create a new application named “simple-microservice.”
Sample application deployed view
Picture 3 – Sample application deployed view on Azure Portal
Sample application
Picture 4 – Sample application access response on browser

Key offerings from Azure Spring Cloud

Spring Cloud Config Server

Multiple microservices can use the same Spring Cloud Service Config Server, which is pre-configured and managed. It uses a pluggable repository that currently supports local storage, Git, and Subversion. Spring Cloud Config is used here to inject settings from a Git repository into the application and display it on the screen.

Below are steps to enable Azure Spring Cloud to create a configuration server with the configuration files from a public git repository – https://github.com/Azure-Samples/spring-cloud-sample-public-config.git

Note: A public git repository is used for example purposes only.

  • Go to the Azure portal.
  • Go to the overview page of your Azure Spring Cloud server and select “Config server” in the menu
  • Set the repository URL: https://github.com/Azure-Samples/spring-cloud-sample-public-config.git
  • Click on “Validate” and wait for the operation to succeed
  • Click on “Apply” and wait for the operation to succeed
Azure Spring Cloud Config Server
Picture 5 – Settings to configure Config Server
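The same Config Server settings can also be applied from the command line; here is a hedged sketch using the spring-cloud CLI extension, where the service and resource group names are placeholders:

    # Point the managed Config Server at the public sample repository
    az spring-cloud config-server git set -n my-spring-cloud -g my-resource-group --uri https://github.com/Azure-Samples/spring-cloud-sample-public-config.git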

Log Streaming and Analytics

Log Analytics is part of Azure Monitor, and it’s integrated into Azure Spring Cloud. Azure Spring Cloud offers logs/metrics collection and storage relying on Log Analytics, Azure Storage, or Azure Event Hubs.

The following are steps to configure the Azure Spring Cloud instance to send its logs to the Log Analytics workspace:

  • Create Log Analytics workspace named sclab-la-hqd2ksd3p6z3g

Configure the log analytics workspace for the Azure Spring Cloud instance:

Azure spring cloud lab arunm
Picture 6 – Settings to configure log data collection

The workspace allows running queries on the aggregated logs. Below is a screenshot of querying logs using Azure Portal.

Log Access view on Azure Portal
Picture 7 – Log access view on Azure Portal

Distributed Tracing

Azure Spring Cloud provides the capability to view Application Map, which completely relies on Azure Application Insights. It provides a detailed view of how components interact with each other and the ability to trace requests. This helps to quickly visualize how applications are performing, as well as detect and diagnose issues across microservices.

Application Map on Azure Application Insights
Picture 8 – Example of an Application Map on Azure Application Insights

Performance Metrics

Azure Spring Cloud provides out-of-the-box capabilities to review application performance metrics.

Performance dashboard on Azure Portal
Picture 9 – Example of a Performance dashboard on Azure Portal

Application Scaling

The application can be scaled out quickly based on insights from distributed tracing. Both manual scaling and auto-scaling options are available. An application can easily be configured to scale up Azure Spring Cloud app resources as needed through the Azure Portal or via equivalent Azure CLI commands.

Microsoft Azure Portal application scaling
Picture 10 – Application scaling setting example
Autoscale Settings in Azure Portal
Picture 11 – Application autoscaling settings screen
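Scaling can also be driven from the Azure CLI; a hedged sketch, with placeholder names and sizes chosen only for illustration:

    # Manually scale the app to 3 instances with 2 vCPUs and 4 GB of memory each
    az spring-cloud app scale -n my-app -s my-spring-cloud -g my-resource-group --instance-count 3 --cpu 2 --memory 4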

Blue-Green Deployment

The blue-green deployment pattern allows testing the latest application changes on production infrastructure without exposing the changes to consumers until your testing is complete. Refer to this documentation to set up a staging environment in Azure Spring Cloud.

Conclusion

Azure Spring Cloud is an excellent service for quickly building, deploying, and maintaining Spring Boot applications securely in the cloud, with minimal effort and a rich set of application monitoring and alerting features.

In summary, Azure Spring Cloud provides the below benefits:

  • Easy and simple development approaches
  • Less infrastructure to manage
  • High availability with blue-green deployment
  • A secure network
  • Elastic and scalable infrastructure
  • Highly automated environment

Known Limitations

  • Azure portal and ARM templates do not support uploading application packages.
  • Server.port defaults to port 1025; other values will be overridden.
  • Spring.application.name will be overridden by the application name that’s used to create each application.
  • The latest version of Java is not supported. Azure Spring Cloud supports Java 8 and 11.
  • Spring Boot 2.4.x application has a known TLS authentication issue with Eureka.
  • Service binding currently supports only limited services; Cosmos DB, Azure DB for MySQL, and Azure Cache for Redis are currently supported.


Did you know Microsoft Teams Admin Center is not the only place to configure Teams security? Since Microsoft Teams is strongly wired into SharePoint, OneDrive, and Exchange, Teams takes advantage of Microsoft 365 security, protection, and compliance policies. As customers of Microsoft 365, we own the data residing in our Microsoft 365 tenant; therefore, as an Enterprise Administrator, configuring advanced security and compliance capabilities is an essential part of the planning phase of any Microsoft 365 workload deployment, which includes Microsoft Teams. How these security offerings are interrelated with Microsoft Teams and other services is depicted in the following diagram:

Microsoft 365 Tenant

Now, let us take a light dive into the nine Teams-specific Microsoft 365 policies (at the time this article was written) meant to secure and protect Teams and its content, apart from the policies available in the Microsoft Teams Admin Center:

  1. Safe attachment policy – Protects users from opening or sharing a malicious file in Teams, including SharePoint and OneDrive.
  2. Safe links policy – Safeguards users from accessing malicious links in emails, documents, and Teams conversations.
  3. Conditional access policy – Provides users access control based on group membership, users, locations, devices, and applications.
  4. Data encryption policy – Provides an additional security layer for encrypting the content at the application level to align with organization compliance obligations.
  5. Information barrier policy – Helps to control communication in Teams between specific users for compliance reasons.
  6. Communication compliance policy – Helps detect and act upon unprofessional messages within Microsoft Teams that may put your organization at risk.
  7. Sensitivity label policy – Allows users to apply sensitivity labels when creating or editing teams to secure the Teams’ content.
  8. Data loss prevention policy – Draws a boundary within internal or external users to protect sensitive information relevant to your business.
  9. Retention policy – Helps to retain or delete a Teams chat as per the organization policies, legal requirements, or industry standards.
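As one concrete example of the last policy, a Teams chat retention policy can be created with Security & Compliance PowerShell; this is only a hedged sketch, with an illustrative policy name and a one-year duration that is not a recommendation:

    # Connect to Security & Compliance PowerShell first (Connect-IPPSSession)
    New-RetentionCompliancePolicy -Name "Teams chats - 1 year" -TeamsChatLocation All
    New-RetentionComplianceRule -Name "Teams chats - 1 year rule" -Policy "Teams chats - 1 year" -RetentionDuration 365 -RetentionComplianceAction KeepAndDelete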

This was a quick glance at Microsoft Teams’ security, protection, and compliance capabilities through Microsoft 365 policies. For more information, please see the Microsoft Tech Community blog here, where I have added further details on each of the above policies.

This is a junior take on React Hooks, one of the most important things to understand when learning React. React is a JavaScript library. Learning React has been an interesting experience, as most people are accustomed to object-oriented programming. With React being function-oriented and favoring immutability, it takes some time to get familiar with. In this blog, we'll dive into two of the most used Hooks in React: useState and useEffect. This will serve as a starting point in understanding how React works and how to update data in an immutable style.
First, we’ll start with the useState hook – the easiest one to understand in my experience. Here is an example of what useState Hook looks like:


       import { useState } from 'react';

const [variable, setVariable] = useState<boolean>(true);
const [stringArray, setStringArray] = useState<string[]>([]);

Before using the Hook, we need to import it from the React library. I like to think of the useState line as having four changeable parts. First, you have two variables in an array: the variable itself and a function you call to set the variable. On the right side, we have the type, Boolean in this case, but it can be any type we want; this part isn't required, because if you leave it as useState(true), React will infer that it is a Boolean. Lastly, we have the initial state, which here is set to true. This is the value the variable holds when the component first renders. Now that we understand the parts of the useState Hook, let's look at an example.

            <div> {String(variable)} </div>
    <Button onClick={() => setVariable(false)}>
        Click Me
    </Button>

Before the button is clicked, the variable will show as true because that is the initial state we set. After clicking the button, the component will re-render and display the updated variable as false. It is essential to remember that useState is asynchronous: you might want to use the variable right after calling setVariable, but it will not update until the re-render.

        const [variable, setVariable] = useState<boolean>(true);

if (variable) {
    setVariable(false);
    console.log(variable)
}

Expectations will lead you to believe that the console.log will display false, but that isn't the case. You will not be able to access the new value until there has been a re-render. Now let's move on to useEffect, which helps you make updates when something changes.
This Hook can get tricky quickly and is difficult to get right.


    	import { useEffect } from 'react';

useEffect(() => {
    console.log('hello world');
}, []);

There are two parts to useEffect. The first argument is the closure/function that contains the code you want the useEffect to run. The second part is the dependency array at the end of the useEffect. This array is how you choose when the useEffect runs: any variable you add to it will cause the useEffect to run whenever that variable changes. Here is a quick example.

const [variable, setVariable] = useState<boolean>(true);

 useEffect(() => {
     console.log('hello world');
 }, [variable]);

 <Button onClick={() => setVariable(false)}>
    Click Me
 </Button>

This useEffect will run when the button is clicked: the variable changed by setVariable causes the useEffect to run. A couple of things to note about useEffect: the first time it runs is when the component that uses it is mounted, and if the array is empty, the useEffect will only run that once. One thing that makes useEffect tricky is calling a function inside it. If the function is not added to the dependency array, it will cause a warning. You can ignore the warning, and the code will run just fine. To get rid of the warning, you can add the function to the array, but that can cause the useEffect to run more often than intended and feels less controlled. One thing that fixes the warning is defining the function within the useEffect itself; that won't cause unnecessary re-rendering and will get rid of the warning, as shown below.
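Here is a minimal sketch of that last fix; the ItemList component, the fetchItems function, and the /api/items endpoint are illustrative names, not something from a real app:

    import { useEffect, useState } from 'react';

    function ItemList() {
      const [items, setItems] = useState<string[]>([]);

      useEffect(() => {
        // Defining the function inside the effect keeps the dependency array
        // honest and avoids the missing-dependency warning
        const fetchItems = async () => {
          const response = await fetch('/api/items');
          setItems(await response.json());
        };
        fetchItems();
      }, []); // empty array: runs once, when the component mounts

      return <ul>{items.map((item) => <li key={item}>{item}</li>)}</ul>;
    }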

These two Hooks will give you a good start in understanding React because almost all components you create will use them. Learning about these two Hooks gave me a much better understanding of React and helped me move deeper into the React world.