I am currently working as a development operation engineer at a client that maintains a large and complex infrastructure with an even more complicated development workflow. Cloud infrastructure had already been provisioned, and several development cycles had been completed when I joined the team a couple of years ago. As a result, I came into an environment that had already established a development workflow, with barely any infrastructure as code (IaC) practices in place.

What is unique about this client is that it maintains six separate web applications, each designated for a different area in the United States; the backends vary from application to application, but the front ends are mostly similar. To avoid a maintenance nightmare, we use code from one repo to deploy the main UI of all six applications; the same code is deployed to each application but configured to run differently through a master configuration file. Each web application is composed of several sub-applications, each with its own codebase.

One of my responsibilities is to spin up and maintain application stacks for new development teams that are working on a specific feature for this client. That feature may or may not be utilized by all six web applications. For this team to develop rapidly, they need a stable testing environment, equipped with continuous delivery, to do their work.

This “application stack” is comprised of:

  • 1 Azure App Service Plan
  • App Services (up to 6, one per web application)
  • 1 Redis Cache
  • 1 Storage Account

These are Microsoft Azure constructs and are not essential to understand in the context of this post. Keep in mind that 6 separate applications utilize a set of shared resources.

Since the DevOps team at this particular client is small, we were only able to create a script to deploy an application stack for a specific development effort; we did not get the chance to implement any strategy for maintaining these stacks once they were deployed. As a result, applying configurations across the 25-30 application stacks that we maintain has turned out to be a logistical nightmare.

Model 1: Main App Repo

The diagram above represents a high-level overview of a single application stack that we need to maintain; the diagram is scaled down from 6 applications to 3 applications for clarity.

The current script we use to provision application stacks does the following:

  1. Creates all Azure resources
  2. Performs KeyVault configuration
    1. Enables managed identity on all app services
    2. Adds access policy to the key vault for all app services
    3. Adds key vault references for secret app settings on all app services
  3. Creates 1 Azure DevOps release pipeline to deploy code to all app services; this is done by cloning a base template for a release pipeline that exists before the script is run.

The Problem

Although the deployment script that is currently in use saves time, it falls short in a few areas:

  1. If we want to make configuration changes across all the application stacks, we have to edit the deployment script and rerun it on each application stack. This process is entirely too manual and can be improved.
  2. If we want to change the mechanics of how the underlying deployment pipeline functions across all the application stacks, we have to make edits on each deployment pipeline that is tied to a given application stack. This is too tedious and leaves lots of room for error; we initially mitigated some of this by utilizing Azure DevOps task groups.
  3. Configuration drift is widely prevalent; because we do not have an easy way to maintain all of the application environments across the board, minor configuration changes during development are difficult to track and often fail to propagate to our main application stacks.

The Solution: Template Your Life

Azure YAML Templates

This client is relatively young in terms of their cloud journey, especially with Azure and Azure DevOps. They currently rely heavily on the Release Pipeline user interface within Azure DevOps to deploy code to their applications. In recent years, Azure has been subtly urging teams to utilize multi-stage YAML pipelines instead of the Release Pipeline user interface, bringing an “infrastructure as code” mindset to the continuous delivery process. With this mindset, there is no difference between “Build” pipelines and “Release” pipelines; the only construct is a pipeline, in which you can perform any task (build, test, deploy, etc.).

I encourage you to read more about Azure DevOps YAML templates. I’ve included some relevant links below:

Azure DevOps YAML Templates
Azure DevOps Multistage YAML UI Experience
YAML Schema

Given the problem statement above, there is a clear need to develop a framework and process for maintaining the cloud infrastructure and its continuous delivery process, one that is easy to manage and that makes changes easy to propagate.

This is where Azure DevOps YAML templates become useful.

All you need to create a pipeline in Azure DevOps is a YAML file. This YAML file can exist in any code repository. Once this file is imported into a code repository, you can use the Azure DevOps UI to create a pipeline from this file. While you can run the pipeline from the Azure DevOps UI, the pipeline’s changes will be maintained and versioned within the YAML file itself, just as you would with any application code.

This YAML file can extend from and reference other YAML files that exist in other code repositories.

Template Repository

I am currently developing a code repository that will contain:

  1. Azure ARM templates to version app service configurations (including virtual/sub applications)
  2. Deployment script to deploy ARM templates mentioned above
  3. Azure DevOps YAML files that will function as “release pipelines”:
    1. Create / Refresh shared resources
    2. Create / Refresh app services
    3. Deploy code to app services

When a new workstream or development effort begins, all they need to do is create a simple YAML file that extends from the main (release.yaml) template file in the repo mentioned above (see azurepipeline.yaml below). Once that is done, the first time the development team pushes code to a specified branch, they will be equipped with a new environment with their code changes.

Hierarchy of the Templates

The diagram above represents the hierarchy of the templates in the code repository mentioned above. You can see there are only 7 YAML template files to maintain; each azurepipeline.yaml file extends from release.yaml, which pulls in the rest. This helps address the daunting task of maintaining 25+ release pipelines mentioned above; changes made to any of the template files will propagate to every azurepipeline.yaml file that extends from the release.yaml file.

The idea is that the azurepipeline.yaml file can be imported into any repo or branch. This file is relatively simple:

azurepipeline.yaml

# run on a schedule instead of utilizing CI triggers
name: $(Date:yyyyMMdd)$(Rev:.r)
trigger: none
# the pipeline artifact list (previously the source artifacts in the release pipeline) is defined in release.yaml
extends:
  template: release.yaml@templates

The release.yaml file that the azurepipeline.yaml file extends from looks similar to the one below:

release.yaml

resources:
  # script and template reference
  repositories:
    - repository: templates
      type: git
      name: template-repo
  pipelines:
    - pipeline: main
      project: ExampleProject
      source: YAML\main
      branch: stage
    - pipeline: subapp1
      project: ExampleProject
      source: YAML\subapp1
      branch: stage
    - pipeline: subapp2
      project: ExampleProject
      source: YAML\subapp2
      branch: stage
stages:
- stage: SHARED
  dependsOn: []
  displayName: refresh shared resources
  jobs: 
  - job: refresh_shared_resources
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - template: templates/update-infrastructure.yaml@templates
        parameters:
          sharedOnly: true
- stage: APP1
  dependsOn: ['SHARED']
  variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app1'
  displayName: app1 
  jobs: 
  - job: REFRESH_APP1
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - template: templates/update-infrastructure.yaml@templates
        parameters:
          sharedOnly: true
          appName: app1
  - template: templates/app1.yaml@templates
- stage: APP2
  dependsOn: ['SHARED']
  variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app2'
  displayName: app2
  jobs: 
  - job: REFRESH_APP2
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - template: templates/update-infrastructure.yaml@templates
        parameters:
          sharedOnly: true
          appName: app2
  - template: templates/app2.yaml@templates

App stages are created for each app (not shown).

The app template files referenced by the release.yaml file look similar to the file below:

app1.yaml

jobs:
- deployment: app1
  dependsOn: REFRESH_APP1
  displayName: "DEPLOY APP1"
  environment: stage
  pool:
    vmImage: windows-latest    
  strategy:
    runOnce:
      deploy:
        steps:
        - checkout: self
        - template: configure-app.yaml@templates
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: false
            appName: 'ui'
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: true
            appName: 'subapp1'
        - template: deploy-app.yaml@templates
          parameters:
            isVirtualApp: true
            appName: 'subapp2'

Take note of the different steps used to deploy an application. Each stage comprises a configuration step, followed by a deployment step for each sub-application; every deployment step uses the same Azure YAML template file (deploy-app.yaml) with different parameters.

The release.yaml file results in a multi-stage YAML pipeline that looks like the one below:

Release.yaml file results

Resources

The resource section of the template defines what resources your pipeline will use; this is analogous to the artifact section in the release pipeline UI view. You can only have the resources section declared once in your series of templates for a given pipeline. In the example above, it is defined in the release.yaml file, the template that other templates will extend from.

In this example, the resource section references the template repository itself and other build pipelines that produce code artifacts that we will need to deploy to our applications.

Defining the resources section in the base template (release.yaml) allows us to abstract the choice of artifact sources away from the end-user. This is advantageous if we want to add more sub-applications to our release pipeline; we would only need to change the resources section of the release.yaml file (changes will propagate to all inherited/extended pipelines).

At the client I work with, this is problematic. In the solution above, every pipeline that extends from release.yaml (which contains the resources section) is limited to the specific artifacts AND the branches they are set to pull from, as defined in the base template (release.yaml). We consistently have project teams that need sub-application artifacts built from the specific branches their development work lives on. To solve this, we moved the resources section into the extending template (azurepipeline.yaml). However, this isn’t optimal either, because we would still need to update every extending template if we wanted to add to the resources section across all application stacks.

As far as I know, there is no way to use pipeline variables or template expressions to dynamically determine which resource artifacts are needed. Ideally, we would keep the resources section in the base template and override it within the extended templates.

Dynamic Insertion
Currently, we must maintain a manifest that describes the relationship between each application and its respective sub-applications. For example, app1 could have sub-applications subapp1 and subapp2, and app2 could have sub-applications subapp1 and subapp3 (or any combination of sub-applications). We utilize a JSON file that defines the mappings between application and sub-application. This JSON file is parsed in the deployment script to ensure the correct sub-applications exist before code deployment; in Azure, a sub-application must already exist before you can push code to it. As a result, we also need to maintain the sub-applications in each of the different YAML step templates for each application. At this point, I am unaware of an elegant way to iterate over an Azure DevOps pipeline variable to create steps dynamically.
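To make the manifest idea concrete, here is a rough sketch of the shape such a mapping might take; the actual file is JSON, and the names below are illustrative rather than the client's real configuration:

// Hypothetical shape of the app/sub-app manifest, expressed as a TypeScript
// literal for illustration only (the real file is plain JSON).
interface SubAppManifest {
  [appName: string]: string[]; // application -> its sub-applications
}

const manifest: SubAppManifest = {
  app1: ['subapp1', 'subapp2'],
  app2: ['subapp1', 'subapp3'],
};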

Variable Templates
Template expressions can be applied to variables. This was extremely useful in the context of the problem I was trying to solve. For example, each stage in the pipeline described above is for a particular application with a distinct name, and this name is used to determine several properties for its respective pipeline stage. The app1 pipeline stage uses the site name/URL app1.azurewebsites.net, and the app2 stage uses the site name/URL app2.azurewebsites.net. Both site names follow the same naming convention, which makes this a great use case for a variable template.

Here is the variable template I used:

appvars.yaml

parameters:
  appName: ""
  deployToSlot: false
variables:
  siteName: "$(tier)-$(region)-$(app)-${{parameters.appName}}"
  ${{ if eq(parameters.deployToSlot, true) }}:
    corsOrigin: "https://${{parameters.appName}}-$(slotName).azurewebsites.net"
  ${{ if eq(parameters.deployToSlot, false) }}:  
    corsOrigin: "https://${{parameters.appName}}.azurewebsites.net"
  appName: ${{parameters.appName}}
  template: "$(System.DefaultWorkingDirectory)/arm_templates/azuredeploy.json"
  virtualApps: "$(System.DefaultWorkingDirectory)/manifest.json"

You can see that I’ve included a parameters section that takes in appName as a parameter. In the release.yaml file, this parameter is applied as follows:

variables:
  - template: templates/appvars.yaml@templates
    parameters:
      appName: 'app1'

This allows us to cut down on repeated code by using an extendable template for the variables needed for each deployment stage. It is essential to understand how variables and parameters work in Azure pipelines and the different ways they can be expressed in your YAML.

Check out these resources:
YAML Variables Documentation
YAML Parameters Documentation
Developer Community Forum Post

Conclusion

Development operations decisions should be considered during architecture design. In the scenario described above, the architecture I inherited did not foster the most efficient development and deployment workflow; for example, the decision to host sub-applications on a single app service instead of assigning each its own app service. This limited us to deploying the individual applications in series, as you can only run one deployment at a time to a given app service.

Infrastructure as code implemented in the continuous integration and delivery process can be equally as important as implementing it for your existing infrastructure.

Suppose you are working on a project that requires a complicated development lifecycle with multiple teams needing distinct environments. In this case, it becomes increasingly important to formulate a strategy around maintaining the various environments. Environment creation and code deployment should be as automated as possible. Applying configurations across all environments should be an easy task.

Microsoft has recently released a new Office 365 service called SharePoint Syntex. SharePoint Syntex adds Artificial Intelligence (AI) capabilities to SharePoint document libraries and is the first product to be released from Microsoft Project Cortex. It provides out-of-the-box capabilities to build no-code AI models and apply them to document libraries. SharePoint Syntex is now available for Office 365 commercial customers, so you can integrate this service into your Office 365 tenant; there is also a free trial available for one month.

SharePoint contains collections of documents with hidden knowledge inside. SharePoint Syntex helps us extract the information in these documents that is important to us using built-in AI models, and we can use this metadata to process the documents automatically. Since we no longer have to extract document metadata manually, it saves time and money, and there is no need to call external APIs to extract metadata from the documents in document libraries. SharePoint Syntex brings us two no-code AI models: the Document Understanding model and the Form Processing model. Below is an example whiteboard diagram giving a high-level explanation of SharePoint Syntex.

AI model Diagram

Document Understanding Models:

Document understanding models work best with unstructured documents (which have more text) such as contracts, resumes, letters, email messages, and health records. You can create a document understanding model using the SharePoint Content Center site (the model creation interface) and then apply it to SharePoint document libraries in your Office 365 tenant. A document understanding model consists of two parts:

  • Classifier: Identify and classify the documents (Examples: resumes, contracts, letters) uploaded to document libraries.
  • Extractor: Optionally add an extractor to your model to extract keywords from essential documents. For example, you may extract critical metadata from your document such as “Person Name,” “Organization Name,” and “Contract Start Date” and add them as column values in a document library; you can then use this metadata to process your document further automatically. Example: if “Contract Start Date” is later than today, send an email to a group of people.

To train the Classifier and Extractor, documents must have common text that can be identified using phrases or patterns. Example: if we want to create a model explanation for “Resumes,” we can add phrases like “Name,” “Degree,” “Address,” “Programmer,” “Network Engineer,” and “Software Engineer” to the model explanation phrase list, which helps the model identify the document type as “Resume.” Beyond classification and keyword extraction, a document understanding model can also apply a retention label, so the classified documents cannot be deleted from the document library for the specified time period. The diagram below shows the key steps we can perform from the SharePoint Content Center site:

Figure 2 Document Understanding

Form Processing Model

The form processing model works best with structured documents like forms, purchase orders, and invoices. Unlike the document understanding model, a form processing model can be created directly from SharePoint document libraries using Microsoft PowerApps AI Builder and Flow. Using the form processing model, we can extract key-value pairs and table data from structured or semi-structured documents and add them as column values in SharePoint libraries.

The below diagram shows the key steps to create and publish the Form Processing model.

Create a Form Processing Model

SharePoint Library Form Processing

Conclusion

SharePoint Syntex can help organizations automate business processes since it automatically extracts metadata (information) from documents. We can use this metadata to process documents further, using Flow or any other workflow tool. With meaningful metadata available on document libraries, search results improve. It also helps manage compliance risk, since the AI model can apply a retention label to documents so they cannot be deleted from libraries for a specific period. We get all of these features with no-code AI models! This year, Microsoft Project Cortex is also coming up with many new AI features (Topic Cards, Topic Pages, Knowledge Centers) in Office 365, which will help us automatically turn content into knowledge.

References for getting started

Recently, I was working on a SharePoint Framework web part that needs to display large lists and libraries in SharePoint 2016. As many of you may already know, with a list of more than 5,000 items, you may run into a list view threshold error. My requirement was to allow users to view more than 20,000 items. After exploring various options, I decided to create a solution that recursively calls the SharePoint REST service with paging and skip-token query options, limiting the page size per request. Promise objects are used in my solution to perform the asynchronous calls; the items collection handler runs when the value is resolved, and any error is passed down the chain and handled in the desired place without any extra effort.

Code Example


The method getListItem(url: string) is used to perform REST calls against SharePoint. It returns a promise that resolves when the GET request finishes.
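The original code appears as an image; as a rough TypeScript sketch of the idea (assuming an SPFx context with SPHttpClient, and using illustrative names), it might look something like this:

import { SPHttpClient, SPHttpClientResponse } from '@microsoft/sp-http';

// Illustrative wrapper class; in a web part, spHttpClient would come from
// this.context.spHttpClient.
export class ListItemService {
  constructor(private spHttpClient: SPHttpClient) {}

  // Performs a single GET against the SharePoint REST service and returns a
  // promise that resolves with the parsed JSON response (one page of items,
  // plus an odata.nextLink property when more pages remain).
  public getListItem(url: string): Promise<any> {
    return this.spHttpClient
      .get(url, SPHttpClient.configurations.v1)
      .then((response: SPHttpClientResponse) => response.json());
  }
}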

Method used to perform REST

Code example 3

The getListItemRecursive method is very straightforward. It invokes the getListItem method; if the result contains more items than the page size, the list service returns an odata.nextLink property. With the promise, I’m able to read the odata.nextLink value in the then handler after the returned items are resolved. If the odata.nextLink value is not null, the getListItemRecursive method calls itself to retrieve the next set of results by passing the URL value of the odata.nextLink property to the getListItem method. The list service will continue to return a reference to the next set of data in the odata.nextLink property with each response until all of the results have been returned. The items collection handler is called only when each promise is fulfilled.
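Again as a sketch only, continuing the hypothetical ListItemService above (the 'value' array in the response is an assumption about the response shape):

// Recursively pages through the list by following odata.nextLink until no more
// pages remain, then hands the accumulated items to the callback.
public getListItemRecursive(url: string, items: any[], callback: (allItems: any[]) => void): void {
  this.getListItem(url).then((data: any) => {
    const collected = items.concat(data.value || []);
    const nextLink = data['odata.nextLink'];
    if (nextLink) {
      this.getListItemRecursive(nextLink, collected, callback);
    } else {
      callback(collected);
    }
  });
}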

Invoking request getListItem

The function above is invoked to request getListItem and supplies a callback function that will later be invoked with the result.

Conclusion

The example above performs well in a SharePoint Framework web part. The asynchronous operation helps the overall performance and responsiveness of my application, particularly when accessing large datasets.

Gaining experience through new technologies

I’m here to share my internship experience at AIS to hopefully give a good idea of what to expect if you, too, are considering entering the program. Your experience will likely be a bit different as I started during the COVID-19 pandemic. Once I accepted the position, I was shipped a laptop, and it was off to the races.

By the second day, I had a project assigned: implementing Jupyter Notebooks into Scale. This project helped me become more comfortable with popular tools like Docker, Jupyter, and Python, as well as projects AIS currently works on, such as SEED and Scale. Learning these technologies was very helpful once I started my full-time position.

The second project I was assigned was updating the front end of the Ohio demo site. I had a lot of freedom over the site’s design, and I received some excellent experience in responsive design using Material Design Lite and basic HTML/CSS. Overall I learned a lot and expanded my technology repertoire.

During my internship, the projects I worked on were mostly a solo effort, but I did have help along the way. I was assigned a first-week “buddy” and an intern supervisor, and had someone to check in with every week. If I needed help from other people at AIS, that was not an issue either. Everyone was happy to help or pointed me toward someone else who would be more helpful.

The internship structure was intended to be a three-month period, in which I set three goals to complete and used an Azure DevOps board to track these goals. At the end of each month, I would write up a report of what I did that month, what struggles and successes I had, and what I was planning on achieving the next month. At the end of the internship, I was asked to present the Jupyter Notebook Scale integration and write up an internship review. The review was an excellent addition as I gave my thoughts about the experience and got feedback. The experience was great, and for anyone thinking about interning at AIS, I highly recommend it. The team here is sharp and approachable, and the projects are engaging.

If you are passionate about emerging technology and creating innovative solutions, view our open careers and apply today! We look forward to hearing from you.

This past Thanksgiving marked the first anniversary of going live with a SharePoint environment that AIS migrated from on-prem to Microsoft Azure IL5. Since then, our client has experienced 100% uptime during business hours and reduced deployment timelines, from weeks to minutes.

Challenge: Improve Performance, Speed Up Deployments

AIS set out to help a DoD agency that had experienced ongoing service issues with their existing provider while operating their on-prem SharePoint farm. During the year before the migration, the DoD customer experienced three service outages during business hours, which halted the ability to perform mission-critical activities. Additionally, their existing enterprise service provider required a lead-in time of 1-2 weeks to deploy any code changes or new capabilities into the environment. AIS was tasked with building a cloud solution to maximize uptime and accommodate rapid deployments to better serve the fast tempo required by our DoD customer.

Solution: Hybrid IaaS/PaaS in Azure IL5

To provide a solution tailored to maximize uptime and accommodate rapid deployments, AIS architected a DoD first: a hybrid IaaS/PaaS environment in Azure Government IL5 that utilized the DISA Cloud Access Point to integrate with NIPRNet. We leveraged a suite of technologies to employ DevSecOps methodologies, allowing the solution to remain scalable while adhering to industry best practices. By implementing an automated code scanning solution, we reduced deployment lead-in time from weeks to minutes. Our infrastructure as code (IaC) development also drastically reduced the time required to build a new environment from several days to under one hour.

Looking Ahead: Cost-Sharing, Scale Across the DoD

AIS has worked with our DoD customers to offer these cloud services to neighboring agencies to benefit from cost-sharing. In doing so, we have passed on lessons learned and processes that we have developed to share our success across the DoD enterprise. As we grow, we continue to integrate evolving best practices to remain at the DoD DevSecOps initiative’s forefront.

AIS FIRST TO DEPLOY AZURE IL6 ENVIRONMENT AND ACHIEVE ATO!

An AIS client recently started developing a design system of components commonly used throughout multiple Angular 9 applications. To that end, we decided to take a component-driven approach using Storybook JS, version 6.1. We developed individual base components that would be used in more complex components, and then we would use a combination of those complex components and base components in web pages.

We quickly discovered that though Storybook supports Angular and other frameworks, it is primarily documented for React development. We also found some other techniques that are useful for development and documentation.

Here’s an overview of what we’ll cover in this piece:

  • Development
    • Rebuilding and Watch Mode
    • Router
    • Testing Components that Rely on the Output of Other Components
    • Mocking Services
  • Documentation
    • JSDoc Tags
    • Quotation Marks in JSDoc
    • MDX or TS Documentation?
    • documentation.json

Development

Rebuilding and Watch Mode

When developing with Storybook locally with ‘npm run storybook,’ Storybook has a watch mode that monitors an application’s source code and automatically rebuilds the application, similar to Angular’s watch mode when using ‘ng serve.’

There are a few minor caveats to Storybook’s watch mode, however. If you make changes to any of Storybook’s configuration files in the ‘.storybook’ directory, these files are not rebuilt by the Storybook process. If changes are made to the ‘.storybook’ directory, halt the ‘npm run storybook’ process, and restart it. Similarly, if you add new assets such as images or fonts to your application’s assets folder, Storybook will not package these new assets until you restart the process.

Router

If you are developing a project that uses the Angular router and you attempt to import the AppModule into the moduleMetadata of one of your stories, you will see the following error:
Unhandled Promise rejection: No base HREF set. Please provide a value for the APP_BASE_HREF token or add a base element to the document. ; Zone: ; Task: Promise.then ; Value: Error: No base HREF set. Please provide a value for the APP_BASE_HREF token or add a base element to the document.

The easiest way to avoid this is not to import any module that defines any routes inside it or any of its imported modules. Instead, directly import the components that your application relies on in the story, or specify another module to import that contains only your components and not your application:

export default {
  title: 'My Story Name',
  component: MyStoryComponent,
  argTypes: {

  },
  decorators: [
    moduleMetadata({
      declarations: [
        MyStoryComponent,
        MyDependentComponent
      ],
    }),
  ]
};

If you have components that use the Angular router in their tests, look at the “Mocking Services” section of this post.

Testing Components that Rely on the Output of Other Components

One component we were developing in our application was a Password Strength Meter, which changes its styles and colors based on the complexity of the password provided to the meter. It took this password as an input property on the component. At first, we allowed the Storybook website user to modify the input property directly in the component’s property list in the Storybook Controls addon. However, we decided this was an insufficient test.

We discovered that it was effortless to define a new Angular component in the .stories.ts file for the Password Strength Meter. You define it the same way you’d define any other Angular component, using the @Component decorator and implementing ngOnInit. Since it is not exported in an Angular module, this component will not be included in the deployment. We defined the component’s template HTML inline, including both a text input field and our Password Strength Meter, and handled any ngModel events in the definition of the component.
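As a rough sketch of what such an inline test component might look like (the selectors, template, and ngOnInit body here are illustrative assumptions, not the actual client code):

import { Component, OnInit } from '@angular/core';

// Defined directly inside the .stories.ts file; because it is never exported
// from an Angular module, it is not included in the application build.
@Component({
  selector: 'app-password-strength-test',
  template: `
    <input type="text" [(ngModel)]="password" placeholder="Type a password" />
    <app-password-strength [password]="password"></app-password-strength>
  `,
})
export class PasswordStrengthTestComponent implements OnInit {
  password = '';

  ngOnInit(): void {
    // seed the meter with an empty password when the story first renders
    this.password = '';
  }
}

// Note: [(ngModel)] requires FormsModule to be added to the story's moduleMetadata imports.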

Within the same .stories.ts file, you can define separate templates for individual stories. In that way, we can have both a “No text field” story that tests the Password Strength Meter without a text field and an “Integration” story that tests the password strength meter integrated with a text field:

const NoTextFieldTemplate = (args: PasswordStrengthComponent) => ({
  props: args,
  component: PasswordStrengthComponent,
});
export const NoTextField = NoTextFieldTemplate.bind({});
NoTextField.args = {
  ...actionsData
};
const IntegrationTemplate = (args: PasswordStrengthTestComponent) => ({
  props: args,
  component: PasswordStrengthTestComponent,
});

export const IntegrationWithTextField = IntegrationTemplate.bind({});
IntegrationWithTextField.args = {
  ...actionsData
};

Mocking Services

In our application development, we ran into situations where we needed to develop components that used dependency injection to obtain both built-in Angular services, like the router, and our own services.
Since we define an inline Angular module in our story definition using ‘moduleMetadata’, it’s simple enough to add a “providers” line and use the “useValue” property.
For example, let’s say we have a service called SessionService that stores the user’s local session data, including their username. That service has a method named “getUsername()” that a component relies on. Also, this component utilizes the router. Then, inside our “.stories.ts” file, we can define our inline module:

export default {
  title: 'Header',
  component: BannerComponent,
  argTypes: {

  },
  decorators: [
    moduleMetadata({
      declarations: [
        BannerComponent
      ],
      imports: [
        RouterModule.forRoot([])
      ],
      providers: [
        {
          provide: SessionService, useValue: {
           getUsername: () => {
             console.log(`SessionService.getUsername called`);
             return `user1`;
           },
          }
        },
        {provide: APP_BASE_HREF, useValue: '/'}
      ]
    }),

  ]
};

In this example, we can see a mock of the SessionService that provides a fixed value to getUsername and the APP_BASE_HREF value that the Angular router relies on.

Documentation

One of the most significant benefits of Storybook is the amount of documentation that can be generated. Storybook uses Compodoc to generate its documentation, but you do have some options for overriding the default documentation generated. Storybook also offers many additional properties, as explained in their documentation.

Compodoc uses JSDoc to allow developers to design documentation for components easily. However, there are some limitations, as well as some other options for documentation.

JSDoc Tags

Compodoc only supports a small number of JSDoc tags, as documented on their website: @returns, @ignore, @param, @link, and @example. By far, the most useful of these in Storybook is @ignore. The @ignore tag will remove the property from the Storybook Controls panel, which helps hide properties and functions that external components should not use. Storybook infers many of these other properties, and we can define code examples in either the .stories.ts file or the .stories.mdx file.
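For instance, tagging a member with @ignore hides it from the Controls panel; the component below is an illustrative sketch, not taken from the actual codebase:

import { Component, Input } from '@angular/core';

@Component({ selector: 'app-example', template: '' })
export class ExampleComponent {
  /** The label shown next to the control. */
  @Input() label = '';

  /**
   * Internal bookkeeping value; hidden from the Storybook Controls panel.
   * @ignore
   */
  internalCounter = 0;
}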

Quotation Marks in JSDoc

One of the first things we found when writing JSDoc for Storybook is that either Compodoc or Storybook does not escape quotation marks properly for displaying in web browsers. We had to use the HTML escapes for apostrophes and quotes, &apos; and &quot;, inside our JSDoc documentation to get around this issue.
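For example, a JSDoc comment written this way (a contrived illustration) renders correctly in the docs page:

import { Component, Input } from '@angular/core';

// The HTML escapes inside the JSDoc render as ' and " in the browser.
@Component({ selector: 'app-greeting', template: '<span>{{ displayName }}</span>' })
export class GreetingComponent {
  /**
   * The user&apos;s display name, shown as &quot;Guest&quot; when not signed in.
   */
  @Input() displayName = 'Guest';
}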

MDX or TS Documentation?

MDX is a file format similar to Markdown provided as an option for Storybook documentation. We found that MDX is slightly harder to use for Angular developers, as it requires learning a new syntax based on React development. Despite this drawback, MDX gives the ability to provide complex Markdown formatting to the pages. When developing in Storybook, consider using MDX if you need an involved documentation page that does not fit the default Storybook documentation formatting style.

You can mix and match MDX with Typescript and define your stories in a TS file and import them in an MDX file if you need complex story definitions alongside complex markdown.
Often, TS documentation will be enough; we can use the various properties Storybook provides alongside JSDoc. However, for those times that we need a bit more customization, MDX files can provide what we need.

documentation.json

When Compodoc builds, it creates a large file called documentation.json in the root directory of the application. This file does not need to be committed to git and can safely be included in a .gitignore file.

Conclusion

Storybook is a powerful tool for developing and documenting components in Angular. Though its documentation is primarily written for React, its capabilities are equally strong with Angular and will help you design and document reusable components for a large application.

2020 is a year we will never forget. A year fraught with challenges of all shapes and sizes. But it was also a year full of learnings, gratitude, and reflection. We’re looking back at the top blogs from the past year from team members committed to our mission and embracing our corporate value of continued learning. We are lucky to know, learn from, and grow alongside them.

Top 10 AIS Blogs from 2020:

  1. ASP.NET Core: Telemetry and Application Insights, by Kip Streithorst
  2. Azure Government: DISA CAP Connection Process, by Yared Tsegaye
  3. Using an Azure Logic App to Query Data in an On-Premises SQL Server Database, by Patrick O’Gorman
  4. Application Gateway Ingress Controller for Azure Kubernetes Service (AKS), by Johnny Abell
  5. 10 Tips for Easy and Engaging Meetings with Microsoft Teams, by Tacy Holliday, Chris Miller, and Guy Schmidt
  6. Build a Leave Request App with Power Apps, by Andrea Pinillos
  7. How to Create an ASP.NET Core and API SDK Development with Refit, by Sean McGettrick
  8. AIS is the Microsoft Partner of the Year for Power Apps and Power Automate
  9. Azure VM Disk Encryption using Deployment Scripts in ARM Templates, by Selvi Kalaiselvi
  10. Azure DevOps Services for Government: Access Control, by Jonathan Eckman

Happy New Year to all our readers and bloggers! Be sure to follow AIS on Twitter, Facebook, or LinkedIn so you’ll never miss an insight. Perhaps you’ll even consider joining our team in 2021?

Do your users want reports in SharePoint? Yes! They crave reports and charts. Regardless of which version of SharePoint they are using, getting started now to build their data visualizations in Power BI will position the reports for seamless migration to future SharePoint versions. These are the necessary steps to take to add a simple report in a SharePoint modern page.

Important note: To embed Power BI reports in SharePoint Online, a Power BI Pro license is required.

The Process Flow

This is the flow:

The Process Flow Figure 1

Figure 1: The Process Flow

  1. Create your data in SharePoint; say a list or library.
  2. Start Power BI Desktop to connect to SharePoint to read the data and transform it and create visualizations.
  3. Publish to Power BI Online, where a link becomes available to paste into the Power BI webpart available in SharePoint Modern pages.

We’ve gone full circle! Let’s look at the steps.

SharePoint Online Data

For this example, we will read data from a simple custom list. I added the list to my SPO Dev Tenant site named Vacation Planner. Since all our “vacay” are now “staycay,” I decided to make a board game list. Along with the default Title column that you get with any new list, I added three more. Each is a number column. Then I added games to the list; I listed every game I could think of. For each one, I entered somewhat random numbers for Difficulty, Minimum Age, and Popularity, although I am pretty sure Candy Land is for 4-year-olds.

SharePoint Online Data Figure 2

Figure 2: Board games list

To create the list, I was logged into SPO as a fictitious test user I named Gwen Lim.

Build the Report

Install the Power BI Desktop application to build the report. It’s free: download it here.

On first use, you will be prompted to sign in. If the login type option appears, choose “Organizational” and log in with a Windows account. I logged in with the fictional Gwen Lim’s account. In the app, either from the startup splash screen or the menu, choose “Get Data.”

Select Data Source Figure 4

Figure 3: Select a data source

From the Common data sources dropdown, select “More…” at the bottom. Then click the “Online Services” option, and you should see “SharePoint Online List” on the right. Select that and then click “Connect” at the bottom.

Choose SharePoint online Figure 5

Figure 4: We will choose SharePoint online list

In the SharePoint Online Lists dialog, paste the address URL of the SharePoint site that contains your list. You can check the 2.0 (Beta) radio button (see the figure below) to enable the app to open the default view of your list, or leave it at 1.0 if you prefer.

SharePoint Site URL Figure 6

Figure 5: Enter the SharePoint site URL

A Navigator window appears, listing all of the lists available in the SharePoint site in the left column with checkboxes. Check BoardGames to see a preview of the data on the right side of the pane, then click the Load button.

Select the List Figure 7

Figure 6: Select the List

You can start building the report. The fields of the data display on the right side. Having chosen a specific, limited column view as default for the list, along with selecting the 2.0 radio button, you will see only a few fields (aka columns) on the right, which is easy to work with.

BoardGames List App

Figure 7: The BoardGames list fields appear

Ignore the fields for a moment while you choose a visualization. Select the doughnut. Now, it’s time to apply fields to the doughnut. Drag Title into the Legend box under Visualizations; a legend appears beside the doughnut chart. Drag Popularity into the Values box, and your doughnut comes to life with color.

Pick a visualization Figure 9

Figure 8: Pick a visualization chart and add fields

When you hover over the chart, tooltips appear with data for each game. Age level, Difficulty, and Popularity values have been imported as decimal values, which would be more readable as whole numbers. To alter this, and to edit column heading text, click the ribbon’s Transform Data button.

Modify the Data Figure 10

Figure 9: Modify the data

To change a column value from a decimal to a whole number, click the column title to select it and then click the ribbon’s Data Type button. Select Whole Number, as shown in the figure below. Double-click the column heading to rename the column.

Changing field titles and data types

Figure 10: Changing field titles and data types

Click the Close & Apply button on the left of the ribbon to apply the changes to the visualization. Now when you hover your cursor over a section, the renamed Minimum Age field appears, and both values display as whole numbers.

Ready to Publish Figure 11

Figure 11: Improved tooltips

Display in SharePoint

To display the report in SharePoint, click the Publish button in the ribbon on the right side. You will be prompted to save your report in .pbix format.

Ready to publish report figure

Figure 12: Ready to publish!

Save anywhere you want to keep it, and then the Publish to Power BI dialog appears. Varied workspaces can be configured, but initially, you only have “My Workspace” as an option, which is fine. Select it and then click “Select.”

Publishing to Power BI

Figure 13: Successful publish to Power BI Online

When you see the Success! dialog, click on the link to open the .pbix in Power BI online to view your report. In the Share menu above the report, drop down the menu options, and hover over Embed report. Here you want to see an option for SharePoint online.

Link to use in SharePoint page Figure 14

Figure 14: Get a link to use in a SharePoint page

This will be missing until you upgrade to a Power BI Pro license. This is not free, but the trial is for 60 days. Once you have that option in the menu and select it, you are rewarded with the Embed link to use in SharePoint.

Embed link for SharePoint

Figure 15: Click to highlight and then copy

Click that link to highlight it and copy. Now head over to your SharePoint site and create a page.

Locate built-in Power BI

Figure 16: Locate the built-in Power BI webpart

Click the webpart plus sign, and in the search box, type “power” to filter the results. The Power BI webpart will appear. Click on it, and the webpart will be inserted into your page. You will see a green button for Add report; click it to open the properties panel on the right. Paste in the embed link you got from Power BI online.

Apply the embed link

Figure 17: Apply the embed link

Click away from that textbox and your report will appear on the left.

Report Displayed Correctly

Figure 18: Report successfully displayed in SharePoint Online

Conclusion

This is a no-code solution and a simple demo. However, the depth of tooling provided by Power BI to enable developers and business data experts to transform and visualize organizational data is immense. The speed and ease with which we can integrate data reporting into SharePoint modern pages will be welcome to customers as they migrate to current SharePoint versions.

Links

Embed a report web part in SharePoint Online – Power BI | Microsoft Docs

Microservices are a typical development pattern in the modern era of software development. Unfortunately, it can be challenging to configure and test sprawling applications that utilize dozens of microservices.

For our example, let’s say that you have several teams that are developing the following microservices for a simplified online store application:

  • An authentication service, which allows users to log in and other services to verify a user’s authentication token. This service will need to talk to the database.
  • A product service, which retrieves information about products from the database.
  • A cart service, which allows a user to add or remove products from their cart. This service will need to communicate with the authentication service to verify the user’s credentials.

We will assume that only a single database is used for this application, which contains user data, product data, and cart data. This could also be stored across multiple databases, one for each microservice. We will also need a website, which provides the frontend for the three microservices. Though small, we can already see where configuring an environment for local testing can become burdensome. These services may require different frameworks or runtime environments and a locally deployed database. These services will also need to communicate with each other and the database. We will explore how we can use Docker and Docker Compose to aid in developing this simple application.

Why Use Containers?

Containers help ensure that an application runs as expected on any machine. By using containers, we can help solve the familiar “works on my machine” problem. We isolate the microservices into their environment and can test them independent of local environment configuration. This gives us confidence that our application will behave as expected both for new developers and production deployments.

For this project, we will use Docker, a popular container engine. This gives us access to Docker Compose, a tool that we can use to orchestrate multiple containerized applications together.

Why Use Docker Compose?

Though we can run each Docker container individually, that quickly becomes tedious and difficult to manage. Docker Compose allows us to define all of the containers we need to run inside a single YAML file, along with any environment variables we need to run the application. Docker Compose also allows us to communicate easily between multiple containers. Using Docker Compose, we can easily configure a local runtime environment that includes our database, our microservices, and our website. We can quickly rebuild any databases, stand up an entire testing environment, and just as quickly tear it down.

It’s important to note that Docker Compose is not meant for production environments. Unlike other tools such as Docker Swarm or Kubernetes, Docker Compose does not provide scaling, monitoring, load balancing, or other management capabilities.

Prerequisites

Setup

We will first set up a simple microservice-based application with the following structure:

The website will communicate with the three microservices, the microservices will communicate with the database, and one microservice (cart) will communicate with another (auth). We could almost as easily have separate databases for each microservice; similarly, we could have the Cart service also talk to the Product service to retrieve product data.

For this application, we will use the following technologies:

  • Database: MySQL
  • API applications: Node.js
  • Website: Angular

These technologies are not required. For example, the database could just as easily be another database type, such as PostgreSQL, MongoDB, or CosmosDB.

Directory Configuration

The first thing we should do is create a project folder. For this application, we’ll call it store. Then, we will create folders for each microservice: auth-service, product-service, and cart-service. We’ll also create one for the website and one for the database called db.

The folder structure should look as follows:

  • store
    • auth-service
    • cart-service
    • db
    • product-service
    • website

Database Setup

To get our application started, we will first configure our database. I will be using the MySQL docker container, version 5. To do this, we should first create our dockerfile. Create the file store/db/Dockerfile and add the following:

FROM mysql:5
ADD . /docker-entrypoint-initdb.d

This will add all files from the store/db folder to the container’s docker-entrypoint-initdb.d directory.

You can then define each of our tables, and some test data, in a separate file so that your directory looks as follows:

  • db
    • 001-Products.sql
    • 002-AppUsers.sql
    • 003-CartItems.sql
    • 999-testdata.sql
    • Dockerfile

When the MySQL Docker container starts, it scans the docker-entrypoint-initdb.d directory and executes the *.sql scripts it finds there in alphabetical order. This will initialize our database easily and quickly.

You could also define a separate database for each of our services if you follow a strict microservice pattern where each service has its own database. You can do this by creating multiple db folders and putting a Dockerfile in each one.

Note: Docker images exist for various other databases as well; MySQL is not required. You can view documentation for a few of these options on Docker Hub.

API Application Setup

In each of the service folders, we will be creating a simple Node.js application.
In the store/auth-service directory, run the command npm init. Leave all of the default values in the initialization script. Install the following dependencies:

  • npm i express
  • npm i cors
  • npm i mysql@^2.18

Create the file, store/auth-service/index.js. The easiest way to pass arguments to applications in Docker is using environment variables, so we will pass the location, user, password, and name of the database through these environment variables:

var express = require('express');
const mysql = require('mysql');
const cors = require('cors');
const app = express();
const port = 80;
const db_location = process.env.DB_LOCATION;
const db_user = process.env.DB_USER;
const db_password = process.env.DB_PASSWORD;
const db_name  = process.env.DB_NAME;


app.use(express.json());
app.use(cors());

...

As this is not an Express or authentication tutorial, I will not provide the authentication service’s implementation details here.
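That said, purely as an illustration of how the environment variables above might be consumed (continuing the index.js snippet; this is a sketch, not the actual service code):

// Illustrative only: wire the environment variables into a MySQL connection.
const connection = mysql.createConnection({
  host: db_location,
  user: db_user,
  password: db_password,
  database: db_name,
});

connection.connect();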

From here, we can continue to develop our authentication service. We can then create a Dockerfile for this application at store/auth-service/Dockerfile. This Dockerfile will use the Node image for Docker.

FROM node:12
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . .
CMD [ "node", "index.js" ]

We will also create a .dockerignore file at store/auth-service/.dockerignore with the following, so that our local environment’s node_modules folder and log are not copied to the container:

node_modules
npm-debug.log

We will follow the same steps for the store/product-service folder, and for the store/cart-service folder.

Since the cart-service will communicate with the auth-service, I have also installed the axios library by running npm i axios in store/cart-service, and have included it in store/cart-service/index.js with the line const axios = require('axios');. Also, the cart-service will need the URL of the auth-service, so I have added environment variable AUTH_URL.

With this, store/cart-service/index.js begins as follows:

var express = require('express');
const mysql = require('mysql');
const axios = require('axios');
const cors = require('cors');
const app = express();
const port = 80;
const db_location = process.env.DB_LOCATION;
const db_user = process.env.DB_USER;
const db_password = process.env.DB_PASSWORD;
const db_name = process.env.DB_NAME;
const auth_service_url = process.env.AUTH_URL;

app.use(express.json());
app.use(cors());

Note: As with MySQL, Node.js is not a requirement. Docker images exist for various other platforms as well.

Website Setup

We will use Angular for our website, so let’s first initialize our Angular application by running the command npx ng new in the store/website directory. Use all the default values, and then open the file store/website/src/environments/environment.ts.

At this point, we determine which ports we will run our services on. Replace the content of the file with the following:

export const environment = {
  production: false,
  auth_url: "http://localhost:30500",
  products_url: "http://localhost:30501",
  cart_url: "http://localhost:30502"
};

This will give us access to the URL of each of the microservices that we have defined previously. We can then create several angular services that read from this environment file and use the built-in Angular HTTP service to access our web services.

(Note: There are numerous ways to supply environment variables such as this to a web application. This is simply one of the easiest ways to get us up and running as quickly as possible.)
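As a rough sketch of one of those Angular services (the service name and /products endpoint here are illustrative assumptions):

import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

import { environment } from '../environments/environment';

// Hypothetical service: reads the product-service URL from the environment file
// and calls it with Angular's built-in HTTP client (HttpClientModule must be
// imported in the application module). The /products endpoint is illustrative.
@Injectable({ providedIn: 'root' })
export class ProductService {
  constructor(private http: HttpClient) {}

  getProducts(): Observable<any[]> {
    return this.http.get<any[]>(`${environment.products_url}/products`);
  }
}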

We can then create a Dockerfile, similar to the one we made for our microservices, in store/website/Dockerfile:

FROM node:12
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . /app
CMD [ "npx", "ng", "serve","--host","0.0.0.0","--poll","2000"]

This will host an Angular development application inside our container that will continuously monitor for changes and be exposed on port 4200, the default Angular development port.

Testing with Docker Compose

Now that our simple database, microservices, and angular application have been developed, we can set up Docker Compose to run all five of these Dockerfiles at once and establish communication between them. To do this, we need to create a docker-compose.yml file in the store directory.

A Starting Point

Our first goal with Docker Compose is to get all five containers running. This is a relatively simple task with docker-compose.

Inside store/docker-compose.yml, use the following code:

version: "3.8"
services:
  sql:
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1
      - MYSQL_USER=db_user
      - MYSQL_DATABASE=db
      - MYSQL_PASSWORD=pwd
  auth:
    build: ./auth-service
  product:
    build: ./product-service
  cart:
    build: ./cart-service
  website:
    build: ./website
    ports:
      - '4200:4200'
    volumes:
      - ./website/src:/app/src

What is this doing?
First, we are defining all of our services using the services configuration option inside the YAML file. From there, we give names to each of our five services: sql, auth, product, cart, and website. We point each of these services at our Dockerfiles using build and providing the path to the folder.
By default, no ports are exposed outside of Docker, so we need to let Docker Compose know to map the website’s port 4200 to the local machine’s port 4200 using the ports configuration option.

The volumes configuration option specifies a mapping of a local folder to a docker folder. In this case, on the website, we supply the folder ./website/src as a local folder and /app/src as the Docker container folder. This allows us to make changes to the Angular project’s source directory while the Angular server in the container is monitoring for changes, allowing easier development.

The SQL database is configured to use environment variables, as defined by its documentation. We define those environment variables here to give us a consistent (but insecure) database, username, and password.

The restart: always argument will restart the MySQL container if it shuts down unless the container is manually terminated. We can supply additional arguments to a command-line application or override the command with the command configuration option.

Communicating Between Containers

We’re closer now, but we still need to configure docker-compose.yml to allow communication between containers. Earlier on, we defined some environment variables: DB_LOCATION, DB_USER, DB_PASSWORD, and DB_NAME, plus AUTH_URL for the cart-service.

When we set up containers with Docker Compose, the hostname of each container within the Docker Compose network is the service name we provide, so our five hostnames are sql, auth, product, cart, and website. These allow us to make MySQL calls to sql with an appropriate library and authentication calls to auth with a simple POST. So, using the same format as we did with the sql service, we can provide environment variables to auth, product, and cart:

version: "3.8"
services:
  sql:
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1
      - MYSQL_USER=db_user
      - MYSQL_DATABASE=db
      - MYSQL_PASSWORD=pwd
  auth:
    build: ./auth-service
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  product:
    build: ./product-service
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  cart:
    build: ./cart-service
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
      - AUTH_URL=http://auth
  website:
    build: ./website
    ports:
      - '4200:4200'
    volumes:
      - ./website/src:/app/src

Notice that the URL of AUTH_URL in the cart service is simply http://auth. This is enough to access the other container from the cart-service application!
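To illustrate (the /verify endpoint and payload are hypothetical), a call from the cart-service to the auth-service might look like this:

// Inside a cart-service request handler; auth_service_url resolves to http://auth
// within the Docker Compose network. The endpoint and payload are illustrative.
axios.post(`${auth_service_url}/verify`, { token: req.headers.authorization })
  .then(() => {
    // token is valid: continue with the cart operation
  })
  .catch(() => {
    // token is invalid or the auth service is unreachable
  });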

Communicating with the Browser

We could try to do the same thing with the Angular website, but we wouldn’t be successful. Since a website runs in the browser and not within the Docker environment, our browser would not be aware of the location of http://auth, http://product, and http://cart.

We will have to expose the ports of our auth-service, product-service, and cart-service, similar to how we expose our website’s ports. Recall that we hosted our Express applications on port 80. We will map those ports to ports 30500, 30501, and 30502, as we configured in the Angular application’s local environment variables:

version: "3.8"
services:
  sql:
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1
      - MYSQL_USER=db_user
      - MYSQL_DATABASE=db
      - MYSQL_PASSWORD=pwd
  auth:
    build: ./auth-service
    ports:
      - '30500:80'
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  product:
    build: ./product-service
    ports:
      - '30501:80'
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  cart:
    build: ./cart-service
    ports:
      - '30502:80'
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
      - AUTH_URL=http://auth
  website:
    build: ./website
    ports:
      - '4200:4200'
    volumes:
      - ./website/src:/app/src

This is our final docker-compose.yml file. It performs the following tasks:

  • Create five containers, one for our database, one for each of our services, and one for our website
  • Establish network communication between the containers
  • Expose ports for containers that need to communicate with the browser
  • Set up source mapping to allow for automatic change polling in the Angular server

Running Docker Compose

Now that we have our docker-compose.yml file, running Docker Compose is very simple. In the store directory, run the command docker-compose up --detach.

This command will create or recreate each of the containers defined in the docker-compose.yml file and immediately run them. You can then access the Angular application at http://localhost:4200, or access any of the three microservices with POST commands in a tool like cURL or Postman at http://localhost:30500, http://localhost:30501, or http://localhost:30502.

When we want our containers to stop, run the command docker-compose down -v. This will both stop and remove all of the containers from your Docker runtime environment and tear down the database, so it has to be recreated. If you do not want the database recreated, omit the -v option; you can run the command docker-compose rm -v to remove the database volume later.

There are many other options available to Docker Compose. For more information, check the docker-compose CLI reference and the Compose file reference on the official Docker website.

Conclusion

Though not intended for production environments, Docker Compose provides a valuable tool for the development of microservices. With appropriate use of Docker and Docker Compose, we can easily port entire development environments to other developers’ machines with minimal local environment configuration, including a local database, microservices, and a website. Once the environment has been configured, a new developer can run a single command and start developing and testing the application immediately.