Recently, I was involved in a project with the following requirements: a report needed to be built from data in several sources, including Azure Cosmos DB and Azure SQL; the report had to be stored in Azure; and the data was subject to a retention policy that prevented it from being deleted. For this use case, we decided to use Azure Data Factory to read the data, transform it, and load it into a CSV file within a blob container in an Azure Storage Account. Then, to prevent the deletion of data, we would create a retention policy on the blob container to enable immutable storage, giving us write-once, read-many capabilities.

Azure Cosmos Data Flow

We found that we needed a Data Flow within Azure Data Factory to perform logic such as joining across our data sources. Within our data flow, we created an output sink to a Delimited Text dataset backed by an Azure Blob Storage Linked Service.

Immutable storage policies, which come in two forms, retention policies and legal holds, can be enabled on a storage account to enforce write-once, read-many behavior: new documents can still be added to the storage account, but existing documents cannot be modified or deleted. In the case of a time-based retention policy, individual blobs cannot be modified or deleted until the retention period has passed for a given blob. In the case of a legal hold, no blobs can be modified or deleted until the legal hold is removed from the blob container. Legal holds can be used to fulfill various legal requirements, including keeping data under active investigation from being modified. You can read more about immutable storage in the official Microsoft documentation for Immutable storage for blobs.
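
To make the write-once, read-many behavior concrete, here is a minimal sketch (not taken from the project) using the @azure/storage-blob SDK; the connection string variable and blob names are assumptions. Creating a new blob in a container with an immutability policy succeeds, but deleting or overwriting an existing blob fails while the policy is in effect:

import { BlobServiceClient } from "@azure/storage-blob";

async function demonstrateWormBehavior() {
  // Assumes AZURE_STORAGE_CONNECTION_STRING points at the storage account.
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );
  const container = service.getContainerClient("output");
  const blob = container.getBlockBlobClient("file1.csv");

  // Writing a brand-new blob is allowed under an immutable storage policy.
  const content = "id,name\n1,example";
  await blob.upload(content, content.length);

  // Deleting (or modifying) an existing blob is rejected while a retention
  // period or legal hold is in effect; this is the same restriction the
  // Data Factory job runs into below.
  try {
    await blob.delete();
  } catch (err) {
    console.log("Delete rejected:", (err as Error).message);
  }
}

demonstrateWormBehavior();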

Our project had legal requirements for retaining these created documents, so we found immutable storage policies to be a good fit. When we enabled Immutable Storage, either through a retention policy or a legal hold on the Blob Container, we encountered one of two errors:

Job failed due to reason: Spark job failed in one of the cluster nodes while writing data in one partition to sink, with following error message: This operation is not permitted as the blob is immutable due to a policy.

OR

Job failed due to reason: Spark job failed in one of the cluster nodes while writing data in one partition to sink, with the following error message: This operation is not permitted as the blob is immutable due to one or more legal holds.

Here, we will examine why these errors occur and how to resolve them.

Setup

To simplify our example, I will create a simple Azure Data Factory pipeline that uses a data flow to read from an Azure SQL server, perform a Select operation on that data, and then output a CSV file into an Azure Storage blob container.

Data Flow pipeline

I created an Azure SQL server to hold our source data. This server has a database named testdb, and a table named MOCK_DATA with a few columns of varying data types. I then inserted some mock data into the table.


I then created an Azure Storage Account to hold our final data, with a blob container named “output.” This is where I will store the final CSV files output by the Azure Data Factory.

Azure Storage Account

To make these blobs immutable, I added a legal hold to the output container:

Make blobs immutable

I then created a new Azure Data Factory:

Create and Open Azure Data Factory

Opening Azure Data Factory Studio, I created the following resources. First, a new Linked Service associated with my Azure SQL Server:

Create New Linked Service associated with Azure SQL Server

A second Linked Service, associated with Azure Blob Storage:

Associate Linked Service with Azure Blob Storage

An Azure SQL Database Dataset, with the linked service associated to my SQL Database and the table source associated with my mock data table:

Azure SQL Dataset

An Azure Blob Storage Delimited Text Dataset, associated with my Blob Storage Linked Service, and with two parameters, Container and Blob, that are associated with the File path:

Azure Blob Storage Delimited Text Dataset

A Dataflow, which has the following configuration:

  • A parameter named filename
  • A Source associated with the mock data table dataset, with all default options.
  • A Select activity to simulate processing the data to select specific columns
  • A Sink activity with the following settings:
    • Dataset: The delimited text dataset
    • Settings tab:
      • File name option: Output to a single file
      • File name: $filename
      • Optimize: Single partition

Data Flow Activity and Configuration

A Pipeline with a Data Flow activity, associated with the Data Flow that was just built, with the following parameters:

  • Sink parameters:
    • Container: output
    • Blob: A dynamic content value set to an empty string, ''
  • Other parameters:
    • filename: Data flow expression, “file1.csv”

Main Data Flow Parameters

When I run this pipeline, I get the following error:

Job failed due to reason: Spark job failed in one of the cluster nodes while writing data in one partition to sink, with the following error message: This operation is not permitted as the blob is immutable due to one or more legal holds.

Solution

Why do we get an error when running this pipeline? When the Spark data cluster associated with the Data Flow activity runs, it creates several temporary files within the Blob Container. It then attempts to append to these files and delete them after processing.

Error when running pipeline

This violates the write-once, read-many restrictions on the storage blob container, so the Data Factory job fails.

To resolve this issue, I will need to create a temporary location for these files to be built and then copy that data from one storage account to another. First, I will create a new blob storage container with no retention restrictions, which I will call “temp”:

Temporary Blob Storage Container

Then, to clean up the temporary data, I will enable lifecycle management on the storage account. I created a new lifecycle management rule to delete old files with the following settings:

  • Rule scope: Limit blobs with filters
  • Blob type: Block blobs
  • Blob subtype: Base blobs
  • Base blobs rules:
    • Add a rule: If base blobs haven’t been modified in 1 day, then delete the blobs
  • Filter set:
    • Blob prefix: temp/*.csv

Base Blobs Subtype

Create the Filter Set

Next, back in Azure Data Factory, in my pipeline, I changed the dataflow sink parameters to use the temp blob container rather than the output blob container:

Change parameters in dataflow pipeline

The Copy Data activity does not work the same way as a Data Flow sink and does not create temporary files in its process. I created a new Copy Data activity in the pipeline and connected the Data Flow activity's success output to it:

Create New Data Activity in Pipeline

For the Source of the Copy Data activity, I selected the Delimited Text dataset and provided the following settings:

  • Container: temp
  • Blob: file1.csv

For the Sink of the Copy Data activity, I selected the same Delimited Text dataset and provided the following settings:

  • Container: output
  • Blob: file1.csv

Rerun the pipeline. The pipeline should run successfully, and your output file will exist in both the temp and output blob containers. Once the lifecycle management policy finds that the temporary blob is over a day old, the temporary blob will be deleted, while the output blob will remain. The immutable storage policy will prevent the output blob from being deleted.

Run Pipeline with output and temp blob containers

Summary

Because of the way the Data Flow activity in Azure Data Factory uses temporary files to build delimited text outputs, a delimited text data sink cannot write directly to an Azure Storage blob container with an immutable storage policy. To work around this restriction, a temporary storage destination must exist as the output for the data flow, and that data is then copied over to the final, immutable storage destination with a Copy Data activity. The Copy Data activity is not processed the same way as the Data Flow sink: it does not create temporary files, so it is suitable for copying data from the temporary container to the output container. The temporary container is then cleaned up regularly by a lifecycle management policy on the storage account.

An AIS client recently started developing a design system of components commonly used throughout multiple Angular 9 applications. To that end, we decided to take a component-driven approach using Storybook JS, version 6.1. We developed individual base components that would be used in more complex components, and then we would use a combination of those complex components and base components in web pages.

We quickly discovered that though Storybook supports Angular and other frameworks, it is primarily documented for React development. We also found some other techniques that are useful for development and documentation.

Here’s an overview of what we’ll cover in this piece:

  • Development
    • Rebuilding and Watch Mode
    • Router
    • Testing Components that Rely on the Output of Other Components
    • Mocking Services
  • Documentation
    • JSDoc Tags
    • Quotation Marks in JSDoc
    • MDX or TS Documentation?
    • documentation.json

Development

Rebuilding and Watch Mode

When developing locally with ‘npm run storybook,’ Storybook has a watch mode that monitors an application’s source code and automatically rebuilds the application, similar to Angular’s watch mode when using ‘ng serve.’

There are a few minor caveats to Storybook’s watch mode, however. If you make changes to any of Storybook’s configuration files in the ‘.storybook’ directory, these files are not rebuilt by the Storybook process. If changes are made to the ‘.storybook’ directory, halt the ‘npm run storybook’ process, and restart it. Similarly, if you add new assets such as images or fonts to your application’s assets folder, Storybook will not package these new assets until you restart the process.

Router

If you are developing a project that uses the Angular router and you attempt to import the AppModule into the moduleMetadata of one of your stories, you will see the following error:
Unhandled Promise rejection: No base HREF set. Please provide a value for the APP_BASE_HREF token or add a base element to the document. ; Zone: ; Task: Promise.then ; Value: Error: No base HREF set. Please provide a value for the APP_BASE_HREF token or add a base element to the document.

The easiest way to avoid this is not to import any module that defines any routes inside it or any of its imported modules. Instead, directly import the components that your application relies on in the story, or specify another module to import that contains only your components and not your application:

import { moduleMetadata } from '@storybook/angular';

export default {
  title: 'My Story Name',
  component: MyStoryComponent,
  argTypes: {},
  decorators: [
    // Declare only the components the story needs; do not import AppModule.
    moduleMetadata({
      declarations: [
        MyStoryComponent,
        MyDependentComponent
      ],
    }),
  ]
};

If you have components that rely on the Angular router, look at the “Mocking Services” section of this post.

Testing Components that Rely on the Output of Other Components

One component we were developing in our application was a Password Strength Meter, which changes its styles and colors based on the complexity of the password provided to the meter. It took this password as an input property on the component. At first, we allowed the Storybook website user to modify the input property directly in the component’s property list in the Storybook Controls addon. However, we decided this was an insufficient test.

We discovered that it was effortless to define a new Angular test component in the .stories.ts file for the Password Strength Meter. You define it the same way you’d define any other Angular component, using the @Component decorator and implementing ngOnInit. Since it is not exported in an Angular module, this component will not be included in the deployment. We defined the component’s template HTML inline, including both a text input field and our Password Strength Meter, and handled the ngModel events in the component’s definition.
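
As an illustration, such a test host component defined inline in the .stories.ts file might look like the sketch below. The class name PasswordStrengthTestComponent comes from the stories shown later; the selectors and the meter's password input binding are assumptions, and because the text field uses ngModel, FormsModule would need to be added to the story's moduleMetadata imports:

import { Component, OnInit } from '@angular/core';

// Hypothetical test host: wires a plain text input to the Password Strength Meter.
@Component({
  selector: 'app-password-strength-test',
  template: `
    <input type="text" [(ngModel)]="password" placeholder="Enter a password" />
    <app-password-strength [password]="password"></app-password-strength>
  `,
})
class PasswordStrengthTestComponent implements OnInit {
  // Bound to the text field via ngModel and passed to the meter as an input.
  password = '';

  ngOnInit(): void {}
}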

Within the same .stories.ts file, you can define separate templates for individual stories. In that way, we can have both a “No text field” story that tests the Password Strength Meter without a text field and an “Integration” story that tests the password strength meter integrated with a text field:

// actionsData is assumed to be defined earlier in this file (e.g., addon-actions handlers).
const NoTextFieldTemplate = (args: PasswordStrengthComponent) => ({
  props: args,
  component: PasswordStrengthComponent,
});
export const NoTextField = NoTextFieldTemplate.bind({});
NoTextField.args = {
  ...actionsData
};

const IntegrationTemplate = (args: PasswordStrengthTestComponent) => ({
  props: args,
  component: PasswordStrengthTestComponent,
});
export const IntegrationWithTextField = IntegrationTemplate.bind({});
IntegrationWithTextField.args = {
  ...actionsData
};

Mocking Services

In our application development, we ran into situations where we needed to develop components that used dependency injection to obtain both built-in Angular services, like the router, and our own services.
Since we define an inline Angular module in our story definition using ‘moduleMetadata’, it’s simple enough to add a “providers” line and use the “useValue” property.
For example, let’s say we have a service called SessionService that stores the user’s local session data, including their username. That service has a method named “getUsername()” that a component relies on. Also, this component utilizes the router. Then, inside our “.stories.ts” file, we can define our inline module:

import { moduleMetadata } from '@storybook/angular';
import { APP_BASE_HREF } from '@angular/common';
import { RouterModule } from '@angular/router';

export default {
  title: 'Header',
  component: BannerComponent,
  argTypes: {},
  decorators: [
    moduleMetadata({
      declarations: [
        BannerComponent
      ],
      imports: [
        RouterModule.forRoot([])
      ],
      providers: [
        {
          // Mock of our SessionService: returns a fixed username.
          provide: SessionService,
          useValue: {
            getUsername: () => {
              console.log(`SessionService.getUsername called`);
              return `user1`;
            },
          }
        },
        // The Angular router requires a base HREF; provide it explicitly.
        { provide: APP_BASE_HREF, useValue: '/' }
      ]
    }),
  ]
};

In this example, we can see a mock of the SessionService that provides a fixed value to getUsername and the APP_BASE_HREF value that the Angular router relies on.

Documentation

One of the most significant benefits of Storybook is the amount of documentation that can be generated. Storybook uses Compodoc to generate its documentation, but you do have some options for overriding the default documentation generated. Storybook also offers many additional properties, as explained in their documentation.

Compodoc uses JSDoc to allow developers to design documentation for components easily. However, there are some limitations, as well as some other options for documentation.

JSDoc Tags

Compodoc only supports a small number of JSDoc tags, as documented on their website: @returns, @ignore, @param, @link, and @example. By far, the most useful of these in Storybook is @ignore. The @ignore tag will remove the property from the Storybook Controls panel, which helps hide properties and functions that external components should not use. Storybook infers many of these other properties, and we can define code examples in either the .stories.ts file or the .stories.mdx file.
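
For instance, a hedged sketch of how @ignore might be used on a component; the selector and the calculateScore method here are illustrative assumptions, not code from the actual design system:

import { Component, Input } from '@angular/core';

@Component({ selector: 'app-password-strength', template: '' })
export class PasswordStrengthComponent {
  /** The password to evaluate. Shown as a control in Storybook. */
  @Input() password = '';

  /**
   * Internal scoring helper; hidden from the Storybook Controls panel.
   * @ignore
   */
  calculateScore(): number {
    return this.password.length;
  }
}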

Quotation Marks in JSDoc

One of the first things we found when writing JSDoc for Storybook is that either Compodoc or Storybook does not escape quotation marks properly for display in web browsers. We had to use the HTML escapes for apostrophes and quotation marks (such as &#39; and &quot;) inside our JSDoc documentation to get around this issue.
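
As a small illustration (the class and sentence are made up), a JSDoc comment written with those escapes looks like this:

export class ExampleComponent {
  /**
   * The user&#39;s password must contain at least one &quot;special&quot; character.
   */
  password = '';
}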

MDX or TS Documentation?

MDX is a file format similar to Markdown provided as an option for Storybook documentation. We found that MDX is slightly harder to use for Angular developers, as it requires learning a new syntax based on React development. Despite this drawback, MDX gives the ability to provide complex Markdown formatting to the pages. When developing in Storybook, consider using MDX if you need an involved documentation page that does not fit the default Storybook documentation formatting style.

You can mix and match MDX with TypeScript, defining your stories in a TS file and importing them into an MDX file, if you need complex story definitions alongside complex Markdown.
Often, TS documentation will be enough; we can use the various properties Storybook provides alongside JSDoc. However, for those times when we need a bit more customization, MDX files can provide what we need.

documentation.json

When Compodoc runs, it creates a large file named documentation.json in the root directory of the application. This file does not need to be committed to git and can safely be included in a .gitignore file.

Conclusion

Storybook is a powerful tool for developing and documenting components in Angular. Though its documentation is primarily written for React, its capabilities are equally strong with Angular and will help you design and document reusable components for a large application.

Microservices are a common development pattern in modern software development. Unfortunately, it can be challenging to configure and test sprawling applications that utilize dozens of microservices.

For our example, let’s say that you have several teams that are developing the following microservices for a simplified online store application:

  • An authentication service, which allows users to log in and other services to verify a user’s authentication token. This service will need to talk to the database.
  • A product service, which retrieves information about products from the database.
  • A cart service, which allows a user to add or remove products from their cart. This service will need to communicate with the authentication service to verify the user’s credentials.

We will assume that only a single database is used for this application, which contains user data, product data, and cart data. This could also be stored across multiple databases, one for each microservice. We will also need a website, which provides the frontend for the three microservices. Though small, we can already see where configuring an environment for local testing can become burdensome. These services may require different frameworks or runtime environments and a locally deployed database. These services will also need to communicate with each other and the database. We will explore how we can use Docker and Docker Compose to aid in developing this simple application.

Why Use Containers?

Containers help ensure that an application runs as expected on any machine. By using containers, we can help solve the familiar “works on my machine” problem. We isolate the microservices into their environment and can test them independent of local environment configuration. This gives us confidence that our application will behave as expected both for new developers and production deployments.

For this project, we will use Docker, a popular container engine. This gives us access to Docker Compose, a tool that we can use to orchestrate multiple containerized applications together.

Why Use Docker Compose?

Though we can run each Docker container individually, that quickly becomes tedious and difficult to manage. Docker Compose allows us to define all of the containers we need to run, along with any environment variables they require, in a single YAML file. Docker Compose also allows multiple containers to communicate with each other easily. Using Docker Compose, we can configure a local runtime environment that includes our database, our microservices, and our website. We can quickly rebuild any databases, stand up an entire testing environment, and just as quickly tear it down.

It’s important to note that Docker Compose is not meant for production environments. Unlike other tools such as Docker Swarm or Kubernetes, Docker Compose does not provide scaling, monitoring, load balancing, or other management capabilities.

Prerequisites

To follow along, you will need Docker and Docker Compose installed locally, along with Node.js and npm for the API services and the Angular website.

Setup

We will first set up a simple microservice-based application with the following structure:

The website will communicate with the three microservices, the microservices will communicate with the database, and one microservice (the cart service) will communicate with another (the authentication service). We could almost as easily have separate databases for each microservice; similarly, we could have the Cart service also talk to the Product service to retrieve product data.

For this application, we will use the following technologies:

  • Database: MySQL
  • API applications: Node.js
  • Website: Angular

These technologies are not required. For example, the database could just as easily be another database type, such as PostgreSQL, MongoDB, or CosmosDB.

Directory Configuration

The first thing we should do is create a project folder. For this application, we’ll call it store. Then, we will create folders for each microservice: auth-service, product-service, and cart-service. We’ll also create one for the website, called website, and one for the database, called db.

The folder structure should look as follows:

  • store
    • auth-service
    • cart-service
    • db
    • product-service
    • website

Database Setup

To get our application started, we will first configure our database. I will be using the MySQL Docker image, version 5. To do this, we should first create our Dockerfile. Create the file store/db/Dockerfile and add the following:

FROM mysql:5
ADD . /docker-entrypoint-initdb.d

This adds all files from the store/db folder to the image's docker-entrypoint-initdb.d directory; when the container starts for the first time, MySQL executes the *.sql files it finds there in alphabetical order.

You can then define each of the tables, and some test data, in separate files so that your directory looks as follows:

  • db
    • 001-Products.sql
    • 002-AppUsers.sql
    • 003-CartItems.sql
    • 999-testdata.sql
    • Dockerfile

When the MySQL Docker container starts for the first time, it scans the docker-entrypoint-initdb.d directory and executes the scripts it finds in alphabetical order. This initializes our database quickly and easily.

You could also define a database for each of the services if you follow a strict microservice pattern where each service has its own database. You can do this by creating multiple db folders and putting a Dockerfile in each one.

Note: Docker images exist for various other databases as well; MySQL is not required. Documentation for a few of these options is available on Docker Hub.

API Application Setup

In each of the service folders, we will be creating a simple Node.js application.
In the store/auth-service directory, run the command npm init. Leave all of the default values in the initialization script. Install the following dependencies:

  • npm i express
  • npm i cors
  • npm i mysql@^2.18

Create the file, store/auth-service/index.js. The easiest way to pass arguments to applications in Docker is using environment variables, so we will pass the location, user, password, and name of the database through these environment variables:

var express = require('express');
const mysql = require('mysql');
const cors = require('cors');
const app = express();
const port = 80;
const db_location = process.env.DB_LOCATION;
const db_user = process.env.DB_USER;
const db_password = process.env.DB_PASSWORD;
const db_name  = process.env.DB_NAME;


app.use(express.json());
app.use(cors());

...

As this is not a tutorial on Express or credential management, I will not provide the authentication service's implementation details here.

From here, we can continue to develop our authentication service. We can then create a Dockerfile for this application at store/auth-service/Dockerfile. This Dockerfile will use the Node image for Docker.

FROM node:12
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . .
CMD [ "node", "index.js" ]

We will also create a .dockerignore file at store/auth-service/.dockerignore with the following, so that our local environment’s node_modules folder and log are not copied to the container:

node_modules
npm-debug.log

We will follow the same steps for the store/product-service folder, and for the store/cart-service folder.

Since the cart-service will communicate with the auth-service, I have also installed the axios library by running npm i axios in store/cart-service and included it in store/cart-service/index.js with the line const axios = require('axios');. Also, the cart-service will need the URL of the auth-service, so I have added the environment variable AUTH_URL.

With this, store/cart-service/index.js begins as follows:

var express = require('express');
const mysql = require('mysql');
const axios = require('axios');
const cors = require('cors');
const app = express();
const port = 80;
const db_location = process.env.DB_LOCATION;
const db_user = process.env.DB_USER;
const db_password = process.env.DB_PASSWORD;
const db_name = process.env.DB_NAME;
const auth_service_url = process.env.AUTH_URL;

app.use(express.json());
app.use(cors());
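
To show how these pieces fit together, here is a rough sketch (not from the original application) of a cart-service route that checks the caller's token against the auth-service before touching the database. The /verify endpoint, the token field, and the response shapes are all assumptions:

// Hypothetical route: verify the user's token with the auth-service, then return cart items.
app.post('/cart/items', async (req, res) => {
  try {
    // auth_service_url resolves to the auth container (see docker-compose.yml below).
    await axios.post(`${auth_service_url}/verify`, { token: req.body.token });
  } catch (err) {
    return res.status(401).json({ error: 'Invalid or missing token' });
  }

  // ...query MySQL here using db_location, db_user, db_password, and db_name...
  res.json({ items: [] });
});

app.listen(port, () => console.log(`cart-service listening on port ${port}`));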

Note: As with MySQL, Node.js is not a requirement. There are Docker images for various other platforms as well.

Website Setup

We will use Angular for our website, so let’s first initialize our Angular application by running the command npx ng new in the store/website directory. Use all the default values, and then open the file store/website/src/environments/environment.ts.

At this point, we determine which ports we will run our services on. Replace the content of the file with the following:

export const environment = {
  production: false,
  auth_url: "http://localhost:30500",
  products_url: "http://localhost:30501",
  cart_url: "http://localhost:30502"
};

This will give us access to the URL of each of the microservices that we have defined previously. We can then create several Angular services that read from this environment file and use Angular's built-in HttpClient to access our web services.

(Note: There are numerous ways to supply environment variables such as this to a web application. This is simply one of the easiest ways to get us up and running as quickly as possible.)
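
For example, one of those Angular services might look like the following sketch. The class name, the /login endpoint, and the payload shape are assumptions made for illustration; only the environment.auth_url value comes from the file above:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
// Adjust the relative path to wherever this service file lives under src.
import { environment } from '../environments/environment';

@Injectable({ providedIn: 'root' })
export class AuthService {
  constructor(private http: HttpClient) {}

  // Calls the auth microservice at the URL configured in environment.ts.
  login(username: string, password: string): Observable<unknown> {
    return this.http.post(`${environment.auth_url}/login`, { username, password });
  }
}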

We can then create a Dockerfile, similar to the one we made for our microservices, in store/website/Dockerfile:

FROM node:12
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . /app
CMD [ "npx", "ng", "serve","--host","0.0.0.0","--poll","2000"]

This will host an Angular development application inside our container that will continuously monitor for changes and be exposed on port 4200, the default Angular development port.

Testing with Docker Compose

Now that our simple database, microservices, and Angular application have been developed, we can set up Docker Compose to run all five of these Dockerfiles at once and establish communication between them. To do this, we need to create a docker-compose.yml file in the store directory.

A Starting Point

Our first goal with Docker Compose is to get all five containers running. This is a relatively simple task with docker-compose.

Inside store/docker-compose.yml, use the following code:

version: "3.8"
services:
  sql:
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1
      - MYSQL_USER=db_user
      - MYSQL_DATABASE=db
      - MYSQL_PASSWORD=pwd
  auth:
    build: ./auth-service
  product:
    build: ./product-service
  cart:
    build: ./cart-service
  website:
    build: ./website
    ports:
      - '4200:4200'
    volumes:
      - ./website/src:/app/src

What is this doing?
First, we define all of our services using the services configuration option inside the YAML file. From there, we give names to each of our five services: sql, auth, product, cart, and website. We point each of these services at its Dockerfile using build, providing the path to the service's folder.
By default, no ports are exposed outside of Docker, so we need to tell Docker Compose to map the website container's port 4200 to the local machine's port 4200 using the ports configuration option.

The volumes configuration option specifies a mapping of a local folder to a docker folder. In this case, on the website, we supply the folder ./website/src as a local folder and /app/src as the Docker container folder. This allows us to make changes to the Angular project’s source directory while the Angular server in the container is monitoring for changes, allowing easier development.

The SQL database is configured to use environment variables, as defined by its documentation. We define those environment variables here to give us a consistent (but insecure) database, username, and password.

The restart: always argument will restart the MySQL container if it shuts down unless the container is manually terminated. We can supply additional arguments to a command-line application or override the command with the command configuration option.

Communicating Between Containers

We’re closer now, but we still need to configure docker-compose.yml to allow communication between containers. Earlier on, we defined some environment variables: DB_LOCATION, DB_USER, DB_PASSWORD, DB_NAME, and, for the cart-service, AUTH_URL.

When we set up containers with Docker Compose, the hostname of each container within the Docker Compose network is the name we give its service, so our five hostnames are sql, auth, product, cart, and website. This allows us to make MySQL calls to sql with an appropriate library and authentication calls to auth with a simple POST. So, using the same format as we did with the sql service, we can provide environment variables to auth, product, and cart:

version: "3.8"
services:
  sql:
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1
      - MYSQL_USER=db_user
      - MYSQL_DATABASE=db
      - MYSQL_PASSWORD=pwd
  auth:
    build: ./auth-service
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  product:
    build: ./product-service
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  cart:
    build: ./cart-service
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
      - AUTH_URL=http://auth
  website:
    build: ./website
    ports:
      - '4200:4200'
    volumes:
      - ./website/src:/app/src

Notice that the URL of AUTH_URL in the cart service is simply http://auth. This is enough to access the other container from the cart-service application!
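
Inside the services, these values plug straight into the MySQL client. Here is a minimal sketch of that wiring; the exact connection options are an assumption, not code from the original services:

// In auth-service/index.js (or product/cart): DB_LOCATION is "sql", which Docker Compose
// resolves to the MySQL container on the shared network.
const connection = mysql.createConnection({
  host: db_location,
  user: db_user,
  password: db_password,
  database: db_name,
});

connection.connect((err) => {
  if (err) {
    console.error('Failed to connect to MySQL:', err.message);
    return;
  }
  console.log('Connected to MySQL at', db_location);
});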

Communicating with the Browser

We could try to do the same thing with the Angular website, but we wouldn’t be successful. Since a website runs in the browser and not within the Docker environment, our browser would not be aware of the location of http://auth, http://product, and http://cart.

We will have to expose the ports of our auth-service, product-service, and cart-service, similar to how we expose our website’s ports. Recall that we hosted our Express applications on port 80. We will map those ports to ports 30500, 30501, and 30502, as we configured in the Angular application’s local environment variables:

version: "3.8"
services:
  sql:
    build: ./db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1
      - MYSQL_USER=db_user
      - MYSQL_DATABASE=db
      - MYSQL_PASSWORD=pwd
  auth:
    build: ./auth-service
    ports:
      - '30500:80'
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  product:
    build: ./product-service
    ports:
      - '30501:80'
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
  cart:
    build: ./cart-service
    ports:
      - '30502:80'
    environment:
      - DB_LOCATION=sql
      - DB_USER=db_user
      - DB_PASSWORD=pwd
      - DB_NAME=db
      - AUTH_URL=http://auth
  website:
    build: ./website
    ports:
      - '4200:4200'
    volumes:
      - ./website/src:/app/src

This is our final docker-compose.yml file. It performs the following tasks:

  • Create five containers, one for our database, one for each of our services, and one for our website
  • Establish network communication between the containers
  • Expose ports for containers that need to communicate with the browser
  • Set up source mapping to allow for automatic change polling in the Angular server

Running Docker Compose

Now that we have our docker-compose.yml file, running Docker Compose is very simple. In the store directory, run the command docker-compose up --detach.

This command will create or recreate each of the containers defined in the docker-compose.yml file and immediately run them. You can then access the Angular application at http://localhost:4200, or access any of the three microservices with POST requests in a tool like cURL or Postman at http://localhost:30500, http://localhost:30501, or http://localhost:30502.

When you want the containers to stop, run the command docker-compose down -v. This will stop and remove all of the containers from your Docker runtime environment and tear down the database volume, so the database will be recreated the next time you bring the environment up. If you do not want the database recreated, omit the -v option; you can run the command docker-compose rm -v to remove the database volume later.

There are many other options available to Docker Compose. For more information, check the docker-compose CLI reference and the Compose file reference on the official Docker website.

Conclusion

Though not intended for production environments, Docker Compose is a valuable tool for developing microservices. With appropriate use of Docker and Docker Compose, we can easily port entire development environments to other developers' machines with minimal local environment configuration, including a local database, microservices, and a website. Once the environment has been configured, a new developer can run a single command and start developing and testing the application immediately.