When Microsoft introduced pipelines as part of its Azure DevOps cloud service offering, we gained the tools to add continuous integration (CI) and continuous delivery (CD) practices to our development processes. An Azure DevOps pipeline can be created in two ways: 1) with the generally available “classic” pipeline tooling, or 2) with the new multi-stage YAML pipeline feature, which is currently in preview.

Classic Pipelines

Classic pipelines achieve CI through Azure DevOps build pipelines. A build pipeline executes before a developer integrates code changes into a code base. The pipeline performs tasks such as compiling the code, running the unit tests and/or running static code analysis. It then either accepts or rejects the new changes based on the outcome of these tasks.

CD is achieved through Azure DevOps release pipelines. After the build pipeline has produced a build artifact, a release pipeline will publish the artifact to various environments for manual functional testing, user experience testing and quality assurance. When testers have thoroughly tested the deployed artifacts, the release pipeline can then push the artifacts to a production environment.

As powerful as these classic CI/CD pipelines are, they do have their drawbacks. Firstly, the tooling for creating build and release pipelines does not provide a unified experience. CI pipelines provide an intuitive GUI to create and visualize the integration steps…

Classic build pipeline editor

… and also allow you to define those very same steps in YAML:

YAML build pipeline editor
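As a rough sketch of what such a definition looks like, here is a minimal CI pipeline that builds and tests a .NET project (the trigger branch, agent image and commands are illustrative assumptions, not taken from the screenshot above):

```yaml
# Illustrative CI definition: build and test on every push to master.
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'   # hosted agent image (assumption)

steps:
  - script: dotnet build --configuration Release
    displayName: Build

  - script: dotnet test --configuration Release --no-build
    displayName: Run unit tests
```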

Release pipelines also provide a GUI to create and visualize the pipeline steps. The problem is that the interface is different from that of the build pipeline and does not allow YAML definitions:

Classic release pipeline editor

Multi-stage Pipelines

To resolve these discrepancies, Microsoft introduced multi-stage pipelines. Currently in preview, these pipelines allow an engineer to define a build pipeline, a release pipeline, or a combined build and release pipeline as a single YAML document. Besides the obvious benefits gained through a unified development experience, there are many other good reasons to choose YAML over classic pipelines for both your builds and releases.
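As a sketch of that unified experience, the following single YAML document defines a build stage followed by a deployment stage (the stage, job and environment names here are hypothetical):

```yaml
# Illustrative multi-stage definition: a CI build stage followed by a CD deployment stage.
trigger:
  - master

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: dotnet build --configuration Release
            displayName: Build
          - script: dotnet test --configuration Release --no-build
            displayName: Run unit tests

  - stage: DeployToQA
    dependsOn: Build
    jobs:
      - deployment: DeployWebApp
        environment: QA            # hypothetical environment name
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy the published artifact to QA here"
                  displayName: Deploy to QA
```

Because the whole flow lives in one file, the build and the release finally share a single editing and versioning experience.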

Since you commit YAML definitions directly to source control, you get the same benefits source control has been providing developers for decades. Here are the top 10 reasons (in no particular order) you should choose YAML for your next Azure DevOps pipeline:

1. History

Want to see what your pipeline looked like last month before you moved your connection strings to Azure Key Vault? No problem! Source control allows you to see every change ever made to your pipeline since the beginning of time.

2. Diff

Have you ever discovered an issue with your build but not known exactly when it started failing or why? Having the ability to compare the failing definition with the last known working definition can greatly reduce the recovery time.

3. Blame

Similarly, it can be useful to see who committed the bug that caused the failure and who approved the pull request. You can pull these team members into discussions on how best to fix the issue while ensuring that the original objectives are met.

4. Work Items

Having the ability to see what was changed is one thing but seeing why it was changed is another. By attaching a user story or task to each pipeline commit, you don’t need to remember the thought process that went into a particular change.

5. Rollback

If you discover that the pipeline change you committed last night caused a bad QA environment configuration, simply roll back to the last known working version. You’ll have your QA environment back up in minutes.

6. Everything As Code

Having your application, infrastructure and now build and release pipelines as code in the same source control repository gives you a complete snapshot of your system at any point in the past. By getting an older version of your repo, you can easily spin up an identical environment, execute the exact same pipelines and deploy the same code exactly as it was. This is an extremely powerful capability!

7. Reuse and Sharing

Sharing or duplicating a pipeline (or part thereof) is as simple as copy and paste. It’s just text, so you can even email it to a colleague if desired.
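In Azure Pipelines you can go a step further than copy and paste with YAML templates. As a minimal sketch (the file name build-steps.yml and the parameter are assumptions):

```yaml
# build-steps.yml - hypothetical shared template file
parameters:
  - name: buildConfiguration
    default: Release

steps:
  - script: dotnet build --configuration ${{ parameters.buildConfiguration }}
    displayName: Build
```

A consuming pipeline can then pull in those steps with a single reference:

```yaml
# azure-pipelines.yml in a consuming pipeline
steps:
  - template: build-steps.yml
    parameters:
      buildConfiguration: Debug
```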

8. Multiple Engineers

Modern CI/CD pipelines can be large and complex, and, for better or worse, YAML definitions allow multiple engineers to work on the same file at the same time. When that happens, conflicting changes can result, but source control platforms solved this problem long ago and provide easy-to-use tools for merging conflicts.

9. Peer Reviews

If application code peer reviews are important, so are pipeline peer reviews. The ability to submit a pull request before bringing in new changes allows team members to weigh in and provides an added level of assurance that the changes will perform as desired.

10. Branching

Have a crazy idea you want to try out? Create a new branch for it and trigger a pipeline execution from that branch. If your idea doesn’t pan out, simply delete the branch. No harm done.
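As an illustration, the experimental branch’s copy of the pipeline could scope its CI trigger to just that branch (the branch name is made up):

```yaml
# Run this pipeline only for pushes to the experimental branch.
trigger:
  branches:
    include:
      - feature/crazy-idea   # hypothetical branch name
```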

Though multi-stage pipelines are still in preview, fully text-based pipeline definitions that can be committed to source control provide benefits that cannot be achieved with classic GUI-based definitions, especially for larger organizations. Be sure to consider YAML for your next Azure DevOps pipeline implementation.

Part 4: Load testing the messaging integration style

In this four-part series we have been looking at how different application integration styles handle spikes in load. In Part 1 we created and deployed a distributed system that used an RPC-based integration style. Our inventory application communicated with our purchasing application via a web service. In Part 2 we simulated a spike in load and caused the system to fail. In Part 3 we updated the architecture from an RPC-based integration style to a messaging-based integration style. In this post, we are going to simulate the same spike in load and see how the messaging-based architecture copes.

Where are we now? We have updated our distributed system to use messaging as the communication mechanism between the applications. We have created an integration test that causes the inventory application to request stock replenishment from the purchasing application and we have created a load test that executes the integration test a thousand times and records the results. We have already tested our previous, RPC-based architecture and seen that it doesn’t hold up when there is more load than the hardware can handle. Read More…

Part 3: Re-architecting the system to use a messaging integration style

In this series of posts I am taking a practical look at how a messaging architecture can mitigate the risks associated with a spike in load if a server doesn’t have enough resources to handle the spike. In Part 1 I created a distributed system for a fictitious company. The system consisted of two nodes: an inventory node and a purchasing node. These nodes were integrated using an RPC-style architecture. In Part 2 I put the system under stress using a Visual Studio Load Test and saw how it failed when the virtual machine on which the purchasing system was deployed didn’t have enough resources to handle the load. In this third post I am going to use a messaging integration style over RabbitMQ to allow this distributed system to effectively handle spikes in load. Finally, in Part 4 I am going to simulate the same spike in load and see how the messaging architecture comfortably handles the spike. Read More…

Part 2: Load testing the RPC-based integration style

In Part 1 of this series I created a fictitious distributed enterprise system that allowed an inventory application to communicate with a purchasing application through an RPC integration style. In this post, I am going to give this distributed system a stress test, see how it fails and examine the consequences and severity of such a failure. In Parts 3 and 4 of the series I’ll take a different integration approach and update the system to integrate using a messaging style over the RabbitMQ messaging technology.

The test environment

As a reminder, this is how my distributed system is deployed:

Deployment Diagram

Read More…

Part 1: Creating a distributed system

I have recently been delving into the exciting world of distributed systems. From an architectural standpoint, distributed systems demand a different design approach than that of regular centralized applications. From a development standpoint, they provide a host of new and fun toys to play with. I have spent the last few months investigating some of the architectural styles and patterns associated with distributed systems. I have also been playing with some of the technologies used to implement these styles and patterns.  Read More…

Any website that allows users to create their own textual content requires a good text editor. Fortunately for today’s developer, there are many options available that provide users with a great text editing experience. Depending on the development platform, this is often as simple as plugging in an existing editor written for that platform. In early 2014 I was looking for just such an option for AngularJS. This tutorial will give you an overview of the steps I went through to add a text editor to my AngularJS application.

AngularJS is quickly becoming a popular web application framework. Its scalable, modular structure and intuitive syntax make it a popular choice for building medium to large web applications. While AngularJS is certainly opinionated, its opinions turn out to be quite helpful – especially to those less versed in the intricacies of well-organized JavaScript. I found AngularJS, and its accompanying opinions, a pleasure to work with. Read More…

Video has become an integral part of our web experience. This, coupled with the pervasiveness of connected and video-capable devices, calls for an easy-to-use, flexible, reliable and scalable platform for hosting, processing and distributing media to anyone, anywhere, on any device. The availability of Windows Azure Media Services (WAMS) Preview lets us explore a promising new platform which aims to bring us closer to that goal.

Since WAMS is still in the preview release stage there are a few wrinkles in the platform that early adopters need to be aware of.  These issues should be corrected in upcoming releases but until then, there are a few alternate approaches that will help you get your media solution up and running with as little frustration as possible. In this post I will show you how to get video content hosted, encoded and delivered using the WAMS SDK and how to work around some of the quirks with the June 2012 Preview version.

Read More…