Part 4: Load testing the messaging integration style

In this four-part series we have been looking at how different application integration styles handle spikes in load. In Part 1 we created and deployed a distributed system that used an RPC-based integration style. Our inventory application communicated with our purchasing application via a web service. In Part 2 we simulated a spike in load and caused the system to fail. In Part 3 we updated the architecture from an RPC-based integration style to a messaging-based integration style. In this post, we are going to simulate the same spike in load and see how the messaging-based architecture copes.

Where are we now? We have updated our distributed system to use messaging as the communication mechanism between the applications. We have created an integration test that causes the inventory application to request stock replenishment from the purchasing application, and a load test that executes that integration test a thousand times and records the results. We have already run the load test against our previous, RPC-based architecture and seen that it doesn’t hold up when there is more load than the hardware can handle.

Smoke testing the new architecture

Let’s execute our integration test just once to make sure our new messaging architecture is all hooked up correctly. Doing so gives us a green light indicating that our test passed. We can then examine the “stock-replenishment-requests” queue and see that we have one message waiting in the queue:

Stock Replenishment Requests Queue
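Under the hood, the inventory side of this exchange boils down to publishing a small message onto the queue. The snippet below is a minimal sketch of that step using the RabbitMQ .NET client; the queue name matches the one shown above, but the host name, the item code and the queue settings are assumptions made for illustration rather than details taken from the actual solution.

```csharp
using System.Text;
using RabbitMQ.Client;

class StockReplenishmentSmokeTest
{
    static void Main()
    {
        // Connect to the local broker (the host name is an assumption for this sketch).
        var factory = new ConnectionFactory { HostName = "localhost" };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Declaring the queue is idempotent; it is created if it does not yet exist.
            channel.QueueDeclare(queue: "stock-replenishment-requests",
                                 durable: true,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            // The message body is simply the stock item code to reorder.
            var body = Encoding.UTF8.GetBytes("ITEM-1234");

            channel.BasicPublish(exchange: "",
                                 routingKey: "stock-replenishment-requests",
                                 basicProperties: null,
                                 body: body);
        }
    }
}
```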

Opening this message shows the stock item code that the purchase order will be placed for. We can then fire up the Enterprise.Purchasing.QueueHandler.exe application on the virtual machine. This application grabs the message from the queue, processes it and writes the item code contained in the message to the PurchaseOrders.txt file. Comparing the contents of the message before it is processed with the console output and the contents of the PurchaseOrders.txt file after it is processed confirms that the item code has indeed been retrieved from the message, processed and saved to the file:

Message Journey
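The handler’s job, as described, is to take each message off the queue, treat the body as an item code and append it to PurchaseOrders.txt. The sketch below shows roughly what such a handler might look like with the RabbitMQ .NET client; it is not the actual Enterprise.Purchasing.QueueHandler code, and the host name and file path are placeholders.

```csharp
using System;
using System.IO;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class PurchasingQueueHandler
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                // The message body is the stock item code requested by the inventory application.
                var itemCode = Encoding.UTF8.GetString(ea.Body.ToArray());

                // "Processing" in this sketch is simply recording the purchase order.
                File.AppendAllText("PurchaseOrders.txt", itemCode + Environment.NewLine);
                Console.WriteLine("Created purchase order for " + itemCode);

                // Acknowledge only after the order has been written, so a crash
                // mid-processing leaves the message on the queue to be retried.
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };

            channel.BasicConsume(queue: "stock-replenishment-requests",
                                 autoAck: false,
                                 consumer: consumer);

            Console.WriteLine("Waiting for stock replenishment requests. Press Enter to exit.");
            Console.ReadLine();
        }
    }
}
```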

At this point we know our messaging architecture is working correctly.

Load testing the messaging architecture

To accurately compare our RPC-based architecture with our messaging-based architecture, it is important that we run exactly the same load test as before, so I am not going to modify the load test or its settings in any way. The first thing I am going to do is start the Enterprise.Purchasing.QueueHandler.exe application so it can begin waiting for messages to arrive on the queue. Next I am going to rerun the load test and see how my distributed system handles the spike in load now that a messaging architecture is in place.

Executing the test produces the following results:

Load Test Results

The same load test with the messaging architecture in place resulted in zero failed tests. What’s more, each test took an average of 72 milliseconds to complete. This is a significant improvement over our previous, RPC-based architecture.

Taking a look at the purchasing application after the test has completed shows that the queue handler takes quite a while to work through all the messages in the queue. We can see the handler chugging away, processing one message at a time:

Queue Handler
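This one-message-at-a-time behaviour is something a RabbitMQ consumer can opt into by capping the number of unacknowledged messages the broker will deliver to it. The line below is a sketch of how such a cap might be set on the handler’s channel; I am assuming a prefetch count of 1 here, as the actual queue handler’s configuration is not shown.

```csharp
// Assumed consumer configuration: deliver at most one unacknowledged
// message at a time to this channel, so requests are worked through
// strictly one by one.
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);
```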

Another important thing to notice is that our virtual machine easily has enough resources to process these messages one at a time. Looking at Task Manager now shows that the CPU is not at all taxed by the message processing:

Task Manager

Of course, nothing is free. The price you pay with this architecture is that the purchase orders are not created immediately. A purchase order is only created when the handler gets around to processing its message, and that can take some time. The inventory application can, however, be confident that every purchase order will eventually be created. This is what is known as eventual consistency.

Indeed, after our message queue handler has finished processing all the messages in the queue, we can examine the contents of the PurchaseOrders.txt file and see that all 1,000 purchase orders were successfully created:

Purchase Orders

In summary, these are the results of the three load tests:

| Architecture | Failed Tests | Avg. Test Time (sec) |
| --- | --- | --- |
| RPC with no concurrent request limit | 551 | 26.2 |
| RPC while limiting concurrent requests to 3 | 792 | 2.9 |
| Messaging | 0 | 0.072 |

Summary

In this series of posts we used a practical, hands-on example to explore what happens to a distributed system when one node in that system doesn’t have enough resources to handle a spike in load. We first created an RPC-based distributed system comprising an inventory application that communicated with a purchasing application. We deployed the purchasing application to a virtual machine running on our local box. The virtual machine was intentionally configured with very limited resources so that we could push the machine to its limit. We then created a Visual Studio Load Test to put the system under stress and saw that when the virtual machine doesn’t have the resources to handle a spike in load, the requests fail. This is not a desirable situation.

We then tried limiting the number of concurrent requests allowed by ASP.NET and saw that while we could keep the CPU usage under control, the target machine would respond with a “503 Service Unavailable” message when the request queue was full. While this is a slightly better situation to be in (since we can be certain that no purchase order will be created if we receive a 503 response), it is still not a desirable way to handle spikes in load.

While keeping the configured virtual machine resources the same, we next replaced the RPC integration style with a messaging integration style using RabbitMQ. We ran the same load test again and saw that this time all our tests completed successfully, and in a fraction of the time the previous tests took to complete. This increased reliability and performance came at a price, however: the purchase orders were no longer created immediately when the inventory application requested them. They were eventually created, but it took some time.

Resources