Most software applications, whether desktop, mobile, or web, require a database on the backend to store data. Modern applications are highly complex and perform data transactions at a high frequency, so the data stored in and retrieved from the database needs to be tested to ensure proper data integrity. Any database operation performed by an application falls into one of four categories: Create, Retrieve, Update, or Delete.

In one of our recent client implementations, we were required to write database unit tests because the application is largely data-centric. The decisive reason for pushing database unit testing was to ensure data integrity: we needed to develop database test cases that check data integrity and confirm business rules.

For database unit testing, the following frameworks can be used:

  • Microsoft SQL Server Data Tools (Using Visual Studio)
  • tSQLt Unit Testing Framework

This blog post gives a high-level walkthrough of implementing database unit testing and demonstrates how to create test cases that use tSQLt features against the AdventureWorks sample database. We chose the tSQLt unit testing framework for Azure SQL Database. tSQLt allows us to create isolated test cases defined with the data we need; each test case runs in its own transaction.

A tSQLt unit test case is based on the AAA rule, which consists of three steps: Assemble (set up the tables and data the test needs), Act (run the code under test), and Assert (compare the actual results against the expected results).
Rules for Database Testing

Step 1: tSQLt Environment Setup

The first step is to install tSQLt in your database.

  • Setup includes a set of objects (tables, stored procedures, functions, and more) that you add to the database you want to test. Download “tSQLt_V1.0.5873.27393.zip” from the tSQLt site, and unzip the file.
  • Run the “tSQLt.class.sql” script against the database. You can find the script in the zip file that you downloaded from the tSQLt site.
    Once the script runs successfully, you will find the tSQLt schema in the database. Assigned to that schema are the tables, views, stored procedures, and user-defined functions that do all the processing when you create and run test cases against your database. As a best practice, the objects created by “tSQLt.class.sql” and the unit test cases themselves stay in the development database and are not moved to other environments. A quick check to confirm the installation is shown after this list.
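If the install succeeds, a quick sanity check (assuming a standard tSQLt installation) is to query the framework’s version function:

```sql
-- Confirm the tSQLt objects are in place and see which version is installed
SELECT * FROM tSQLt.Info();
```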

Object Explorer

Before you create a test case, you need to create a test class where the test case will be located. A test class is a schema configured with an extended property that tells tSQLt it is a test class. To create the test class, you use the NewTestClass stored procedure, which is part of the tSQLt schema.

NewTestClass Stored Procedure

The EXEC statement creates a test class named TestPerson. Once we’ve completed this step, we can add one or more test cases to the test class, so let’s get started doing that.
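The call shown in the image is a single tSQLt stored procedure invocation; a minimal sketch of it looks like this:

```sql
-- Create (or reset) a test class named TestPerson
EXEC tSQLt.NewTestClass 'TestPerson';
```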

Step 2: Testing [Person].[GetAddressByCity] Stored Procedure

In tSQLt, a test case is a stored procedure that’s part of a test class and uses tSQLt elements to perform the testing. With tSQLt, we can develop tests for both stored procedures and functions in a database.

As part of developing the tSQLt test case, we’ll create a test case for a stored procedure, [Person].[GetAddressByCity], which returns address details for a given city.
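The procedure under test isn’t reproduced in text here; as a point of reference, a hypothetical sketch of what it might look like (the column list is an assumption based on the AdventureWorks Person.Address table) is:

```sql
-- Hypothetical sketch of the procedure under test
CREATE OR ALTER PROCEDURE [Person].[GetAddressByCity]
    @City NVARCHAR(30)
AS
BEGIN
    SELECT AddressID, AddressLine1, City, PostalCode
    FROM Person.Address
    WHERE City = @City;
END;
```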

Creating a Test Procedure by City

Use the CREATE PROCEDURE statement to create a test case. The procedure name must start with the word “test” and the procedure must be created in an existing test class; otherwise, creating a test case is much like creating any other procedure. The following T-SQL script creates a test case named “TestGetAddressByCitySuccess” in the “TestPerson” test class:

Follow tSQL Script to Create Test Case

As you can see, we use the CREATE OR ALTER PROCEDURE statement to create the test case. The critical part of the test case is the main body of the procedure definition, between BEGIN and END. First, we call tSQLt’s FakeTable stored procedure, which replaces the table referenced by the stored procedure under test with an empty fake table of the same name. This ensures the data in the actual table is never updated or deleted; any subsequent reference to that table within the test case points to the fake table, not the real table in the database. Then we populate the fake table with test data.


In the test case, we use two temporary tables: #Expected, which stores the expected data, and #Actual, which stores the data returned when the stored procedure runs. Both temp tables have the same schema.

Finally, we use the tSQLt AssertEqualsTable stored procedure to compare the data in the #Actual table to the data in the #Expected table.
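Putting the pieces together, a minimal sketch of such a test case (the columns and test data below are assumptions for illustration, since the original script appears only as an image) might look like this:

```sql
CREATE OR ALTER PROCEDURE [TestPerson].[TestGetAddressByCitySuccess]
AS
BEGIN
    -- Assemble: replace Person.Address with an empty fake table and load known test data
    EXEC tSQLt.FakeTable @TableName = 'Person.Address';

    INSERT INTO Person.Address (AddressID, AddressLine1, City, PostalCode)
    VALUES (1, N'100 Main St', N'Bothell', N'98011'),
           (2, N'200 Oak Ave', N'Seattle', N'98101');

    -- Expected result: only the Bothell address
    SELECT AddressID, AddressLine1, City, PostalCode
    INTO #Expected
    FROM Person.Address
    WHERE City = N'Bothell';

    -- Act: capture the procedure's output in #Actual (same schema as #Expected)
    SELECT AddressID, AddressLine1, City, PostalCode
    INTO #Actual
    FROM Person.Address
    WHERE 1 = 0;

    INSERT INTO #Actual
    EXEC [Person].[GetAddressByCity] @City = N'Bothell';

    -- Assert: the two result sets must match
    EXEC tSQLt.AssertEqualsTable @Expected = '#Expected', @Actual = '#Actual';
END;
```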

Run the test case by executing the tSQLt.Run stored procedure:
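For example, tSQLt.Run accepts either a single test name or a test class name:

```sql
-- Run a single test case...
EXEC tSQLt.Run 'TestPerson.TestGetAddressByCitySuccess';

-- ...or run every test case in the TestPerson test class
EXEC tSQLt.Run 'TestPerson';
```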

When we run the test case, it should pass and return the following results:

Final result for unit testing

These basics will take you a long way with unit testing using tSQLt for SQL Server. For more detail, refer to the tSQLt User Guide.

I support projects that use platforms like SharePoint, are looking toward adopting PowerApps, and run under Azure Government subscriptions. While learning about containers, I wondered where they could fit into an environment where many applications are developed on top of SharePoint. Working through that question helped me better understand containers and discover a use case that fits well, so I thought I would share my architecture idea in this blog post. If you would like to learn more about container architecture, read this blog post on Container Architecture Basics from our VP of Solution Engineering, Brent Wodicka.

Architecture Idea

If you are maintaining a SharePoint on-premises farm, whether in a local data center or in the cloud, you cannot replace the SharePoint farm VMs with containers. What you can do is take the heavy business logic out of your custom SharePoint applications and deploy it as backend services running in containers. You can then call these service endpoints (REST API endpoints) from your SharePoint Framework web parts and render the UI inside those web parts.

If your application is developed using PowerApps, I believe you can call custom service endpoints in PowerApps. So, your front end can still be deployed using SharePoint or PowerApps, but your business logic can be deployed as services inside the containers.

Where can containers fit in a multi-platform environment?

Answer: The two diagrams below illustrate the container use case in the multi-platform environment discussed above.
Business Logic Services Snapshot

Backend Databases and Cloud Services

Advantages of this Architecture

  • Since your server-side code is deployed inside the Azure cloud, you can easily integrate with other Azure PaaS services, including Azure SQL Database and Azure Cognitive Search.
  • Since most of the business logic is deployed inside containers as services, those services can easily be moved to any other cloud provider.
  • If you have complex legacy applications built inside SharePoint that store their data in SharePoint lists, you can move that list data to an Azure SQL database and query it from the services deployed inside the containers (see the second diagram above).

Conclusion

If you have heavy business logic written in front-end JavaScript files in SharePoint, you can rewrite that code as server-side C#, deploy it as services inside containers, and call those service endpoints from SharePoint web parts. Complex application data can even be moved from SharePoint lists to Azure SQL databases. Containers solve the problem of deploying your custom code as services, but they cannot replace your SharePoint infrastructure.

Over the last couple of years, I’ve moved from serious SharePoint on-premises development to migrating web applications to Azure. My exposure to Azure prior to the first application was standing up SharePoint development virtual machines that pretty much just used the base settings. Trial by fire tends to be the way I learn things best, which is good, because not only did I have to learn about Azure resources, but I also had to refamiliarize myself with web development practices that I hadn’t worked with since .NET 2.0 or earlier.

One of my first tasks, and the focus of this post, was to migrate SQL Server to an Azure SQL (Platform as a Service) instance. The databases generally migrated well: schemas and the majority of the stored procedures were compatible with Azure SQL. Microsoft provides a program called the Azure Data Migration Assistant, which not only analyzes the database but can also migrate it. The analysis returns any suggested or required changes. Several types of issues can arise, including deprecated features, incompatible features, and syntax blockers.

Deprecated Field Types

There are three field types that will be removed from future versions: ntext, text, and image. This won’t hold up a migration from SQL Server to Azure, but I did change the fields to future-proof the database. The suggested replacements are nvarchar(max), varchar(max), and varbinary(max), respectively.
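As a hypothetical example (the table and column names below are placeholders, not from the actual applications), converting an ntext column looks like this:

```sql
-- Convert a deprecated ntext column to nvarchar(max)
ALTER TABLE dbo.Documents ALTER COLUMN Body NVARCHAR(MAX) NULL;

-- Optional: rewrite the existing values so they move off the old text-pointer storage
UPDATE dbo.Documents SET Body = Body;
```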

Azure SQL Database

T-SQL Concerns

Now let’s get into some of the Azure SQL limitations. I ran into several issues in scripts; some had simple solutions, while others required some redesign.

  • ISSUE: Scripts can’t connect to multiple databases using the USE statement.
  • SOLUTION: The scripts that used the USE statement didn’t actually need it, since each script was only meant to connect to a single database.
  • ISSUE: Procedures such as sp_send_dbmail are not supported. The scripts sent notification emails directly rather than sending them from the application.
  • SOLUTION: This required a more involved redesign. I created a table that logs the email and a scheduled Azure WebJob that sends the email and flags the record as sent. The reference to sp_send_dbmail was replaced with an INSERT INTO Email command (see the sketch after this list). If the email needed to be sent near-instantly, I could have used an Azure Queue and had the WebJob listen to the queue, then send the email when a new message arrives.
  • ISSUE: Cross-database queries are not supported. A couple of the apps had multiple databases and data from one was needed to add or update records in the other.
  • SOLUTION: This was another major design change. I had to move that logic into the application’s business layer.
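A rough sketch of the email-logging pattern described above (the table and column names here are hypothetical, not the client’s actual schema):

```sql
-- Hypothetical email log table; the Azure WebJob polls for rows where SentUtc is NULL,
-- sends the message, and then stamps SentUtc
CREATE TABLE dbo.Email
(
    EmailId    INT IDENTITY(1, 1) PRIMARY KEY,
    Recipients NVARCHAR(MAX) NOT NULL,
    Subject    NVARCHAR(255) NOT NULL,
    Body       NVARCHAR(MAX) NOT NULL,
    CreatedUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    SentUtc    DATETIME2 NULL
);

-- Where a script previously called sp_send_dbmail, it now logs the message instead
INSERT INTO dbo.Email (Recipients, Subject, Body)
VALUES (N'ops@example.com', N'Nightly job complete', N'The data synchronization finished successfully.');
```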

Linked Servers

Some of the applications accessed other databases, including other platforms such as Oracle, using Linked Server connections to execute queries against those data sources. So far, these have been read-only connections.

Linked Servers are not available in Azure SQL. To keep the functionality, we had to modify the data layer of the application to pull the resources from the external database and either:
A. Pass the records into the SQL Stored Procedure (see the sketch after this list), or
B. Move the logic of the Stored Procedure into the application’s data layer and make the changes in the application
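For option A, one common way to hand a set of externally fetched records to a stored procedure is a table-valued parameter. The sketch below is hypothetical; the type, table, and column names are illustrative only:

```sql
-- Hypothetical table type for rows pulled from the external (formerly linked) data source
CREATE TYPE dbo.ExternalOrderList AS TABLE
(
    OrderId    INT            NOT NULL PRIMARY KEY,
    OrderTotal DECIMAL(18, 2) NOT NULL
);
GO

-- The application's data layer queries the external database, fills the parameter,
-- and passes it in; the procedure then upserts the rows into a local table
CREATE OR ALTER PROCEDURE dbo.MergeExternalOrders
    @Orders dbo.ExternalOrderList READONLY
AS
BEGIN
    MERGE dbo.Orders AS target
    USING @Orders AS source
        ON target.OrderId = source.OrderId
    WHEN MATCHED THEN
        UPDATE SET target.OrderTotal = source.OrderTotal
    WHEN NOT MATCHED THEN
        INSERT (OrderId, OrderTotal)
        VALUES (source.OrderId, source.OrderTotal);
END;
```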

One of the applications had a requirement that the external database stay on-premises. This added an extra layer of complexity, because solutions for creating a connection back to that database, such as Azure ExpressRoute, were not available or approved for the client. Another team was tasked with implementing a gateway: a web service that the Azure application would call to reach the on-premises database.

SQL Agent Jobs

SQL Agent Jobs allow for out-of-process data manipulation. A couple of the applications used these to send notification emails at night or to synchronize data from another source. While there are several options in Azure for recreating this functionality, such as Azure Functions and Logic Apps, we chose to use WebJobs. WebJobs can be triggered in several ways, including on a timer. The jobs didn’t require intensive compute resources, so they could share resources with the application in the same Azure App Service. This simplified the deployment story because the jobs and the application could be packaged and deployed together.
SQL Agent Jobs

Database modifications tend to be one of the major parts of the migration project. Some of the projects have been simple T-SQL changes, while others have needed heavy architectural changes to reproduce functionality in a PaaS environment. Despite the difficulties, there will be major cost savings for some of the clients because they no longer need to maintain an expensive, possibly underutilized, server. Future posts in this series will cover Automation & Deployment, Session State, Caching, Transient Fault Handling, and general Azure lessons learned.

If you have found yourself thinking…

“Changing our existing apps to run on PaaS is going to require a lot of refactoring. We want to run our applications in the Cloud as-is.”

“We cannot migrate to Azure SQL Database because our database relies on SQL CLR-based stored procedures. We want to use a full version of SQL Server.”

“We want the Cloud to be a seamless extension of our data center, not a walled garden. We want to use our existing IT setup and tools (AD, SCOM etc.) to manage the on-premises and Cloud-based applications.”

…then our upcoming Introduction to Windows Azure IaaS session is for you.

Read More…

Just as with on-premises applications, availability for cloud applications needs to be carefully planned. In this blog post, I’ll discuss different levels of availability for Windows Azure-hosted applications.

Ultimately the level of availability you choose needs to be a business decision that balances cost with your personal tolerance for the nines.

To guide us though this discussion, I’ll use a typical Windows Azure application as an example. The architecture of our sample application consists of three primary components: Windows Azure Compute Instances, Blob Storage Data, and SQL Azure Data. Read More…