This blog is part of a series of hands-on labs on leveraging the Power Platform and Microsoft Azure. To break this series into more manageable chunks, this installment focuses on setting up the On-Premises Data Gateway that connects to our local SQL Server 2017, and the Azure Logic App that uses the new gateway to return customer data via an HTTP request and response.

We’ll wrap it up by using Postman to test our Logic App. If you don’t have Postman installed, you can download it here.

Stay tuned for the next lab in this series: Azure API Management and Custom Connectors.

Scenario

The company has an on-premises SQL Server database that contains customer data that needs to be available to various apps built on the Common Data Service.

High-Level Steps

  1. Install and configure an On-Premises Data Gateway to connect to the local SQL Server
  2. Create an Azure Logic App to connect to the SQL Server through the gateway

On-Premises Data Gateway

As you can imagine, there are already plenty of good write-ups on installing and configuring an On-Premises Data Gateway, including an official version from Microsoft. It is very straightforward, but I’m here to offer a couple of pro tips.

Pro Tip #1: Pay attention to the account you use for installing the gateway. This account needs to belong to the same Azure tenant as the account you’ll use for building your Azure Logic Apps that will use this gateway.

Pro Tip #2: Pay attention to the region that is automatically selected when installing the gateway. For this example, the gateway needs to be installed in the same region as your Azure Logic App.

Azure Logic App

Now I’m going to step you through building an Azure Logic App that can be executed using a simple HTTP request. The Logic App will accept a single parameter (CustomerId), or no parameter at all. If no ID is provided, the Logic App will return all customer rows. Otherwise, it will filter on the passed-in CustomerId.

  1. Create a new Azure Logic App.
  2. When the deployment of the Logic App is complete, start with the common trigger “When HTTP request is received”.
  3. Add the following JSON schema for the Request Body and click Save. Once you save the Logic App you should see the HTTP POST URL field populated automatically.
    {
      "properties": {
        "CustomerId": {
          "type": "integer"
        }
      },
      "type": "object"
    }

  4. Add an Initialize variable step named CustomerIdVariable, type string, and set the value to the passed-in CustomerId parameter.
  5. Add a new Condition step to the Logic App (Controls -> Condition) and change the operator to “Or”. Add a second row to the condition step and configure the two rows as follows, where null is entered as a function and CustomerIdVariable comes from the Dynamic content. Save the Logic App.
      1. CustomerIdVariable is equal to null
      2. CustomerIdVariable is equal to 0
  6. In the “true” block, add a SQL Server, Get rows (V2) action. We will need to set up a new connection here, so let’s do that. Since we set up the On-Premises Gateway in the same Azure subscription and region, you should see the gateway automatically available in the Connection Gateway dropdown. We are using Windows Authentication to authenticate to the SQL Server.
  7. Once the connection is created, set the server name, database name, and table name on the SQL Server Get rows (V2) action. Save the Logic App.
  8. Repeat this process for the False side of the conditional, using the same connection that was created in a previous step. When creating this action, click the Add new parameter button and add a Filter Query parameter.
  9. Set the Filter Query parameter to powerappscontactid eq CustomerIdVariable where the CustomerIdVariable is filled in using the Dynamic content. Save the Logic App.
  10. After the Get rows (V2) action in the False side of the conditional, add another conditional. Use the following expression for the conditional, where the first part is entered as an expression:
    length(body('Get_rows_(V2)_2')?['value']) is equal to 0
  11. For this new conditional, we’ll leave the False side empty, but we want to add a Response action (from the Request connector) to the True side. Configure the Response to use Status Code 200 and [] as the Body. Save the Logic App.
  12. Finally, outside of the conditionals, we’ll add another Response action with a Status Code of 200. For this response, we’ll use Dynamic content and send back the 'value' output from both SQL Server Get rows (V2) actions.
  13. The Logic App should look like this.
  14. Time to save and test your Logic App. We’re going to use Postman to do that since we haven’t set up our API in Azure API Management yet. Open Postman and start a new request. Use HTTP POST and the URL from the “When HTTP request is received” trigger on your Logic App. I also tweaked that URL to include our CustomerId query string parameter.

Once the URL is put into Postman, the other parameters should be auto-populated in your Postman environment. Here is a snippet of the URL that shows how I included the CustomerId, followed by a screenshot of the Logic App run (after completion), the Postman results, and the contents of the table displayed in SQL Server Management Studio.

…triggers/manual/paths/invoke?CustomerId=2&api-version=2016-10-01…
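Conceptually, the two Get rows (V2) actions boil down to queries along these lines. The table name is a placeholder, and the OData filter from step 9 maps to the WHERE clause:

    -- True side: no CustomerId supplied, so return all customer rows.
    SELECT * FROM dbo.Customer;

    -- False side: the Filter Query "powerappscontactid eq 2" becomes:
    SELECT * FROM dbo.Customer WHERE powerappscontactid = 2;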

Conclusion

I hope this helped demonstrate how quickly you can start querying data in your on-premises data stores from a cloud resource like an Azure Logic App. Please keep in mind that you’ll most likely want to secure this Logic App since right now, anyone can send a request to it. We’ll cover that in more detail in another write-up.

Over the last couple of years, I’ve moved from serious SharePoint on-premises development to migrating web applications to Azure. My exposure to Azure prior to the first application was standing up SharePoint development virtual machines that pretty much just used the base settings. Trial by fire tends to be the way I learn best, which is good, because not only did I have to learn about Azure resources, I also had to refamiliarize myself with web development practices that I hadn’t worked with since .NET 2.0 or earlier.

One of my first tasks, and the focus of this post, was to migrate the SQL Server databases to an Azure SQL (Platform as a Service) instance. The databases generally migrated well: schemas and the majority of the stored procedures were compatible with Azure SQL. Microsoft provides a tool called the Data Migration Assistant, which not only analyzes the database but can also migrate it. The analysis returns any suggested or required changes. Several types of issues can arise, including deprecated features, incompatible features, and syntax blockers.

Deprecated Field Types

There are three field types that will be removed from future versions of SQL Server: ntext, text, and image. This won’t hold up a migration from SQL Server to Azure, but I did change the fields to future-proof the database. The suggested replacements are nvarchar(max), varchar(max), and varbinary(max), respectively.
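For illustration, each conversion is a one-line ALTER per column. The table and column names below are placeholders, not the actual schema:

    -- Placeholder table/column names; adjust to match your schema.
    ALTER TABLE dbo.Articles ALTER COLUMN Body      nvarchar(max)  NULL;  -- was ntext
    ALTER TABLE dbo.Articles ALTER COLUMN Summary   varchar(max)   NULL;  -- was text
    ALTER TABLE dbo.Articles ALTER COLUMN Thumbnail varbinary(max) NULL;  -- was image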


T-SQL Concerns

Now let’s get into some of the Azure SQL limitations. I ran into several issues in scripts; some had simple solutions, while others required redesign.

  • ISSUE: Scripts can’t connect to multiple databases using the USE statement.
  • SOLUTION: The scripts that used the USE statement didn’t actually need it, since each one only connected to a single database.
  • ISSUE: Stored procedures such as sp_send_dbmail are not supported. Our scripts used it to send notification emails rather than sending them from the application.
  • SOLUTION: This required an involved redesign. I created a table that logged the email and a scheduled Azure Web Job to send the email and flag the record as sent. The reference to sp_send_dbmail was replaced with an INSERT INTO Email command (see the sketch after this list). If the email needed to be sent near-instantly, I could have used an Azure Queue and had the Web Job listen to the Queue, then send the email when a new one arrives.
  • ISSUE: Cross-database queries are not supported. A couple of the apps had multiple databases, and data from one was needed to add or update records in the other.
  • SOLUTION: This was another major design change. I had to move that logic into the application’s business layer.
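Here is a minimal sketch of the email-logging redesign mentioned above. The table definition and column names are assumptions for illustration, not the exact schema we used:

    -- Hypothetical logging table the scripts insert into instead of calling sp_send_dbmail.
    CREATE TABLE dbo.Email
    (
        EmailId    int           IDENTITY(1,1) PRIMARY KEY,
        Recipients nvarchar(max) NOT NULL,
        Subject    nvarchar(255) NOT NULL,
        Body       nvarchar(max) NOT NULL,
        IsSent     bit           NOT NULL DEFAULT (0),
        CreatedUtc datetime2     NOT NULL DEFAULT (SYSUTCDATETIME())
    );

    -- Replaces the old EXEC msdb.dbo.sp_send_dbmail call inside the script.
    INSERT INTO dbo.Email (Recipients, Subject, Body)
    VALUES (N'user@example.com', N'Nightly job finished', N'Details go here');

    -- The scheduled Web Job polls for unsent rows, sends them, and flags each one as sent.
    SELECT EmailId, Recipients, Subject, Body
    FROM   dbo.Email
    WHERE  IsSent = 0;

    UPDATE dbo.Email SET IsSent = 1 WHERE EmailId = 1;  -- EmailId is parameterized by the Web Job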

Linked Servers

Some of the applications accessed other databases, including non-SQL Server sources like Oracle. They used Linked Server connections to execute queries against those data sources. So far, these have been read-only connections.

Linked Servers are not available in Azure SQL. To keep the functionality, we had to modify the application’s data layer to pull the records from the external database and either:
A. Pass the records into the SQL stored procedure (a sketch of this option follows below), or
B. Move the logic of the stored procedure into the application’s data layer and make the changes in the application
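Option A can be implemented with a table-valued parameter. This is only a sketch; the type, procedure, and column names are illustrative, not the client’s actual schema:

    -- Hypothetical row type for the records the data layer pulls from the external source.
    CREATE TYPE dbo.ExternalCustomerRow AS TABLE
    (
        CustomerId   int           NOT NULL PRIMARY KEY,
        CustomerName nvarchar(200) NOT NULL
    );
    GO

    -- The application passes the pulled records in as a table-valued parameter.
    CREATE PROCEDURE dbo.MergeExternalCustomers
        @Rows dbo.ExternalCustomerRow READONLY
    AS
    BEGIN
        SET NOCOUNT ON;

        MERGE dbo.Customers AS target
        USING @Rows AS source
            ON target.CustomerId = source.CustomerId
        WHEN MATCHED THEN
            UPDATE SET target.CustomerName = source.CustomerName
        WHEN NOT MATCHED THEN
            INSERT (CustomerId, CustomerName)
            VALUES (source.CustomerId, source.CustomerName);
    END;
    GO

From ADO.NET, the @Rows parameter is passed as a DataTable (or IEnumerable of SqlDataRecord) with SqlDbType.Structured.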

One of the applications had a requirement that the external database stay on-premises. This added an extra layer of complexity because solutions for creating a connection back to that database, like Azure ExpressRoute, were not available or approved for the client. Another team was tasked with implementing a gateway: a web service that the Azure application calls to reach the on-premises data.

SQL Agent Jobs

SQL Agent Jobs allow for out-of-process data manipulation. A couple of the applications used them to send notification emails at night or to synchronize data from another source. While there are several options in Azure for recreating this functionality, such as Azure Functions and Logic Apps, we chose WebJobs. WebJobs can be triggered in several ways, including on a timer. The jobs didn’t require intensive compute resources, so they could share resources with the application in the same Azure App Service. This simplified the deployment story because everything could be packaged and deployed together.

Database modifications tend to be one of the major parts of a migration project. Some of the projects have needed only simple T-SQL changes, while others have required heavy architectural changes to reproduce functionality in a PaaS environment. Despite the difficulties, there will be major cost savings for some of the clients because they no longer need to maintain an expensive, possibly underutilized, server. Future posts in this series will cover Automation & Deployment, Session State, Caching, Transient Fault Handling, and general Azure lessons learned.

In this blog, I’ll discuss some post-release reporting issues that we faced on one of our projects and the solutions we implemented. On the technology side, we had SQL Server 2008 R2 and an MVC 4.0 application (both hosted in Amazon Web Services) in our production environment.

The Problem

The post-release reporting system was not performing to user expectations. For most of the high-volume reports (50K to 200K rows in the report output), we were getting request timeout errors. The client SLA for response time was two minutes; hence, any report (big or small) had to return data within two minutes. All the reports were designed using SQL Server Reporting Services 2008 R2. In all, there were close to 40 reports with such timeout issues. Read More…

As SQL developers, we develop a well-defined understanding of relational data structures over the years, which makes it feel natural to manipulate relational data using the SQL query language. Switching to MultiDimensional eXpressions (MDX) queries can be painful if some basic differences between SQL and MDX are not clearly understood. In this article, I will explain those differences from a SQL developer’s point of view and illustrate them via some examples.

Read More…

I’ve been working with PowerShell lately, and I wrote a couple of functions to help verify server permissions in Microsoft SQL Server.

Why would I want to verify server permissions instead of verifying server roles? In my case, I’m interested in knowing if a particular user is able to create a database on a SQL Server. While the dbcreator role does allow a user to create a database, there are other roles that will also allow a user to create a database, such as sysadmin. Additionally, if a user is a member of an AD Group that has the role assigned, then the role is not directly assigned to the user. In the end, I found it simplest to check for the actual CREATE ANY DATABASE permission.

In this post, we will see how to check if the current AD user has a specific server permission and how to check if an AD account other than the current user has a specific server permission.
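The PowerShell functions are in the full post, but the underlying check can be as simple as the following T-SQL. The login name is a placeholder, and impersonating another login requires IMPERSONATE permission on it:

    -- Does the current login effectively hold CREATE ANY DATABASE? (1 = yes, 0 = no)
    SELECT HAS_PERMS_BY_NAME(NULL, NULL, 'CREATE ANY DATABASE') AS CanCreateDatabase;

    -- Or list every effective server-level permission for the current login.
    SELECT permission_name FROM fn_my_permissions(NULL, 'SERVER');

    -- Check an AD account other than the current user by impersonating its login.
    EXECUTE AS LOGIN = N'CONTOSO\SomeUser';
    SELECT HAS_PERMS_BY_NAME(NULL, NULL, 'CREATE ANY DATABASE') AS CanCreateDatabase;
    REVERT;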

Read More…

I recently attended SPC12 with many of my colleagues from AIS. One of the sessions I really enjoyed was High Availability Solutions with SharePoint Server 2013, delivered by Bill Baer. This session was geared toward the IT Pro (admin) audience and detailed the options for making SharePoint highly available.

During this session I found it interesting how much time was spent talking about mirroring.  Mirroring is now considered a deprecated technology but is still supported by SharePoint 2013.  Today I’d like to break down the session and talk about my thoughts on each point.

Read More…