If you want to create robust automated workflows in Azure, integrating Azure Logic Apps with Azure Functions can be a valuable solution. Azure Logic Apps is a Platform as a Service (PaaS) offering with extensive options for creating powerful workflows. This service is beneficial if you need a structured series of events to support your Azure-hosted application(s). To craft workflows, you can use a "no-code" method: Azure Logic Apps offers a graphical approach to creating workflows, which makes it easy to pair your other Azure applications with your logic app.

Why Integrate?

One of the big reasons to integrate these two services is to support a systematic end-to-end solution. You can add your essential function(s) to your workflow in a few steps, creating a more robust cloud-based solution.

When your workflow process gets triggered, such as an email reaching an inbox, your Azure Function can perform its specific business logic after the trigger fully executes. Then, the workflow can branch off and perform more actions.

Using Azure Functions in your workflow also allows you to capture the HTTP status code from the resulting function call. If a specific status code is received, such as an HTTP 500, logic can be put in place to email support personnel, or the results of the failed function call can be inserted into Azure Storage. There are many possibilities.

Getting Started

There are a few prerequisites for integrating an Azure Function into a logic app: you will need to create both a logic app and a Function App in Azure. I will be using the out-of-the-box HTTP trigger Azure Function from Visual Studio, and it will simply return "Hello World" when it receives a request. Once the Azure Function service has been created, and the code deployed to it, the next area of focus is setting up the logic app workflow.
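For reference, a trimmed sketch of that function is below; the class name is ours, and your template version may differ slightly.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloWorldFunction
{
    [FunctionName("HelloWorldFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
        ILogger log)
    {
        // Return a simple greeting for the logic app workflow to consume.
        log.LogInformation("HTTP trigger function received a request.");
        return new OkObjectResult("Hello World");
    }
}
```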

Setting up the Logic App Workflow

There are two ways to set up a workflow: the first is an infrastructure as code (IaC) approach, and the second is a GUI approach. For this walkthrough, I will use the GUI. The first thing you will need for the logic app is a trigger, which is the event that starts your workflow. There are many distinct types of triggers to help carry out different tasks; I will be using the timer trigger for this example.

[Image: setting the notification frequency for the timer trigger in the Logic Apps designer]

After you have published your code to your Azure Function, you will be able to add it as an Azure Function action. One thing to note about the Azure Function action is that it handles your function's authentication. So, if your endpoint has the function authorization level assigned, the action takes care of authenticating so you can use the function in your workflow. Once the trigger activates, it will call the Azure Function; from there, it is up to the function to handle the request.

Azure Function Request

The Azure Function will process the request and fetch the correct data for the rest of the workflow to process. The next step is to create a condition action, which runs like a conventional if statement. After the Azure Function executes, we want to verify whether the request was successful.

What is neat about using logic apps is that when you use a service, like an Azure Function, the designer detects specific properties from that service for use in later actions. For the condition action, we can grab the response status code of the function that just ran and build custom logic around the status codes we care about. For this action, we will check for an HTTP 200 status code. When you create a condition, it automatically generates true and false branches. From there, you can set up what happens in each scenario.
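Although this walkthrough stays in the designer, it can help to see roughly what the designer generates behind the scenes. Here is a simplified sketch of such a condition in workflow definition JSON, with illustrative action and function names:

```json
"Check_status_code": {
  "type": "If",
  "expression": {
    "and": [
      { "equals": ["@outputs('HelloWorldFunction')['statusCode']", 200] }
    ]
  },
  "actions": {},
  "else": { "actions": {} }
}
```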

For the following steps, we will set up one of my favorite parts of using a logic app: integrating with Outlook! There are many Outlook tasks available; for this example, we will use the send email task.

Send Email Outlook Task

When you set up an Outlook task, you must first log in with your desired Outlook account. This will produce an Office 365 API connection resource in your resource group, allowing the logic app workflow to work properly with Outlook!

After you log in, you can start using Outlook tasks. For this part of the workflow, we are going to send an email to the recipient. In the body of the email, we will send the response we get from the Azure Function we called in an earlier step. However, if the Azure Function call fails, then we can return a default email message to the recipient.

[Image: the send email action using the Azure Function's response as the email body]

This concludes the setup for the Azure Logic App workflow that now integrates with an Azure Function. To run your newly created workflow, simply navigate to the “Overview” tab of your logic app resource. From there, click on “Run Trigger.”

[Image: the email produced by the workflow's failure path, shown in Outlook]

Congratulations! Your new workflow is now ready to run.

After the timer interval has passed, you will find a new email in your inbox from the Azure Logic App workflow.

Closing Thoughts

The beauty of using Azure Logic Apps is the customizability that it provides. The possibilities are almost endless. Whether your workflow is simple or complex, you have the tools at your disposal to carry out your systematic end-to-end flow. This post shows you one flavor of logic apps and the type of workflow you can create.

Building cross-platform applications is an excellent option for many organizations that need to support different technology vendors. Today, we can build websites that reach a massive audience regardless of the device that they use. Additionally, cross-platform mobile applications are an attractive option. All that is left now are the beloved applications that sit on our desktops. With tools such as Electron, desktop applications do not have to suffer from a lack of operating system support.

Electron is a framework that allows developers to create cross-platform desktop applications using a combination of HTML, CSS, and JavaScript. What makes Electron a viable solution is that it pairs with JavaScript frameworks very well. This means that there is generally a lower learning curve when injecting JavaScript into an Electron solution. Since JavaScript is such a popular option in today’s application development domain, it can make sense to leverage that experience in an Electron solution.
Developers can use any modern JavaScript framework, such as Angular, React, or Vue, within an Electron application.

For this post, I will be discussing how you can pair Electron with Vue to build your next cross-platform desktop application. I will cover some of the reasons why Vue should be considered as a viable integration option and highlight some useful features of Electron that I have been exposed to.

So Why Use Vue with Electron?

While there is no perfect framework to pair with Electron, Vue offers some advantages. It is worth noting that the choice of the framework will depend on several factors. These are just general observations and findings from my experience with Vue.

Reasons to consider Vue:

  • Lower learning curve for new developers
  • Well-organized single-file component template for Vue files
  • Less boilerplate code inflating the project size
  • Easy customization of the application builder (this may depend on how you scaffold your Electron solution)
  • Swift prototyping of small to medium-sized applications

A word of caution: if you choose Vue with Electron, the documentation is scarce in some areas. I assume this is because Vue is not as popular as JavaScript frameworks like React and Angular.

During my time using Electron with Vue, I have come across several useful features. I will provide a few examples of these features, and hopefully, this will close the gap when implementing them in your next Electron/Vue application.

Custom Menus

One common scenario when building Electron applications is coming up with your own menu schema. Electron ships with a default menu that includes options you would find in your Chrome browser, since Electron runs on Chromium. That said, the default menu will probably not be good enough for a production build of your application. Depending on your situation, you might want to withhold certain menu features from your users, such as the developer tools or the ability to force reload the application. Customizing your application's menu is a straightforward process.

You can customize the menu within the Electron main process file.

[Figure 1: a custom menu defined in the Electron main process]
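As a rough sketch, assuming illustrative menu labels, a custom menu template might look like this:

```js
// main.js (Electron main process): build and apply a custom application menu
const { app, Menu } = require('electron')

const menuTemplate = [
  {
    label: 'File',
    submenu: [
      { label: 'Exit', click: () => app.quit() }
    ]
  },
  {
    label: 'Help',
    submenu: [
      { label: 'About', click: () => { /* show an about dialog */ } }
    ]
  }
]

// Note: no developer tools or force reload entries for production users
Menu.setApplicationMenu(Menu.buildFromTemplate(menuTemplate))
```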

Custom Events

In my experience using Vue with Electron, I have used Electron specific events to accomplish certain tasks. Depending on your application, you will more than likely need to use events. One common scenario would be to show a confirmation popup window when you want to exit the application. This not only utilizes Electron’s built-in events, but it uses custom Electron events as well. Below is a simple implementation for this.

First, a preload.js file is needed so that Electron events can be properly used within Vue components.

[Figure 2: the preload.js file]
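Here is a minimal preload sketch, assuming a contextBridge-based setup; the 'electronApi' name and channels are our own:

```js
// preload.js: expose a small, controlled IPC surface to the Vue renderer
const { contextBridge, ipcRenderer } = require('electron')

contextBridge.exposeInMainWorld('electronApi', {
  // publish an event to the Electron main process
  send: (channel, payload) => ipcRenderer.send(channel, payload),
  // subscribe to events sent from the Electron main process
  on: (channel, handler) => ipcRenderer.on(channel, (_event, ...args) => handler(...args))
})
```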

Next, the preload file will need to be referenced in the Electron main process file. The preload file path will depend on both the environment and operating system the app is running on. Additionally, the preload file will need to be added as an extra resource within a builder config file for production builds. This process will be discussed in a later section.

Within the method for creating the Electron window:

[Figure 3: referencing the preload file when creating the Electron window]
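A sketch of that wiring is below. The path logic assumes the extra-resources setup covered later in this post:

```js
// main.js: point the BrowserWindow at the preload script
const path = require('path')
const { app, BrowserWindow } = require('electron')

function createWindow () {
  const preloadPath = app.isPackaged
    ? path.join(process.resourcesPath, 'Vue', 'preload.js') // packaged: shipped as an extra resource
    : path.join(__dirname, 'preload.js')                    // development: sits next to main.js

  const win = new BrowserWindow({
    width: 1024,
    height: 768,
    webPreferences: {
      preload: preloadPath,
      contextIsolation: true
    }
  })

  win.loadFile('index.html') // or your dev server URL while developing
  return win
}
```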

In the Electron main process file, you can then send out a custom event to any Vue components that are subscribed to it.

[Figure 4: the main process publishing a custom event]
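Below is a sketch of that flow in the main process; the channel names are illustrative:

```js
// main.js: intercept the window close and let the renderer confirm it first
const { app, ipcMain } = require('electron')

let isExitConfirmed = false

function wireExitConfirmation (win) {
  win.on('close', (event) => {
    if (!isExitConfirmed) {
      // cancel the close and publish a custom event to the Vue side
      event.preventDefault()
      win.webContents.send('app-close-requested')
    }
  })

  // the Vue component replies on this channel once the user confirms
  ipcMain.on('confirm-exit', () => {
    isExitConfirmed = true
    app.quit()
  })
}
```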

A Vue component can then perform certain logic once this event occurs.

[Figure 5: a Vue component reacting to the custom event]
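On the Vue side, a component subscribed to that event might look roughly like this, using the 'electronApi' bridge from the preload sketch:

```js
// ExitConfirmation.vue (script section): react to the main process close event
export default {
  data () {
    return { showConfirmDialog: false }
  },
  mounted () {
    // subscribe to the custom event published by the main process
    window.electronApi.on('app-close-requested', () => {
      this.showConfirmDialog = true
    })
  },
  methods: {
    confirmExit () {
      // tell the main process it is safe to quit
      window.electronApi.send('confirm-exit')
    }
  }
}
```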

Here are the results from clicking the exit button in the application.

[Figure 6: the exit confirmation dialog over the "Welcome to Your Vue App" screen]

Referencing External Resources

There might come a time when your application needs to reference dependent static resources. This could be a spreadsheet file containing data for U.S. states, cities, and zip codes, or a CSV file containing data about countries. It could also be a preload file, as previously mentioned in this post.

When your Electron/Vue application is built for production, the application gets bundled into an asar file; this is your whole application. If your application needs to load external files, such as a CSV file or a preload file, it will need to point to each specific file outside of that bundle. So how can you reference a dependent file from your Electron/Vue application? There are essentially two things you need to do.

1 – Customizing the Electron Builder

The first step is to customize the Electron builder within a Vue config file. Note that I have used electron-builder for building Electron applications, and it is what this example uses.

You will want to create a file called vue.config.js that resides in the root directory of your Electron/Vue solution. This file will, again, allow you to customize the builder for your Electron/Vue application. There is an option to specify any extra resources (or external resources) that your application may reference. The builder’s configuration will need to know where your resource is in your solution and where to place it.

Your config file should look something like this.

[Figure 7: the vue.config.js builder configuration]
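Assuming the project was scaffolded with the Vue CLI Electron Builder plugin, a sketch of the config might be:

```js
// vue.config.js: customize electron-builder through the Vue CLI plugin
module.exports = {
  pluginOptions: {
    electronBuilder: {
      builderOptions: {
        extraResources: [
          {
            from: 'src/extraResources/', // where the files live in the solution
            to: 'Vue'                    // copied under the install's Resources/Vue folder
          }
        ]
      }
    }
  }
}
```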

After the application gets built and installed onto the local machine, any resources within the extra resources section will be placed under the following paths:

  • Windows: C:\Users\<username>\AppData\Local\Programs\<app name>\Resources\Vue
  • Mac: /Applications/<app name>.app/Contents/Resources/Vue

2 – Referencing the External Resource

The last step is to reference the external resource, using the appropriate path, within your application. It is recommended that you specify both the development and production paths to your resource so that both environments can access it. This can be accomplished within the Electron main process file, where we can utilize Node's __dirname variable to help acquire the full path.

[Figure 8: resolving the resource path with __dirname]
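Here is a sketch of resolving that path for both environments; the development folder layout is an assumption:

```js
// main.js: resolve an external resource's full path in dev and production
const path = require('path')
const { app } = require('electron')

function getResourcePath (fileName) {
  if (app.isPackaged) {
    // production: extra resources are copied under the install's Resources/Vue folder
    return path.join(process.resourcesPath, 'Vue', fileName)
  }
  // development: build the full path from the main process file via __dirname
  return path.join(__dirname, 'extraResources', fileName)
}
```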

From there, you can add this logic within a custom, or predefined, Electron event. The event could either process the file and send the results back to any components subscribed to a “reply” event, or the event could send the full file path to any subscribed components for them to process the file.

In this example, I used an approach where the file is processed in the Electron main process file, and the file contents are sent to a subscribed component.

Below, a Vue component publishes an event to get a file's contents and then listens for a reply event.

[Figure 9: the Vue component requesting a file's contents]
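A sketch of that component, with illustrative channel and file names:

```js
// DataViewer.vue (script section): request a file's contents from the main process
export default {
  data () {
    return { fileContents: '' }
  },
  mounted () {
    // listen for the reply before publishing the request
    window.electronApi.on('file-contents-reply', (contents) => {
      this.fileContents = contents
    })
    window.electronApi.send('read-file-request', 'countries.txt')
  }
}
```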

Once the event gets published, the Electron main process will read the given file. Then it will reply with the contents of the file.

[Figure 10: the main process reading the file and replying with its contents]
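And a matching sketch on the main process side, which reads the file and replies with its contents:

```js
// main.js: read the requested file and reply to the subscribed component
const fs = require('fs')
const { ipcMain } = require('electron')

ipcMain.on('read-file-request', (event, fileName) => {
  // getResourcePath comes from the earlier path-resolution sketch
  const fullPath = getResourcePath(fileName)
  fs.readFile(fullPath, 'utf8', (err, contents) => {
    event.reply('file-contents-reply', err ? '' : contents)
  })
})
```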

The way you process your file will depend on the file type and the actual data that is contained within it. For this example, I used a text file. That said, I encourage you to use the appropriate logic to extract the contents of your file.

Closing Thoughts

Using Vue with Electron makes for an excellent duo when building cross-platform desktop applications. You can craft custom menu options that add to the user experience. Electron and Vue also allow you to develop custom events that enable more robust business logic. Finally, crucial assets such as external files can be properly processed inside an Electron/Vue solution, which helps when you need swift data processing. All in all, Electron and Vue give you the tools to build rich, customizable desktop applications.

Web development is arguably the most popular area of software development right now. Software developers can make snappy, eye-catching websites, and build robust APIs. I’ve recently developed a specific interest in a less discussed facet of web development: web scraping.

Web scraping is the process of programmatically analyzing a website's Document Object Model (DOM) to extract specific data of interest. Web scraping is a powerful tool for automating certain tasks, such as filling out a form, submitting data, etc. Some of these abilities will depend on whether the site allows web scraping. Keep in mind that some websites use cookies or session state, so some automation tasks might need to work within the site's cookie/session handling. It should go without saying, but please be a good Samaritan when web scraping, since it can negatively impact site performance.

Getting Started

Let’s get started with building a web scraper in an Azure Function! For this example, I am using an HTTP Trigger Azure Function written in C#. However, you can have your Azure Function utilize a completely different trigger type, and your web scraper can be written in other languages if preferred.

Here is a list of Azure resources that were created for this demo:

[Image: the Azure resources created for this demo]

Before we start writing code, we need to take care of a few more things first.

Let’s first select a website to scrape data from. I feel that the CDC’s COVID-19 site is an excellent option for this demo. Next, we need to pick out what data to fetch from the website. I plan to fetch the total number of USA cases, new USA cases, and the date that the data was last updated.

Now that we have that out of the way, we need to bring in the dependencies for this solution. Luckily, there is only one dependency to install: the HtmlAgilityPack NuGet package. Once that package has been installed into our solution, we can start coding.

Coding the Web Scraper

Since the web scraper component will be pulling in multiple sets of data, it is good to capture them inside a custom resource model. Here is a snapshot of the resource model that will be used for the web scraper.

[Code figure: the web scraper's resource model]
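As a sketch, with property names inferred from the data we plan to fetch:

```csharp
// Captures the three pieces of data scraped from the CDC site.
public class CovidStats
{
    public string TotalCases { get; set; }
    public string NewCases { get; set; }
    public string LastUpdated { get; set; }
}
```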

Now it’s time to start coding the web scraper class. This class will utilize a few components from the HtmlAgilityPack package that was brought into the project earlier.

[Code figure: the web scraper class]

The web scraper class has a couple of class-level fields, one public method, and a few private methods. The method “GetCovidStats” performs a few simple tasks to get our data from the website. The first step is setting up an HTML document object that will be used to load HTML and parse the actual HTML document we get back from the site. Then, there is an HTTP call out to the website we want to hit.

Right after that, we ensure the call out to the website results in a success status code. If not, an exception is thrown with a few details of the failing network call.

[Code figure: ensuring the call out succeeded]

We then load the HTML that we received back from the network call into our HTML document object. There are several calls to a method that performs the extraction of the data we are looking for. Now, you might be wondering what those long strings are in the method calls. Those are the full XPaths for each targeted HTML element. You can obtain one by opening the dev tools in your browser, selecting the HTML element, right-clicking it in the dev tools, and choosing "Copy full XPath".

[Code figure: loading the HTML and extracting data via full XPaths]
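Putting those pieces together, a sketch of the full web scraper class might look like this; the URL and XPaths are placeholders, so copy the real XPaths from your browser's dev tools:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using HtmlAgilityPack;

public class CovidWebScraper
{
    // Class-level fields: a shared HttpClient and the page we want to scrape.
    private static readonly HttpClient Client = new HttpClient();
    private const string CovidSiteUrl = "https://www.cdc.gov/covid-data-tracker"; // placeholder URL

    public async Task<CovidStats> GetCovidStats()
    {
        // The HtmlDocument object loads and parses the HTML we get back.
        var htmlDocument = new HtmlDocument();

        // HTTP call out to the website we want to hit.
        var response = await Client.GetAsync(CovidSiteUrl);

        // Ensure the call out resulted in a success status code.
        if (!response.IsSuccessStatusCode)
        {
            throw new Exception(
                $"Call to '{CovidSiteUrl}' failed with status code {(int)response.StatusCode}.");
        }

        // Load the HTML we received into the document object.
        htmlDocument.LoadHtml(await response.Content.ReadAsStringAsync());

        // Extract each piece of data via its full XPath (placeholder paths below).
        return new CovidStats
        {
            TotalCases = GetNodeText(htmlDocument, "/html/body/div[2]/main/div[3]/span[1]"),
            NewCases = GetNodeText(htmlDocument, "/html/body/div[2]/main/div[3]/span[2]"),
            LastUpdated = GetNodeText(htmlDocument, "/html/body/div[2]/main/div[2]/span")
        };
    }

    // Returns the inner text of the element at the given XPath.
    private static string GetNodeText(HtmlDocument document, string xpath)
    {
        var node = document.DocumentNode.SelectSingleNode(xpath);
        return node?.InnerText.Trim() ?? string.Empty;
    }
}
```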

Next, we need to set up the endpoint class for our Azure Function. Luckily for us, the out of the box template sets up a few things automatically. In the endpoint class, we are merely calling our web scraper class and returning its results to the client calling the Azure Function.

[Code figure: the Azure Function endpoint class]
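A sketch of that endpoint, based on the standard HTTP trigger template:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetCovidStatsFunction
{
    [FunctionName("GetCovidStats")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Web scraper function received a request.");

        // Call the web scraper class and return its results to the client.
        var scraper = new CovidWebScraper();
        var stats = await scraper.GetCovidStats();
        return new OkObjectResult(stats);
    }
}
```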

Now comes time to test out the Azure Function! I used Postman for this, and these are the results.

[Image: Postman results from calling the Azure Function]

Closing Thoughts

Overall, web scraping can be a powerful tool at your disposal if sites do not offer APIs for you to consume. It allows you to swiftly grab essential data off of a site and even automate specific tasks in the browser. With great power comes great responsibility, so please use these tools with care!

Background

On a previous project I was a part of, the client wanted to explore a chatbot application for its employees. The goal was for the chatbot to help increase the office's productivity. Certain skills would be developed to enable swift access to popular actions, such as opening a timesheet, password reset help, etc. The client also expressed a need for seamlessly adding new features to the chatbot when necessary. It was also decided that the chatbot would communicate with external services to fetch data. Taking what was discussed, we went to the drawing board to devise a plan for developing a scalable solution.

On top of the application having to be scalable, we decided to make the application as maintainable as possible too. Since this application would increase in size over time, it was key for us to lay down a foundation for how the chatbot would interact with classes and other services. As the architecture was finalized, it became apparent that there were critical dependencies on several Azure cognitive services, so it was important to ensure the chatbot application would remain maintainable while accommodating those services. To accomplish this, a cascading approach to calling our dependencies was used.

Before I delve into the cascading approach, I want to spend some time talking about bots and the services used alongside them. Ultimately, the main goal of a bot is to accept a request from a user and process it based on what they ask for. For example, a user can ask a bot about company policies, the weather, or recent documents they worked on, or ask it to open web pages.

LUIS

Now, in order to process those types of requests, Azure provides a couple of cognitive services to assist. One of these services is called LUIS (Language Understanding Intelligent Service). At a high level, LUIS determines an "intent" from statements (often called utterances) that you define in custom models that you build and train. For example, LUIS can receive an utterance of "What's the weather". When an intent is found, there will be a confidence score (a value ranging from 0 to 1, inclusive) associated with it. This score shows how confident the service was in determining the intent: the closer the value is to 1, the more confident the service was; the closer it is to 0, the less confident it was. In this example, the intent could be something like "GetWeather" with a 0.96 confidence score.

QnA Maker

Another cognitive service used with bot apps is QnA Maker. This service excels at housing data that is best suited to question-and-answer pairs. The question-and-answer pairs are stored in what's called a knowledgebase. A knowledgebase typically encapsulates data that pertains to a specific business domain (i.e., payroll, HR, etc.). Like LUIS, QnA Maker utilizes machine learning, cognitive models, and confidence scores. When a QnA Maker knowledgebase receives a question, it will use machine learning to determine if there is an answer associated with that question. A confidence score (ranging from 0 to 1, inclusive) will be associated with the results. If you would like to learn more about bot development and the different cognitive services offered in Azure, check out the links at the bottom of this post.

The Initial Approach

The chatbot solution included one LUIS service along with three separate QnA Maker knowledgebases. In our initial approach, we created intent definitions in LUIS that corresponded with each of our QnA Maker knowledgebases. We then trained LUIS to recognize whether the user's message was a question that could be answered by one of the knowledgebases. When messages came to the bot from the user, we would always send them to LUIS first. If it returned an intent that corresponded with one of our QnA Maker knowledgebases, we would redirect the request to the identified knowledgebase, which would hopefully recognize the question and return an answer. As a result, each call to a knowledgebase was dependent on the LUIS service correctly recognizing intents. This was not an ideal approach.

Having the QnA Maker knowledgebases dependent on the LUIS service was an issue. For a knowledgebase to get a hit, the LUIS model would need to be properly trained and up to date with data that closely matched that of each QnA Maker knowledgebase. Consequently, if the LUIS model was updated in a way that impacted a given QnA Maker knowledgebase, that knowledgebase would also have to be updated and retrained to stay in sync with the LUIS model. As you can probably see, this poses a maintenance concern.

Cascading Solution

So, in order to alleviate this concern, a different approach was taken. The LUIS model would have no knowledge of any data from the QnA Maker knowledgebases and vice versa. That meant updating the LUIS model to remove data that corresponded to any of the QnA Maker knowledgebases, and doing the same within each knowledgebase. This made LUIS and QnA Maker completely independent of each other and led to a cascading approach to calling each of our dependencies, resolving the looming maintenance issue. (See image below)

[Diagram: the cascading approach for calling LUIS and the QnA Maker knowledgebases]

It is worth noting that we used Microsoft’s Bot Framework SDK for this solution, but the strategies you will see in this post can be used for any chatbot technology.

If the LUIS request handler was unable to handle the request, no problem! The next request handler would attempt to handle the request. This flow would proceed until one of the request handlers successfully handled a request. If none were successful, then the chatbot would tell our telemetry client, in our case Azure App Insights, to log the unrecognized user message. This would provide insight into model training opportunities. Finally, the chatbot would return a default message back to the client. (See image below)

[Diagram: cascading through each request handler before returning a default message]
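As an illustration of that flow, the dispatch logic might look something like the sketch below; the types and messages are our own, not part of the Bot Framework SDK:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.Bot.Builder;

public interface IRequestHandler
{
    // Returns true if this handler recognized and handled the request.
    Task<bool> TryHandleAsync(ITurnContext turnContext);
}

public class CascadingDispatcher
{
    // Ordered handlers: the LUIS handler first, then each QnA Maker handler.
    private readonly IReadOnlyList<IRequestHandler> _handlers;
    private readonly TelemetryClient _telemetryClient;

    public CascadingDispatcher(IReadOnlyList<IRequestHandler> handlers, TelemetryClient telemetryClient)
    {
        _handlers = handlers;
        _telemetryClient = telemetryClient;
    }

    public async Task DispatchAsync(ITurnContext turnContext)
    {
        foreach (var handler in _handlers)
        {
            // The first handler to successfully handle the request ends the cascade.
            if (await handler.TryHandleAsync(turnContext))
            {
                return;
            }
        }

        // No handler recognized the message: log it for model training insight.
        _telemetryClient.TrackTrace($"Unrecognized user message: {turnContext.Activity.Text}");

        // Fall back to a default message.
        await turnContext.SendActivityAsync("Sorry, I don't understand. Could you rephrase that?");
    }
}
```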

Cascading Solution: Confidence Score Thresholds

Each result returned by a cognitive service holds a confidence score. This data proved to be very useful for the chatbot. In the LUIS and QnA Maker request handler classes, there was logic to determine if the returned confidence score met a given threshold. If the score was high enough, meaning the service was confident that it found the right data, then the given request handler can proceed with handling the request. If the score was found to be lower than the threshold, then the request handler does not continue with handling the request. (See image below of a sample method to handle an intent request)

[Code figure: a sample method handling an intent request against a confidence threshold]
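For example, here is a sketch of a LUIS request handler enforcing a threshold (0.70 assumed) with the SDK's LuisRecognizer:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;

public class LuisRequestHandler : IRequestHandler
{
    private const double ConfidenceThreshold = 0.70;
    private readonly LuisRecognizer _recognizer;

    public LuisRequestHandler(LuisRecognizer recognizer)
    {
        _recognizer = recognizer;
    }

    public async Task<bool> TryHandleAsync(ITurnContext turnContext)
    {
        var result = await _recognizer.RecognizeAsync(turnContext, CancellationToken.None);
        var (intent, score) = result.GetTopScoringIntent();

        // Below the threshold: decline, so the next handler in the cascade runs.
        if (string.IsNullOrEmpty(intent) || score < ConfidenceThreshold)
        {
            return false;
        }

        // Confident enough: proceed with handling the intent.
        await turnContext.SendActivityAsync($"Handling the '{intent}' intent.");
        return true;
    }
}
```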

Instead, the next request handler will be told to execute. Having this implementation in place helps us be explicit about what counts as an acceptable confidence score. That said, determining a confidence score threshold depends on how extensive your cognitive models are. If your cognitive models account for various phrasings and spellings of keywords, then your cognitive services will have an easier time identifying intents and answers. In practice, I found 0.70 and 0.75 to be satisfactory threshold values.

Cascading Solution: Dialog Turn Status

The final piece to the cascading solution was handling the progression or conclusion of a dialog turn. Think of a dialog turn as a face-to-face conversation. You might initiate the conversation with a question, which is a turn. Then, the other person would answer your question. That is also a turn. This can continue until the conversation ends. Conversations with a bot follow a similar flow. When it’s the bot’s “turn” to reply to the client, it performs its logic then responds. Below is a diagram, provided by Microsoft, illustrating the high-level nature of a conversation with a bot.

[Diagram: the turn-based flow of a conversation between a user and a bot]

Image from: https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-basics?view=azure-bot-service-4.0&tabs=csharp
In the cascading solution, we were explicit about when the bot's turn was over and when it should continue processing the request. Ultimately, when the chatbot found an answer to the user's question or request, we would tell the chatbot that its turn was complete. Conversely, we had several scenarios where we told the chatbot to keep its turn going: one was if LUIS did not return an intent or the confidence score was below our threshold; another was if a QnA Maker knowledgebase did not find an answer to the question passed to it. After each request handler executes, there is a check to verify whether the turn is complete.
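To make the turn status explicit, the handlers can report a status instead of a plain boolean. Here is a variant sketch for a QnA Maker handler, with names of our own choosing:

```csharp
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.QnA;

// Whether the bot's turn is complete or the cascade should keep going.
public enum TurnStatus
{
    Continue,
    Complete
}

public class QnaRequestHandler
{
    private const float ConfidenceThreshold = 0.75f;
    private readonly QnAMaker _knowledgebase;

    public QnaRequestHandler(QnAMaker knowledgebase)
    {
        _knowledgebase = knowledgebase;
    }

    public async Task<TurnStatus> HandleAsync(ITurnContext turnContext)
    {
        var answers = await _knowledgebase.GetAnswersAsync(turnContext);

        // No answer, or not confident enough: keep the bot's turn going.
        if (answers == null || answers.Length == 0 || answers[0].Score < ConfidenceThreshold)
        {
            return TurnStatus.Continue;
        }

        // Answer found: reply and end the bot's turn.
        await turnContext.SendActivityAsync(answers[0].Answer);
        return TurnStatus.Complete;
    }
}
```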

Closing Thoughts

The cascading approach for handling calls to the different services and knowledgebases was a huge win for this bot application. It offers clear, concise, and manageable code. Every LUIS and QnA Maker cognitive model is independent of the others, and each request handler is independent as well. In addition, the implementation of confidence score thresholds ensured that we were explicit about how we further processed client requests. Finally, adding progression and termination logic for a dialog turn ensured that each turn would be appropriately processed. This whole approach helped paint a clear picture of what our chatbot was doing.

Links: