From C# Developer to DevOps Engineer

Over the last couple of years, I’ve transitioned from being primarily a C# developer to working as a DevOps Engineer. Instead of C# and SQL, I now work almost exclusively with JSON, YAML, and PowerShell. While I was very familiar with Visual Studio 2013/2015/2017 and its excellent support for the .NET work I did over the years, I found the experience of building DevOps solutions in it underwhelming. At the time, IntelliSense for Azure Resource Manager (ARM) and Terraform templates, GitLab and Azure DevOps pipelines, and PowerShell was either non-existent or incomplete. In addition, Visual Studio was quite the resource hog when I didn’t need all the extras it provides.

Enter Visual Studio (VS) Code

Now, I had downloaded VS Code soon after it was released, with the intent to use it at some point, if only to say I had. However, after seeing Visual Studio Code used in some ARM template videos where snippets were being used, I decided to give it a real try. Like most Integrated Development Environments (IDEs), VS Code isn’t truly ready to go right after installation. It’s taken me some time to build my configuration up to where it is today, and I’m still learning about new features and extensions that can improve my productivity. I want to share some of my preferences.

I want to point out a couple of things. First, I’ve been working primarily with GitLab Enterprise, Azure DevOps Services, and the Azure US Government Cloud. Some of these extensions are purely focused on those platforms. Second, I use the Visual Studio Code – Insiders release rather than the regular Visual Studio Code version. I have both installed, but I like having the newest stuff as soon as I can. For this post, that shouldn’t be an issue.

Theming

As long as there’s a decent dark color theme, I’m content. The bright/light themes give me headaches over time. VS Code’s default dark theme, Dark+, fits the bill for me.

One type of theme I didn’t know I needed until I stumbled across it was the icon theme. I used to have the standard, generic folder and file icons (the Minimal icon theme in VS Code), which made it difficult to differentiate between PowerShell scripts, ARM templates, and other file types at a glance. There are a few icon themes included, but I’m using the vscode-icons theme. It’s one of the better options, though I’m contemplating making a custom one since it doesn’t have an icon for Terraform variables files (.tfvars), and I’d like a different icon for YAML files. If the included themes aren’t suitable for you, the marketplace offers plenty of color themes, file icon themes, and product icon themes.

Figure 1 – VS Code’s Minimal icon theme

Workspaces

A workspace is, as the VS Code documentation puts it, “the collection of one or more folders that are opened in a VS Code window.” A workspace file is created that contains the list of those folders along with any settings for VS Code and extensions. I’ve only recently started using workspaces because I wanted different settings configured for different projects.
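Here’s a minimal sketch of what a .code-workspace file can look like. The folder paths are made up, and the azure.cloud value is the kind of extension setting I override per workspace (I believe that’s the Azure Account extension’s setting key, but double-check it in the Settings UI):

{
  "folders": [
    { "path": "infrastructure" },
    { "path": "pipelines" }
  ],
  "settings": {
    // Hypothetical workspace-level override; see the Azure Account section below.
    "azure.cloud": "AzureCloud"
  }
}

Workspace files allow comments (they’re JSON with comments), so you can annotate why a setting is overridden.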

Extensions

Extensions in Visual Studio Code provide enhancements that improve productivity: code snippets, new language support, debuggers, formatters, and more. I have nearly 60 installed (including several Microsoft pre-installs), but we’ll focus on the handful that I rely on regularly.

Figure 2 – VS Code workspace configuration, showing the choice of Azure Cloud referenced in the Azure Account extension section below.

Azure Account

The Azure Account extension provides login support for other Azure extensions. By itself, it’s not flashy, but a few dozen other Azure extensions use the account it signs in with to reference the Azure resources they target. This extension has a setting, Azure Cloud, that was the main reason I started adopting workspaces. The default is the commercial cloud, AzureCloud. I’ve changed it at the user level to AzureUSGovernment, but some of my recent projects use AzureCloud, so I’ve set the workspace setting for those.

Azure Resource Manager (ARM) Tools

This extension will make your ARM template tasks much more manageable! It provides an extensive collection of code snippets for scaffolding out many different Azure resources. Schema support provides template autocompletion and linting-like validation, and a template navigation pane makes finding resources in a larger template easy. There is also support for parameter files, linked templates, and more.
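To give a sense of what the snippets scaffold, here’s a bare-bones template sketch; the storage account resource, names, and apiVersion are placeholders rather than anything specific to my projects:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ],
  "outputs": {}
}

The extension’s autocompletion and validation kick in on the type, apiVersion, and property names as you fill in a resource.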

HashiCorp Terraform

Terraform is a HashiCorp offering, and they provide an extension that supports Terraform files (.tf and .tfvars), including syntax highlighting. While only a few snippets are included, the autocompletion when defining new blocks (e.g., resource and data blocks) is quite extensive.
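As a small, purely illustrative example of the kind of block that autocompletion helps with (the provider, names, and region below are placeholders):

# Provider configuration omitted for brevity
resource "azurerm_resource_group" "example" {
  name     = "rg-devops-demo"
  location = "usgovvirginia"
}

Start typing resource or data and the extension suggests block types and argument names as you go.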

Figure 3 – Terraform autocompletion

GitLens – Git Supercharged

GitLens is full of features that make tracking changes in code easily accessible. I installed this extension for the Current Line Blame feature, which shows who last changed the current line, when they changed it, and more. In addition, there are sidebar views for branches, remotes, commits, and file history that I use regularly. There are several other features I either don’t use or wasn’t even aware of until writing this post; all in all, it’s an excellent tool for anyone working in Git repositories.
GitLens Line Blame

MSBuild Project Tools

I had a recent project that contained a relatively large MSBuild deployment package that needed to be updated to work with the changes made to migrate the application to Azure. I haven’t worked with MSBuild in several years. When I did, I didn’t have all the syntax and keywords committed to memory. This extension provides some essential support, including element completion and syntax highlighting. It did make the project a little easier to modify.

PowerShell Preview

I’ve become a bit of a PowerShell fan. I was introduced to it while working with SharePoint, but since I’ve been doing DevOps work in conjunction with Azure, I’ve genuinely started enjoying writing scripts. The less-than-ideal support for PowerShell (at the time, at least) in Visual Studio 20xx was the main reason I gave VS Code a shot. This extension (or the stable PowerShell extension) provides the excellent IntelliSense, code snippets, and syntax highlighting you’d expect, and it also has the “Go to Definition” and “Find References” features I relied on when writing C#. In addition, it incorporates linting/code analysis with PowerShell Script Analyzer, which helps you write clean code that follows best practices.
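The same analysis the extension runs in the editor can also be run on demand from a console; a quick sketch, with a made-up script path:

# Install PSScriptAnalyzer (once) and lint a script
Install-Module -Name PSScriptAnalyzer -Scope CurrentUser
Invoke-ScriptAnalyzer -Path .\deploy.ps1 -Severity Warning, Error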

PowerShell Preview

PowerShell (stable)

Wrapping Up

I have far more than these extensions installed, but these are the ones I use the most when doing DevOps work. Some of the others either haven’t been used enough yet, aren’t helpful for a DevOps Engineer, or weren’t interesting enough to list for the sake of brevity.

That said, I’ve created a Gist on my GitHub with the complete list of extensions I have installed, if that’s of interest. Visual Studio Code is an amazing tool that, with the proper configuration and extensions, has increased my productivity as a DevOps Engineer.

Have you spent a lot of time getting comfortable with Team Foundation Server (TFS) or Azure Pipelines, only to be switched to GitLab CI/CD for a project? That’s what happened to me about four months ago. Since then, I’ve been learning the differences between the tools and have run into several of them firsthand. This post aims to be a primer on those differences and to save you a lot of research time.

Classic Pipelines vs YAML

Before this project, I had worked mostly with the Classic Editor in TFS and Azure Pipelines. While it’s a great visual designer for automating builds, Classic pipelines aren’t very portable from project to project or environment to environment because the exported JSON files reference other resources, like Service Connections, by ID rather than by name. When importing these JSON files into another TFS or Azure Pipelines instance, you need to make changes to update the affected references. This is where YAML, which stands for “YAML Ain’t Markup Language,” comes in. YAML pipelines are the future for both Azure Pipelines and GitLab CI/CD. A YAML pipeline references a Service Connection by name, making it easier to reuse the pipeline in other environments as long as the connection name stays the same.

What I didn’t fully appreciate before diving into GitLab is that YAML isn’t really a language so much as a file format. I had expected the little Azure Pipelines YAML I had written to translate almost perfectly to GitLab. However, GitLab doesn’t use much of the syntax that Azure Pipelines does. I’ve listed some of the differences I’ve become aware of below, but this is likely just a partial list.

(Table: Azure Pipelines terms and descriptions)
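To give a feel for the syntax gap, here’s a rough sketch of the same trivial build step expressed both ways; the names and image are illustrative:

# azure-pipelines.yml
trigger:
  - main
pool:
  vmImage: ubuntu-latest
steps:
  - script: echo "building..."
    displayName: Build

# .gitlab-ci.yml
stages:
  - build
build-job:
  stage: build
  script:
    - echo "building..."

Same outcome, but almost none of the keywords carry over.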

For more details, here are the YAML references for GitLab and Azure.

Integration with Azure

Azure Pipelines offers Service Connections to connect to remote services like Azure, Docker, or GitHub.
GitLab doesn’t have an equivalent offering; those connections have to be made within jobs. One concern is that you have to keep the connection credentials in CI/CD variables, and it’s important to mask sensitive values like passwords so they don’t appear in job logs. It also means you can’t use Azure Key Vault secrets as pipeline variables, something my previous projects relied on heavily.
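For example, a deployment job might authenticate with a service principal using masked CI/CD variables. This is only a sketch; the variable names are placeholders you’d define in the project’s CI/CD settings:

deploy:
  stage: deploy
  script:
    # AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID are masked variables
    - az cloud set --name AzureUSGovernment
    - az login --service-principal -u "$AZURE_CLIENT_ID" -p "$AZURE_CLIENT_SECRET" --tenant "$AZURE_TENANT_ID"
    - az account show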

Runners

While this is project-specific, the GitLab runners made available to us don’t have all the tools my project needs. Another team that develops the runners has provided some basic images, but they don’t include tools like the Azure or JFrog CLIs, which are required for our environment. This will likely be remedied, but in the meantime it has forced us to learn how to install those tools ourselves. That isn’t a problem in environments with open internet access; however, my project will be isolated from the web in the staging and production environments, making it challenging to use tools like Chocolatey, NuGet, or npm to install packages from the web. The other team is still working to resolve this, but the only resolution I see is maintaining an internal package repository in something like Artifactory and having the images or scripts pull from there.

I’m far more comfortable with YAML because of this project. While GitLab is powerful and has received a few excellent features since I began the project, the creature comforts I’m used to with Azure Pipelines (notably the Service Connections) require additional work that seems unnecessary.

In my previous post, I discussed lessons learned from migrating SQL Server databases from on-premises to Azure SQL Database. This post shares several of the issues and solutions I ran into while automating Azure resource provisioning and application code deployment. Before this project, I had never used Azure DevOps or Team Foundation Server for anything except a code repository or task tracking. Most of these applications would use Build and Release pipelines to configure the Azure subscription and deploy the applications. This was also my first foray into Azure Resource Manager (ARM) templates.

Azure Resource Manager (ARM) Templates

ARM templates provide a declarative method for provisioning and configuring Azure resources. These JSON files use a complex schema that provides deep configuration options for most of the resources available in Azure. Some of the key benefits of ARM templates are:

  • Idempotent: templates can be redeployed many times, returning the same result. If the resource exists, changes will be applied, but the resource will not be removed and recreated (see the deployment sketch after this list).
  • Intelligent processing: Resource Manager can manage the order of resource deployment, provisioning resources based on dependencies. It will also process resources in parallel whenever possible, improving deployment times.
  • Modular templates: JSON can get incredibly hard to maintain when files are large. ARM provides Linked Templates to split resources into separate files to simplify maintenance and readability. This also provides the opportunity to reuse some templates in multiple deployment scenarios.
  • Exportable: Resources have an option to export the template in Azure Portal. This is available if you’re creating a new resource at the validation stage or in the resource’s management pane. A Resource Group also provides a method for exporting the template for multiple resources. This was very useful for understanding more advanced configurations.
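As a point of reference for the idempotency bullet above, deploying (or redeploying) a template is a single, repeatable command. A minimal Azure PowerShell sketch, with placeholder names and file paths:

# Redeploying the same template applies changes without recreating existing resources
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-demo" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" `
    -Mode Incremental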

For the first few projects, I built large templates that deployed several resources. This presented several hurdles to overcome. First, large templates are hard to troubleshoot. Unlike scripting solutions, no debugger allows you to step through the deployment. The portal does provide logging, but some of the error descriptions can be vague or misleading, especially in more advanced configuration settings. In situations where there are multiple instances of one type of resource, there may be no identifying information about which instance caused the error. In theory, linked templates would be a way to handle this, but linked templates require a publicly accessible URI, and the client’s security rules did not allow that level of exposure. The best solution was to add resources one at a time, testing until successful before adding the next resource.

I also had some minor issues with schema support in Visual Studio Code. The worst were false-positive warnings claiming the value of a resource’s “apiVersion” property was invalid, despite the schema documentation showing that it is valid. This didn’t cause any deployment errors, just the yellow “squiggly” line under that property. Another inconvenience to note: when exporting a template from the portal, not all resources will be included in the template. This was most noticeable when I was trying to find the right way to add an SSL certificate to an App Service. The App Service had a certificate applied to it, but the exported template did not include a resource of the type Microsoft.Web/certificates.

While templates are supposed to be idempotent, there are some situations where this isn’t the case. I found this out with Virtual Machines and Managed Disks. I had a template that created the disks and attached them to the VM, but found out later that the disk space was too small. Changing the “diskSizeGB” property and re-deploying fails because attached disks can’t be resized. Since this isn’t likely to happen once we’re out of the lab environment, I simply deallocated the VM and resized the disks in the portal.
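For reference, diskSizeGB lives on the managed disk resource itself; a trimmed, illustrative fragment with placeholder names and apiVersion:

{
  "type": "Microsoft.Compute/disks",
  "apiVersion": "2020-12-01",
  "name": "datadisk01",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Premium_LRS" },
  "properties": {
    "creationData": { "createOption": "Empty" },
    "diskSizeGB": 128
  }
}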

Azure & TFS Pipelines

Azure Pipelines, part of the Azure DevOps Services/Server offering, provides the capability to automate building, testing, and deploying applications or resources. Team Foundation Server (TFS), the predecessor to Azure DevOps, also offers pipeline functionality. These projects use both solutions. Since there is no US Government instance of Azure DevOps Services, we use TFS 2017 deployed to a virtual machine in the client’s environments, but we use Azure DevOps Services for our lab environment since it’s less expensive than standing up a VM. While the products are very similar, there are some differences between the two systems.

First, TFS does not support YAML pipelines. YAML lets the pipeline configuration live in source control as part of the repository, making it more integrated with the Git flow (versioning, branching, etc.). Also, YAML is just text, and editing text is much quicker than clicking through tasks to change textbox values.

Another difference is the ability to change release variables at queue time. Azure DevOps release variables can be flagged as “Settable at release time.” This is very useful, as we typically have at least two, if not three, instances of an application running in a lab environment. A variable for the environment can be added and set at release time, making a single pipeline usable for all the environments instead of having to edit the pipeline to change the value or create multiple pipelines that do essentially the same thing.

Create a New Release

There were some positive lessons learned while working with both services. Pipelines could be exported and imported between the two with only minor modifications to settings. Since we were using the classic pipeline designer, creating a pipeline from scratch takes far more mouse clicks, so being able to export a pipeline to a JSON file and import it elsewhere was a big help. Once imported into another environment, there were usually one or two changes to make because the JSON schema references other resources, such as Task Groups, by ID instead of by name. Not only did this save time, but it also cut down on human error when configuring the tasks.

Task Groups provide a way to group a set of tasks for reuse across multiple pipelines, and they can be exported, too. In some projects, we had multiple pipelines that were essentially the same process but with different settings on the tasks. For example, one client had eight web applications that deploy to a single Azure App Service: one at the root, while the others live in their own virtual directories. Each also had its own development cycle, so it wouldn’t make sense to deploy all eight every time one needed updates. We created a build and release pipeline for each application. Creating two Task Groups, one for build and one for release, allowed us to add a reference to the appropriate group in each pipeline and just change the parameters passed to it.

New Pipeline

Conclusion

Automating resource provisioning and application deployment saves time and creates a reliable, reusable process compared to a manual alternative. ARM templates provide deep, complex customization, in some cases even more than Azure PowerShell or the Azure CLI. Pipelines then take those templates and consistently provision those resources across environments. It would have made life much easier in several of my past roles. While there were some “gotchas” with the various technologies, Azure has clearly been developed with automation as a top priority.

Over the last couple of years, I’ve moved from serious SharePoint on-premises development to migrating web applications to Azure. My exposure to Azure prior to the first application was standing up SharePoint development virtual machines that pretty much just used the base settings. Trial by fire tends to be the way I learn things best, which is good, because not only did I have to learn about Azure resources, I also had to refamiliarize myself with web development practices I hadn’t worked with since .NET 2.0 or earlier.

One of my first tasks, and the focus of this post, was to migrate SQL Server databases to Azure SQL Database (Platform as a Service) instances. The databases generally migrated well. Schemas and the majority of the stored procedures were compatible with Azure SQL. Microsoft provides a tool called the Data Migration Assistant, which not only analyzes the databases but can also migrate them. The analysis returns any suggested or required changes. Several types of issues can arise, including deprecated features, incompatible features, and syntax blockers.

Deprecated Field Types

Three field types will be removed in future versions: ntext, text, and image. This won’t hold up a migration from SQL Server to Azure, but I did change the fields to future-proof the databases. The suggested replacements are nvarchar(max), varchar(max), and varbinary(max), respectively.
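As a simple sketch (the table and column names are made up), each change is a one-line ALTER per column:

-- ntext -> nvarchar(max), text -> varchar(max), image -> varbinary(max)
ALTER TABLE dbo.Documents ALTER COLUMN Body nvarchar(max);
ALTER TABLE dbo.Documents ALTER COLUMN Notes varchar(max);
ALTER TABLE dbo.Documents ALTER COLUMN Thumbnail varbinary(max);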

Azure SQL Database

T-SQL Concerns

Now let’s get into some of the Azure SQL Database limitations. I ran into several issues in scripts and stored procedures; some had simple solutions, while others required redesign.

  • ISSUE: Scripts can’t connect to multiple databases using the USE statement.
  • SOLUTION: The scripts that used the USE statement didn’t actually need it, as each one only worked against a single database.
  • ISSUE: Features such as sp_send_dbmail are not supported. Some stored procedures sent notification emails directly from the database rather than from the application.
  • SOLUTION: This required an involved redesign (sketched below). I created a table that logs the email and a scheduled Azure WebJob that sends the email and flags the record as sent. The reference to sp_send_dbmail was replaced with an INSERT INTO the Email table. If the email needed to be sent near-instantly, I could have used an Azure Queue and had the WebJob listen to the queue and send the email when a new message arrives.
  • ISSUE: Cross-database queries are not supported. A couple of the apps had multiple databases, and data from one was needed to add or update records in the other.
  • SOLUTION: This was another major design change. I had to move that logic into the application’s business layer.
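Here is a minimal sketch of that logging-table pattern; the table and column names are hypothetical, not the actual schema:

-- Hypothetical queue table that a scheduled WebJob polls
CREATE TABLE dbo.Email (
    EmailId    int IDENTITY(1,1) PRIMARY KEY,
    Recipients nvarchar(max) NOT NULL,
    Subject    nvarchar(255) NOT NULL,
    Body       nvarchar(max) NOT NULL,
    IsSent     bit NOT NULL DEFAULT 0,
    CreatedUtc datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Replaces the old call to sp_send_dbmail inside the stored procedure
INSERT INTO dbo.Email (Recipients, Subject, Body)
VALUES (N'user@example.com', N'Nightly report', N'...');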

Linked Servers

Some of the applications accessed other databases, including non-SQL Server solutions like Oracle, through Linked Server connections that execute queries against those data sources. So far, these have just been read-only connections.

Linked Servers are not available in Azure SQL Database. To keep the functionality, we had to modify the application’s data layer to pull the records from the external database and either:
A. Pass the records into the SQL Stored Procedure, or
B. Move the logic of the Stored Procedure into the application’s data layer and make the changes in the application

One of the applications had a requirement that its external database stay on-premises. This added an extra layer of complexity because solutions for creating a connection back to that database, like Azure ExpressRoute, were not available or approved for the client. Another team was tasked with implementing a gateway: a web service that the Azure application calls to reach the on-premises data.

SQL Agent Jobs

SQL Agent Jobs allow for out-of-process data manipulation. A couple of the applications used them to send notification emails at night or to synchronize data from another source. While there are several options in Azure for recreating this functionality, such as Azure Functions and Logic Apps, we chose WebJobs. WebJobs can be triggered in several ways, including on a timer. The jobs didn’t require intensive compute resources, so they could share resources with the application in the same Azure App Service, which simplified the deployment story because everything could be packaged and deployed together.
SQL Agent Jobs
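For a triggered WebJob, the schedule can be defined in a settings.job file deployed alongside the job. A minimal sketch; the CRON expression (2:00 AM daily) is just an example:

{
  "schedule": "0 0 2 * * *"
}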
Database modifications tend to be one of the major parts of the migration project. Some of the projects have been simple T-SQL changes while others have needed heavy architectural changes to reproduce functionality in a PaaS environment. Despite the difficulties, there will be major cost savings for some of the clients because they no longer need to maintain an expensive, possibly underutilized, server. Future posts in this series will cover Automation & Deployment, Session State, Caching, Transient Fault Handling, and general Azure lessons learned.

With the abundance of JavaScript libraries and frameworks available today, it’s hard to decide what will work best for a given requirement. Add in the fact that many server-side tools can also accomplish the task, and you could spend hours just narrowing down options to test before deciding on the path you’ll take. This was a recent conundrum for me when I was approached to incorporate child data management in the parent forms on a SharePoint 2010 project. My experience with JavaScript has been limited over my career because I’ve been focused on the SharePoint back end for the majority of that time. My current client needs a better user experience, so I’ve been trying to fill that hole in my skills. This project offered an opportunity to do just that.

While it’s possible to put an ASP.NET GridView control in an UpdatePanel, a client-side approach seemed cleaner and a way to expand my JavaScript skills. I looked at many options like jQuery DataTables, koGrid, and a few others, but they didn’t give me the look, the price (free), and/or the TypeScript definitions I needed to easily take off with implementing them.

I decided to build my own solution since it would be a relatively simple design and it would let me dig into KnockoutJS. In addition, TypeScript would make the solution easier to maintain since it incorporates many ECMAScript 6 features, such as classes and modules. Read More…

SharePoint Server 2013 offers a completely new architecture for Workflow utilizing Workflow Foundation 4.5.  I’ve already covered the high-level changes in a previous post called “What Changed in SharePoint 2013 Workflow? Pretty Much Everything” and discussed how a SharePoint 2010 Workflow project would be designed differently in my post titled “Redesigning a SharePoint 2010 Workflow Project for SharePoint 2013.”  Both of these posts discuss the new reliance on web services for data in SharePoint Workflow.  While it’s obvious that Visual Studio workflows would interact with web services, SharePoint Designer 2013 offers web service communication, as well.  This post will detail the new actions available in SharePoint Designer 2013 for interacting with web services and how to use them. Read More…

Workflow in SharePoint 2013 has undergone quite the architectural change from its SharePoint 2010 ancestor. I documented many of the major changes in a previous blog post, “What Changed in SharePoint 2013 Workflow? Pretty Much Everything.” While SharePoint 2013 is backwards-compatible with SharePoint 2010 workflows, you may decide that you need the benefits of the new design. The purpose of this post is to illustrate the new considerations you’ll need to keep in mind when targeting SharePoint 2013 workflows. The SharePoint 2010 project we’ll use for this example is the one from my very first AIS blog post, “Developing Multi-Tiered Solutions for SharePoint.”

Figure 1 – Sample WebAPI methods for Section Document Merge and Post-Merge Actions.

In our example project, there are actually two workflows: SectionDocumentApprovalState (SDAS) and MasterDocumentApproval (MDA). The MDA checks whether the various SDAS-related sections have been merged and finalized, then notifies specific users to approve the final document. An instance of SDAS is created for each section generated from the Master Document, and it monitors the editing and approval of that specific section. We’ll focus on just the SDAS workflow. In the previous post, I referred to the workflows as being part of the Presentation Layer and the custom code they called into as the Business Layer. Both of these layers will change in a SharePoint 2013 workflow solution.

Read More…

We’ve reached the end of this series. In part one, we discussed the basics of PowerShell. Part two showed some of the ways to interact with SharePoint via PowerShell. Today we’ll look at parts of a script I compiled to build out a SharePoint 2013 development virtual machine.

Environment and Build Notes

I want to start off with some notes about the assumptions I made and the configuration I used. First, this VM is running in Hyper-V on Windows 8 and uses Windows Server 2012, which was installed through the GUI. (I’ll try to figure out PowerShell remoting and Hyper-V at a later date, but that wasn’t in the cards for this post.) Second, I’ve configured two virtual networks, one internal with a static IP and one external with a dynamic IP. I configured those through the GUI as well. However, almost everything else has been built using PowerShell. While we’ll only highlight some of the script in this post, you can find the full script at my CodePlex project: Useful PowerShell Cmdlets.

Read More…

The first post in this series covered the basics of PowerShell, including variables, loops, and decisions. It also introduced a few scripts I’ve used over the last several weeks. In this post, we’ll discuss how to use PowerShell in a SharePoint farm, some of the more useful capabilities (especially for developers), and a few more scripts that I’ve written to bring the topics covered together.

Integrating PowerShell With SharePoint

PowerShell has been natively supported in SharePoint since SharePoint Foundation 2010 and SharePoint Server 2010. When SharePoint is installed, in addition to the Product Configuration Wizard and Central Administration shortcuts, a shortcut for the SharePoint Management Shell is available. This application is a PowerShell console with a blue background and the SharePoint Snap-in loaded. The PowerShell Snap-in is a PowerShell version 1 object that, when loaded, makes additional functions (or cmdlets) available to call in the current PowerShell session. To do this, simply execute the following:

Add-PSSnapin "Microsoft.SharePoint.PowerShell"

This will allow you to access the SharePoint cmdlets in any PowerShell session. It’s also important to run the PowerShell or SharePoint Management Shell console as administrator, since the logged-in user may only have limited access to the SharePoint farm.
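A common pattern, and only a sketch rather than anything SharePoint requires, is to load the snap-in only when it isn’t already present, so the same script runs in any console:

# Load the SharePoint snap-in only if it hasn't been loaded in this session
if ($null -eq (Get-PSSnapin -Name "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue)) {
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}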

Read More…

When I apply for speaking engagements, I usually submit at least one session that will require me to learn a new language or technology.  I did this with the upcoming SharePoint Saturday Austin with a session titled “PowerShell for Developers: IT Pros Need to Share.”  This session is the catalyst for this three-part blog series, which will be published over the next few Fridays. This first post is intended to get everyone up to speed with the basics of PowerShell.

What is PowerShell?

PowerShell is a command-line language built on the Microsoft .NET Framework; that’s a ridiculously simplified explanation, though. PowerShell is a task automation tool that takes the familiar command-line experience and magnifies its power exponentially through the use of cmdlets. Cmdlets are verb-noun commands that perform computer and application management tasks, which can be as simple as restarting the computer or changing the time zone.

PowerShell is a sophisticated development language that contains many of the constructs found in other modern languages. Variables, functions, looping, and decision statements are all present, among other important features. We will discuss these features and how they work in this post. Read More…
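As a tiny, purely illustrative taste of those constructs before we dig in:

# A variable, a loop, and a decision statement in a few lines
$numbers = 1..5
foreach ($number in $numbers) {
    if ($number % 2 -eq 0) {
        Write-Output "$number is even"
    } else {
        Write-Output "$number is odd"
    }
}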