Following up on my last post on Azure Web App for Containers, in part two we’ll go through the storage options available with Azure Web App for Containers, along with the scenarios where each fits best.

As of writing this post, there are 3 storage options:

  1. Stateless
  2. Storage on App Service Plan
  3. Storage using a Storage Account File Share

Stateless

As you all know, containers without any volume mounts are completely stateless, i.e., the container will not persist data once it is shut down. When you create an instance of Azure Web App for Containers, this is the default option. Reboots on the Azure App Service platform can happen from time to time for maintenance. The only files persisted across reboots in this mode are the logs, located under the /home/LogFiles folder.

This option is best suited to APIs that don’t need to store any data on the server itself. A typical use case would be a 3-tier application where each layer resides as a separate resource in Azure.

Storage on App Service Plan

This option allows you to store data on the App Service Plan. To enable it, create the App Setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=true. In this mode, the /home directory is persisted across reboots: Azure mounts storage behind the scenes at this path and persists it. Because Azure maintains this storage, it also ensures that it stays performant. From what I have seen, if there is a performance or availability issue with the storage, Azure will try to switch to the secondary copy, and while that switch is in progress the storage becomes read-only.
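
If you would rather script this than click through the portal, below is a minimal sketch that sets the App Setting with the Azure CLI driven from Python. The resource group and app names are placeholders, and it assumes the az CLI is installed and you are already logged in:

```python
import subprocess

# Hypothetical resource names used purely for illustration.
RESOURCE_GROUP = "my-resource-group"
WEBAPP_NAME = "my-container-webapp"

# Enable persistence of the /home directory across restarts.
subprocess.run(
    [
        "az", "webapp", "config", "appsettings", "set",
        "--resource-group", RESOURCE_GROUP,
        "--name", WEBAPP_NAME,
        "--settings", "WEBSITES_ENABLE_APP_SERVICE_STORAGE=true",
    ],
    check=True,
)
```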

When site-level backups are enabled on such an instance, the contents of the /home directory are also backed up. The downside is that this storage is visible only to your Web App and is not accessible to the outside world.

This option is best for scenarios where storage is required on the server with minimal maintenance overhead, such as hosting Drupal or WordPress.

Storage using a Storage Account File Share

As of writing this, this option is still in Preview, but it allows you to mount a Storage Account to the Web App while external services can still connect to that same Storage Account. This can be set up from the “Path Mappings” section of the App Service’s Configuration:

Screenshot: the Path Mappings section with a Storage Account mount configured

This type of mapping supports both Blob containers and Azure Storage File Shares. Although the functionality provided by both is pretty much the same, the technologies behind them are different. For Blob containers, Blobfuse handles the translation and mapping of file paths to remote blob paths, whereas Azure Storage File Shares are mounted over the SMB protocol using CIFS mounts on Linux.

I would suggest using Azure Storage File Shares rather than Blob containers for these mounts, as Blobfuse is not POSIX (Portable Operating System Interface) compliant; the File Share mount gives better performance and stability.
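
For reference, the same mount can be scripted. Below is a minimal sketch that adds an Azure Files mount through the Azure CLI from Python; every name here (resource group, app, storage account, share, mount path, and the environment variable holding the access key) is a placeholder, not something from the original setup:

```python
import os
import subprocess

# Hypothetical names; replace with your own resources.
RESOURCE_GROUP = "my-resource-group"
WEBAPP_NAME = "my-container-webapp"
STORAGE_ACCOUNT = "mystorageaccount"
FILE_SHARE = "webapp-content"
ACCESS_KEY = os.environ["STORAGE_ACCOUNT_KEY"]  # keep the key out of source control

# Mount the Azure Files share into the container at /var/www/html/files.
subprocess.run(
    [
        "az", "webapp", "config", "storage-account", "add",
        "--resource-group", RESOURCE_GROUP,
        "--name", WEBAPP_NAME,
        "--custom-id", "drupal-files",
        "--storage-type", "AzureFiles",
        "--account-name", STORAGE_ACCOUNT,
        "--share-name", FILE_SHARE,
        "--access-key", ACCESS_KEY,
        "--mount-path", "/var/www/html/files",
    ],
    check=True,
)
```

The --storage-type parameter also accepts AzureBlob, but as noted above the File Share mount is the safer choice.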

While using this option, do keep in mind that site backups do not include the mounts; you would have to manage backups of the mounts on your own.
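
If you went with an Azure Files mount, one simple way to handle those backups yourself is to take share snapshots on a schedule. A minimal sketch, again with placeholder names and assuming the az CLI is available:

```python
import os
import subprocess

# Hypothetical names; replace with your own resources.
STORAGE_ACCOUNT = "mystorageaccount"
FILE_SHARE = "webapp-content"
ACCESS_KEY = os.environ["STORAGE_ACCOUNT_KEY"]

# Create a point-in-time snapshot of the mounted file share.
subprocess.run(
    [
        "az", "storage", "share", "snapshot",
        "--name", FILE_SHARE,
        "--account-name", STORAGE_ACCOUNT,
        "--account-key", ACCESS_KEY,
    ],
    check=True,
)
```

You could run this from a cron job, Azure Automation, or any scheduler you already have in place.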

This option is best suited for scenarios where you need more control over the storage and the ability to connect other services or devices to the same storage as the Web App for Containers.

Stay tuned for the next part!

Azure Web Apps Background

I’ve been working with Azure Web Apps for a long time. Before the launch of Azure Web App for Containers (or even Azure Web App on Linux), these web apps ran on Windows Virtual Machines managed by Microsoft. This meant that any workload running behind IIS (e.g., ASP.NET) would run without hiccups, but that was not the case for workloads that preferred Linux over Windows (e.g., Drupal).

Furthermore, the Azure Web Apps that ran on Windows were not customizable. This meant that if your website required a custom tool to work properly, chances were it was not going to work on an Azure Web App, and you’d need to deploy a full-blown IaaS Virtual Machine. There was also a strict lockdown on tools and language runtime versions that you couldn’t change, so if you wanted the latest bleeding-edge runtime, you weren’t going to get it.

Azure Web Apps for Containers: Drum Roll

Last year, Microsoft released the Azure Web App for Containers (Linux App Service Plan) offering to the public. This meant we could build a custom Docker image containing all the required binaries and files and then deploy it on the PaaS offering. After working with the product for some time, I was sold.

The product was excellent, and it was clear that it had potential. Some of the benefits:

  • Ability to use a custom Docker image to run the Web App
  • Zero headaches from managing Docker containers
  • The benefits of Azure Web App on Windows like Backups, Kudu, Deployment Slots, Autoscaling (Scale up & Scale out), etc.

Suddenly, running workloads that preferred Linux or required custom binaries became extremely easy.
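
To make that concrete, here is a minimal sketch of standing up a Web App for Containers with a custom image using the Azure CLI from Python. All of the names, including the registry and image, are placeholders, and the flag names reflect the Azure CLI as it was around the time of writing:

```python
import subprocess

# Hypothetical names used only for illustration.
RESOURCE_GROUP = "my-resource-group"
PLAN_NAME = "my-linux-plan"
WEBAPP_NAME = "my-container-webapp"
IMAGE = "myregistry.azurecr.io/my-app:latest"

def az(*args: str) -> None:
    """Run an az CLI command and fail loudly if it errors."""
    subprocess.run(["az", *args], check=True)

# A Linux App Service Plan is required for Web App for Containers.
az("appservice", "plan", "create",
   "--resource-group", RESOURCE_GROUP,
   "--name", PLAN_NAME,
   "--is-linux",
   "--sku", "B1")

# Create the Web App and point it at the custom Docker image.
az("webapp", "create",
   "--resource-group", RESOURCE_GROUP,
   "--plan", PLAN_NAME,
   "--name", WEBAPP_NAME,
   "--deployment-container-image-name", IMAGE)
```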

The Architecture

Compared to Azure Web App on Windows, the architecture implemented in Azure Web App for Containers is different.

Diagram: Azure Web App on Windows architecture

Each of the above Web Apps is strictly locked down, with minimal possibility of modification. Furthermore, the backend storage is based on network file shares, which means that even if you don’t need any storage (for example, when your app simply reads data from the database and displays it), the app can still perform slowly.

Diagram: Azure Web App for Containers architecture

The major difference is that the Kudu/SCM site runs in a separate container from the actual web app. Both containers are connected to each other over a private network. In this case, each App Service Plan is deployed on a separate Virtual Machine, and all the plumbing is managed by Microsoft. The benefits of this approach are:

  • Better isolation. If Kudu is experiencing issues, there is less chance of it taking down your actual website.
  • Ability to customize the actual web app container running the website.
  • Better resource utilization

Stay tuned for the next part, in which I’ll discuss the various storage options available in Azure Web App for Containers and their trade-offs.

Happy holidays!

Last week, Mark and I were patching a fairly large production SharePoint 2016 farm. The farm consisted of the following servers:

  • Two Application Servers
  • Two Search Servers
  • Two Web Servers
  • Two Cache Servers
  • Two Microsoft SQL Servers

We started the usual process: taking backups, installing the patch on all servers, and then running the SharePoint Configuration Wizard on the primary application server hosting SharePoint Central Admin.

In a couple of minutes, we received the following error:

An exception of type Microsoft.SharePoint.PostSetupConfiguration.PostSetupConfigurationTaskException was thrown. Additional exception information:

Upgrade [SearchAdminDatabase Name=SEARCH_DB] failed. (EventID:an59t)

Exception: The database principal owns a database role and cannot be dropped. The proposed new database owner is already a user or aliased in the database. (EventID:an59t)

Upgrade Timer job is exiting due to exception: System.Data.SqlClient.SqlException (0x80131904): The database principal owns a database role and cannot be dropped. The proposed new database owner is already a user or aliased in the database.

Was This a Known Issue?

We started looking around to see if this was a known issue and whether there were any possible solutions. We came across Microsoft Support’s posting on the July 10 SharePoint security patch and, lo and behold, it had been updated with this exact known issue.

A few months ago, we worked with a client to migrate websites from on-premises to Azure. As part of this migration, Jenkins was set up to handle Continuous Integration/Continuous Delivery. During the initial setup, we verified Jenkins email notifications, and they were working fine with the client’s internal mail server.

One fine day, an email came in from the client stating that emails from Jenkins were not arriving at all. Well, that’s strange…

The Diagnosis

I like troubleshooting quite a lot, and my rule of thumb is to first ask: what changed? While the initial diagnosis was going on, the client mentioned that their network team had upgraded the mail server and that only TLSv1.2 and TLSv1.1 were now supported. Earlier, Jenkins sent email using SSL (yep, you guessed it right, using the -Dmail.smtp.starttls.enable=true parameter).

The trend of making applications serverless is on fire! And it’s not just hype: going serverless makes the administration, monitoring, and debugging of code so much easier that developers all around are flocking to it in one way or another.

If you don’t know what serverless means, I would suggest you start here. In a nutshell, it abstracts away the underlying platform and lets developers just run a piece of code without worrying about servers and their maintenance, while still getting the usual benefits of the cloud like auto-scale and pay-per-use.

The Background

At AIS IDC, we have a separate timesheet system to keep track of records internally. This system includes features like Leave Requests, Time Logs, Work-From-Home Requests, etc. We deployed it a couple of years ago, and it consists of an Azure SQL instance, an Azure Web App on Windows, and an Azure Classic Cloud Service. The Classic Cloud Service ran a separate application written in C# to keep track of overtime hours, update leaves, figure out timesheet defaulters, send out email notifications, and so on. Since this was a Classic Cloud Service, we had little visibility into what was going on while it was running.

Furthermore, just to run these basic tasks every five minutes, the Classic Cloud Service was costing us $60 per month on average.
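
To give a flavor of the serverless alternative, here is a purely illustrative sketch of the same kind of every-five-minutes job written as a timer-triggered Azure Function using the Python programming model; the real replacement was written in C#, and the function name and log messages below are made up:

```python
import logging

import azure.functions as func

app = func.FunctionApp()

# NCRONTAB schedule: run at second 0 of every 5th minute.
@app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer",
                   run_on_startup=False, use_monitor=False)
def timesheet_housekeeping(timer: func.TimerRequest) -> None:
    """Hypothetical stand-in for the overtime/leave/notification tasks."""
    if timer.past_due:
        logging.warning("Timer is running late.")
    logging.info("Running scheduled timesheet housekeeping tasks...")
```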

We’ve all heard the phrase, “Anything that can go wrong, will go wrong”…otherwise known as Murphy’s Law. I couldn’t find a better phrase to describe a recent problem we encountered at AIS Managed Services.

I’ve been a part of the Managed Services practice at AIS for quite some time now. As part of this practice, we provide proactive and reactive support to various clients. Internally, we support the AIS Internet site and the AIS Blog, as well as multiple enterprise-level commercial and federal customers. This blog post covers the challenges of availability monitoring in the context of a downtime incident right here on the AIS Blog.

Through our enterprise collaboration and productivity services, we help many organizations create new experiences with intranets and portals to increase productivity and streamline collaboration while cutting operating costs. We have Drupal lift-and-shift experience, migrating existing portals to the cloud, as well as the ability to architect a custom solution from the ground up if needed.

The Background

Our client, a large financial investment firm, provides financial services and handles multi-billion-dollar assets. The organization had multiple websites running Drupal with a third-party vendor. The vendor provided a custom setup that was riddled with issues such as slow response times, excessive downtime, and high operating costs. As a Microsoft Gold Certified Partner with such experience, AIS was engaged to perform a Drupal lift and shift operation for the websites along with a CI/CD setup across multiple environments.

Requirement Gathering

As soon as AIS started the requirement gathering, it was clear that we were going to migrate the websites and databases to a PaaS-based model. Microsoft Azure was deemed the best choice given the plethora of options available for websites. The customer also already had an Enterprise Agreement with Microsoft Azure, making it the perfect fit.

The client had several customizations that required the Apache web server, along with Drush and PHP 7.1, and they needed room for future modifications. Anyone working with Drupal will be familiar with Drush, often called the “Swiss Army Knife” for Drupal.

Selecting the Perfect Azure Offerings

Initially, we came up with the following options:

  1. Create Virtual Machines Hosting Web Servers & Databases
  2. Modify the modules requiring Apache and then host the websites on Windows/IIS based Web Apps
  3. Azure Web App for Containers

Option 1 meant setting up the entire infrastructure from scratch. That in itself wasn’t an issue, but the maintenance overhead and costs afterward made us look for alternatives.

Option 2 required quite a bit of rework, and our previous experience taught us that Drush has hiccups when running on Windows-based hosts.

Option 3 was the best choice because it allowed us to build a custom Docker image with Apache, PHP 7.1, and Drush, and it left room for future modifications. It was the perfect balance of customization, maintenance overhead, and cost. We also got added benefits like:

  1. Automated backups handled by Azure
  2. Continuous Deployment handled by the magic of Kudu
  3. Detailed metrics like response time, number of requests, etc. (Who doesn’t love detailed metrics?)
  4. Auto-scaling and more!

The Immediate Benefits of the Drupal Lift and Shift

This is the section that should interest you readers the most. After moving the sites over to Azure, we immediately noticed a huge drop in…

  1. Time to deploy to various environments
  2. Response time, which came down to ~350 milliseconds from almost 1 second and above.
  3. The site was much faster overall, and the best part was that the client’s earlier infrastructure had four cores while this new setup had only two cores and 3.5GB of RAM.

Over a period of almost 20 days, the container served almost 4.6 million requests without breaking a sweat, as you can see from the graph of CPU/RAM usage below:

Figure 1: CPU/RAM usage. Mind you, this App Service Plan runs another website!

Check Out Our Successes

Read the full story, Investment services firm migrates websites to cloud to save money, improve reliability, or check out our library of featured success stories! If you’re interested in what AIS can do for you, contact us today and tell us about the challenges you’re facing. There’s an excellent chance we can help.

As part of AIS Managed Services, we provide proactive management and reactive support of infrastructure and applications at a predictable monthly cost. Recently, during a routine infrastructure health check, we noticed that Azure was failing to take backups for a particular virtual machine. Why?

The Environment

The client is a medium-sized outdoor equipment vendor. For this enterprise customer, we have configured Azure Recovery Services to take a daily backup of all the virtual machines in the production environment. The environment is set up with four domain controllers. Two of them are hosted in Azure while the other two are hosted on-premises. All domain controllers are running Windows Server 2008 R2. Both domain controllers hosted in Azure have 120GB System Drives attached to them, with only Active Directory Domain Services and DNS Server roles present on the server.