DevOps implements a Continuous Integration/Continuous Delivery (CI/CD) process. When multiple team members work in the same codebase, anyone’s update could break the integrated code. Continuous Integration addresses this by triggering a build pipeline whenever a code update is pushed. The build pipeline fails if the newly updated code is incompatible with the existing codebase or introduces conflicts. Code might work well in a single developer’s environment yet fail in a build pipeline, where all configurations and dependencies are expected to be in place. Continuous Delivery speeds up the deployment process. The release pipeline deploys the same codebase to multiple environments based on configuration, so the code can be deployed to all environments without many manual changes, as the sketch below illustrates.
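
As a toy illustration of that configuration-driven promotion (the environment names and settings below are hypothetical), the same build artifact moves from environment to environment while only the configuration changes:

```python
# Minimal sketch: one build artifact, per-environment configuration only.
# Environment names and settings are hypothetical.
CONFIGS = {
    "dev":  {"replicas": 1, "db_host": "dev-db.internal"},
    "test": {"replicas": 2, "db_host": "test-db.internal"},
    "prod": {"replicas": 4, "db_host": "prod-db.internal"},
}

def deploy(artifact: str, environment: str) -> None:
    config = CONFIGS[environment]  # only the configuration varies per environment
    print(f"Deploying {artifact} to {environment} with {config}")

# The release pipeline promotes the exact same artifact through each environment.
for env in ("dev", "test", "prod"):
    deploy("myapp-1.4.2.zip", env)
```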

An approval process enables peer code reviews and identifies potential issues and security flaws ahead of time. Today’s production applications are highly distributed and complex. Whether a solution is on-premises or cloud-based, a missing dependency or configuration can introduce significant risk into deployments. DevOps maintains a single codebase for repeatable deployment to many environments with just configuration changes. It avoids manually building deployment packages and handing them over to an operations team that has no insight into what is being deployed. If an error occurs during or after deployment, the development team has to jump in at that point, which is time-consuming, costs production time, and ends up with some unhappy customers!
Picture credit: DoD DevOps

Popular DevOps Tools

Follow here to learn more about DevOps practices from other AIS bloggers!

Why not just “DevOps”?

DevOps is fundamental to any organization’s build and deployment process, with seamless CI/CD integration. So what is ‘DevSecOps,’ and why is ‘Sec’ added between Dev and Ops? The ‘Sec’ in DevSecOps is ‘Security.‘ Though it sits in the middle, security implementation should start in Development and continue through Operations. Development and deployment packages pull in many dependencies from both internal and external sources, and these can introduce vulnerabilities that cause severe issues in production if not identified early in the build pipeline. Code scans help identify possible weaknesses in code implementations. Cybersecurity vulnerabilities, however, require specific tools at different stages of the pipeline to be identified as early as possible. Adding security scanning early in the pipeline and automating it are essential for DevSecOps.

DevSecOps Software Lifecycle

Picture Credit: DoD DevSecOps

DevSecOps is not a tool or a pattern but a practice, and it can be enhanced by adding appropriate tools. It is a process of securing the build and deployment by using several security tools and shifting security to the left. These security tools help identify vulnerabilities that the code could have introduced, recommend possible solutions to fix those issues, and in some instances mitigate some of those issues automatically. The idea is to ‘fail fast’ and identify vulnerabilities early in the build pipeline. As more applications move to the cloud, it is imperative to use Cloud Native Computing Foundation (CNCF) certified tools and implement security benchmarks such as the CIS benchmarks. DevSecOps avoids manual changes once the code is in the pipeline, built, and deployed. The codebase is the single source of truth and should not be manipulated at any point.

Adding scanning tools for security and vulnerabilities helps mitigate flaws introduced in code and operations, and many open-source tools provide these functions. Enabling logging, continuous monitoring, alerting, and self-healing fixes for faster remediation is key to ongoing business operations. Containerizing with hardened container images from DoD Iron Bank helps protect application container images, and hardened images can be kept up to date from reliable providers. Containers provide cloud-agnostic solutions with no vendor lock-in.

All the security tools in the DevSecOps pipeline must be deployed and running in the customer environment for pipeline scanning. The pipeline code sends a request to each security tool via an API call or by triggering command-line interface (CLI) commands. The tools respond with their findings and statistics and provide pass/fail criteria. If a tool identifies any vulnerability in the scan, the pipeline fails.
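
A minimal sketch of that handshake, assuming a hypothetical scanner endpoint and response shape (each real tool has its own API, so treat this only as the pattern):

```python
import sys

import requests

SCANNER_URL = "https://scanner.example.com/api/scan"  # hypothetical scanner endpoint

def run_scan(artifact_url: str) -> None:
    # Ask the security tool to scan the artifact and wait for its findings.
    response = requests.post(SCANNER_URL, json={"artifact": artifact_url}, timeout=300)
    response.raise_for_status()
    report = response.json()  # assumed shape: {"findings": [...], "passed": bool}

    print(f"{len(report['findings'])} finding(s) reported")
    if not report["passed"]:
        sys.exit(1)  # a non-zero exit code fails the pipeline stage

run_scan("https://repo.example.com/myapp-1.4.2.tar.gz")
```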

Deploying the security tools as SaaS services requires permission from the security team; not all of them are approved to run in highly secured cloud environments, and each tool needs an Authority to Operate (ATO) before it can be deployed and configured. Using hardened container images for these tools is the safer, more secure approach to deploying them in the cloud. Because the containers are already hardened (scanned, secured, and ready to go with all dependencies), they support continuous ATO. Hardened container images can be downloaded from DoD Iron Bank, and almost all tool providers publish container images. Many providers offer both a software download and a container image; a software download requires additional work to ensure all dependencies are appropriately configured or already exist, whereas a hardened container image ships with its dependencies and is pre-scanned. The tools can be deployed into Kubernetes in your cloud environment to provide scalable functionality.

Below is a sample DevSecOps pipeline implementation with recommended security tools, as depicted in the diagram that follows (a simplified sketch of the stage gating appears after the list):

  • A source code pull request is approved by reviewers
  • The build pipeline kicks off, and a code scan runs after a successful initial build
    • If any code vulnerabilities are identified, the pipeline fails
  • The build pipeline continues with DAST and penetration (pen) testing
    • If any vulnerabilities are identified, the pipeline fails
  • Build artifacts are added to a private repository, either as packages or container images
    • A repository scan is performed using repository scanning tools, and vulnerabilities are reported
  • The release pipeline picks up artifacts from the private repositories and deploys them to Azure (or the cloud of your choice)
    • Kubernetes is highly recommended for orchestration, but the deployment target can be a service of your choice, such as Function App, App Service, Azure Container Instances, etc.
  • Security has been applied throughout the pipeline process and will continue once the application is deployed. Both native security tools such as Azure Monitor, Azure Security Center, and Azure Policies, and third-party tools such as Twistlock and Qualys, can be used to monitor the health of your production environment.

DevSecOps Diagram
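
Here is a simplified sketch of that fail-fast gating, where each stage must succeed before the next one runs; the stage commands are placeholders standing in for your real build, scan, and deploy steps:

```python
import subprocess
import sys

# Placeholder commands standing in for the real build, SAST, DAST/pen test,
# and deploy steps of the pipeline.
STAGES = [
    ("build", ["dotnet", "build"]),
    ("code scan", ["run-sast-scan"]),          # hypothetical wrapper script
    ("dast and pen test", ["run-dast-scan"]),  # hypothetical wrapper script
    ("deploy", ["deploy-release"]),            # hypothetical wrapper script
]

for name, command in STAGES:
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"Stage '{name}' failed; stopping the pipeline.")
        sys.exit(result.returncode)  # fail fast: later stages never run
    print(f"Stage '{name}' passed.")
```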

Let’s look at a few of the recommended tools to support the security validations in the DevSecOps process.

Build tools/CLI

A developer can write code in a favorite editor such as Visual Studio or VS Code and run it to test the application. The code editor also generates debug/release packages, producing binaries with the build tool that comes with the editor. The application works seamlessly in the developer environment because the dependencies and correct configurations exist there. For the build to work in the pipeline, the same build tool must be available to build the code. The build tool varies with the code language, and it must be available in the pipeline; a sketch of selecting the right tool follows the list below.

Some of the build tools are:

  • DotNet Build
  • MSBuild
  • Maven
  • Gradle
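
As a sketch of that idea, a pipeline step might pick the build command from the project's language; all of the mapped tools are assumed to be installed on the build agent:

```python
import subprocess

# Map each codebase language to its build tool; every tool listed here
# must be installed on the pipeline's build agent.
BUILD_COMMANDS = {
    "csharp": ["dotnet", "build", "--configuration", "Release"],
    "java-maven": ["mvn", "package"],
    "java-gradle": ["gradle", "build"],
}

def build(language: str) -> None:
    command = BUILD_COMMANDS[language]
    subprocess.run(command, check=True)  # raises on failure, failing the pipeline

build("csharp")
```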

Static Application Security Testing (SAST)

A code scan is one of the essential steps in securing the codebase. Automated testing helps identify failures, but these dedicated code scan tools identify security flaws and vulnerabilities. The application does not need to be running for a code scan, as these tools scan only the codebase and not its dependencies. A sketch of gating the pipeline on one such scan follows the list below.

Some of the Code scanning tools are:

  • SonarQube
  • Fortify
  • Anchore
  • JFrog Xray
  • OpenSCAP
  • HBSS
  • OWASP dependency check
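
As one example of gating a pipeline on a code scan, SonarQube exposes a quality-gate status API that can be queried after the scanner step completes; the server URL, project key, and token below are placeholders, and the response shape should be verified against your server's API version:

```python
import sys

import requests

SONAR_URL = "https://sonarqube.example.com"  # placeholder server URL
PROJECT_KEY = "myapp"                        # placeholder project key
TOKEN = "..."                                # injected from the pipeline's secret store

# Query the quality gate status after the sonar-scanner step has completed.
response = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(TOKEN, ""),  # SonarQube tokens are passed as the basic-auth username
    timeout=60,
)
response.raise_for_status()
status = response.json()["projectStatus"]["status"]  # e.g. "OK" or "ERROR"

if status != "OK":
    print("Quality gate failed; failing the build.")
    sys.exit(1)
```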

Dynamic Application Security Testing (DAST)

DAST scans the application while it is running, or scans a container image hosted in a private repository. Scanning containers before deploying them helps resolve many security vulnerabilities.

A widely used DAST scanning tool is OWASP ZAP, which also appears under penetration testing below.

Penetration (Pen) Testing

Pen testing tools provide a web application scanner to help find security vulnerabilities. Read OWASP’s “Top 10 Web Application Security Risks” to learn more; a sketch of running a baseline scan from the pipeline follows the list below.

Pen testing tools:

  • OWASP ZAP
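
For instance, OWASP ZAP ships a baseline scan that pipeline code can invoke through ZAP's Docker image; the image tag and target URL below are illustrative, and in a DevSecOps pipeline you would substitute your approved, hardened image:

```python
import subprocess
import sys

TARGET = "https://staging.example.com"  # illustrative URL of the running application

# Run ZAP's packaged baseline scan against the running application.
result = subprocess.run([
    "docker", "run", "--rm", "-t",
    "owasp/zap2docker-stable",  # illustrative image; use your hardened image instead
    "zap-baseline.py", "-t", TARGET,
])

# The baseline script exits non-zero when warnings or failures are found,
# which in turn fails this pipeline stage.
sys.exit(result.returncode)
```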

Deploy Code & IaC (Infrastructure as Code)

IaC is paramount in DevOps for avoiding manual work in customer environments, and it helps achieve immutable infrastructure. A sketch of a repeatable template deployment follows the list below.

Popular IaC tools are:

  • Azure ARM Templates
  • Terraform
  • HELM
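
For example, a version-controlled ARM template can be deployed repeatably from the pipeline with the Azure CLI; the resource group, template, and parameter file names below are placeholders:

```python
import subprocess

# Deploy the version-controlled ARM template; re-running the same template
# converges the environment to the same declared state.
subprocess.run([
    "az", "deployment", "group", "create",
    "--resource-group", "rg-myapp-prod",          # placeholder resource group
    "--template-file", "infra/azuredeploy.json",  # template tracked in source control
    "--parameters", "@infra/azuredeploy.parameters.prod.json",
], check=True)
```
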
Private Repositories

In DevSecOps, a private repository is recommended to host the build dependencies, reference container images, container images for tools, and the built packages or application container images. This keeps all the artifacts together in one centralized location, from which the release pipeline can continue with deployments; a sketch of publishing an image follows the list below.
Some of the private repositories are:

  • JFrog
  • Docker Hub
  • Azure Container Registry (ACR)
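
A sketch of publishing a built container image to one such repository (ACR in this case); the registry and image names are placeholders:

```python
import subprocess

REGISTRY = "myregistry"                       # placeholder ACR name
IMAGE = f"{REGISTRY}.azurecr.io/myapp:1.4.2"  # placeholder image tag

# Authenticate to the registry, then tag and push the freshly built image.
subprocess.run(["az", "acr", "login", "--name", REGISTRY], check=True)
subprocess.run(["docker", "tag", "myapp:latest", IMAGE], check=True)
subprocess.run(["docker", "push", IMAGE], check=True)
```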

Private Repository Scanning

As the pipeline requires security scanning, the repositories require scanning as well. These tools scan for vulnerabilities in all packages and container artifacts stored in the repository, and a scan report is sent out to notify the team of any issues.

Some artifact scanning tools are:

  • XRay
  • Sonatype
  • Azure Monitor
  • Azure Security Center

Deploy

Just as we recommend deploying the security tools with container orchestration, the same recommendation applies to the deployed applications. Containers provide strong security, with limited ways for attackers to affect them, and sidecar containers add protection by continually monitoring applications with a built-in container security stack. Applications scale on demand using Kubernetes; tools such as kubectl and Helm packages are used to deploy and manage K8s clusters, and ArgoCD is a declarative tool specifically for Kubernetes deployment in a CI/CD pipeline. A deployment sketch using Helm follows.
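
A hedged sketch of that release step using Helm; the release name, chart path, namespace, and image tag are placeholders:

```python
import subprocess

# Install or upgrade the application release from its Helm chart.
subprocess.run([
    "helm", "upgrade", "--install",
    "myapp",                     # placeholder release name
    "./charts/myapp",            # placeholder chart path from the artifact repository
    "--namespace", "prod",
    "--set", "image.tag=1.4.2",  # pin the exact image the pipeline just scanned
], check=True)
```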

Deployments to Azure could be:

  • Azure function app
  • Azure App Service
  • Azure Container Instance
  • Azure Kubernetes Service (AKS)
  • OpenShift in Azure

Monitoring/Alerting

As applications are deployed and running in a cloud environment, they must be continuously monitored for attacks and for any security vulnerabilities. For containers, these tools act as sidecar containers that regularly protect the main containers from attacks, and some can mitigate issues directly. All of these tools have built-in alerting to notify the operations team for immediate action; a toy sketch of the basic alerting loop follows the list below.

Monitoring/alerting tools:

  • Azure Monitor
  • Azure Security Center
  • Twistlock
  • Qualys
  • Aqua Security
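
These tools do this at production scale, but the core alerting loop is simple; here is a toy sketch with a hypothetical health endpoint and operations webhook:

```python
import time

import requests

HEALTH_URL = "https://myapp.example.com/health"  # hypothetical health endpoint
ALERT_WEBHOOK = "https://hooks.example.com/ops"  # hypothetical ops-team webhook

while True:
    try:
        healthy = requests.get(HEALTH_URL, timeout=10).status_code == 200
    except requests.RequestException:
        healthy = False
    if not healthy:
        # Notify the operations team immediately; real tools add deduplication,
        # escalation, and in some cases automatic remediation.
        requests.post(ALERT_WEBHOOK, json={"alert": f"{HEALTH_URL} is unhealthy"})
    time.sleep(60)  # poll every minute
```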

Now you’re all powered up on DevSecOps! Check back here for the next blog post on container-based deployments and container scanning in the DevSecOps pipeline!

References for continuing your DevSecOps Journey

Agility and Discipline are Both Required as Organizations Adopt the Cloud

At the beginning of cloud adoption, experimentation is common and exploration of applicable technologies is welcomed. Development teams are excited to try the newest service offerings, and proof-of-concept efforts are completed with great interest. However, the thrill of discovery often turns to frustration as configuration complexity increases and cost control challenges mount. Initial proofs of concept often do not consider where costs are coming from or what resources are assigned to different teams.

While cloud providers make it easy to provision new resources, considerable forethought is necessary to come up with disciplined procedures to measure costs and control access. Security and management raise valid concerns regarding proper security controls. Infrastructure as Code (IaC) concepts will allow for exploration and development agility while including the proper management review, thoughtful allocation of resources, and cost control initiatives. In this post, we will introduce IaC concepts and tools that can be used to bring agile interaction to your cloud adoption experience.

What is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) refers to the tools and processes used to track and manage infrastructure resources as code. This code is commonly stored in a version control system, and the tools are most effective when applied to virtual environments. One such example is the Microsoft Azure public cloud, but IaC can be equally effective in private cloud environments. The combination of version control with executable infrastructure definitions provides a complete change history and ensures future repeatability.

Historical Configuration Management

IT administration teams have historically consisted of a handful of individuals tasked with managing a large footprint of physical and virtual machines. With the popularization of virtualization technologies came the need to effectively maintain configuration and security baselines. Rudimentary automation tools such as parallel SSH gave way to more powerful configuration management tools such as Puppet (2005) and Chef (2009). These tools allowed a single IT administrator to establish and apply configuration templates across thousands of machines, and address divergence from configuration when needed.

With the maturation of configuration management tooling, different paradigms began to emerge. Puppet and Chef both operated with a pull model, utilizing a machine agent responsible for retrieving configuration from the central control server. Newer entrants to the Infrastructure as Code space such as Ansible (2012) and Terraform (2014) operated with a push model over established protocols (such as SSH) for remote access. Configuration changes also saw two very different approaches across Infrastructure as Code tools: the imperative model relied on the developer to define specific change operations, while the declarative model favored the definition of the desired end-state. In the declarative model, the IaC tool itself is responsible for identifying and applying changes. Each model comes with unique benefits and challenges, but regardless of the model chosen, the application of IaC tools has become a huge force multiplier for the IT team. A toy contrast between the two models follows.
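
In the sketch below, the imperative version spells out each change operation, while the declarative version only states the desired end-state and leaves the diffing to the tool; the `cloud` client and its methods are hypothetical:

```python
# Imperative: the developer defines each specific change operation, in order.
def imperative_resize(cloud, server_id):
    cloud.stop(server_id)
    cloud.set_size(server_id, "large")
    cloud.start(server_id)

# Declarative: the developer defines only the desired end-state; the tool
# compares it with reality and computes the change operations itself.
DESIRED = {"size": "large", "running": True}

def declarative_apply(cloud, server_id, desired=DESIRED):
    current = cloud.describe(server_id)
    if current != desired:
        cloud.converge(server_id, desired)  # tool-computed diff and fix
```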

DevOps

In the last few years, the term DevOps has emerged out of the convergence of development and operational teams adopting shared tooling. This new engineering specialization brings with it a mindset founded in the security and configuration management needs of the IT administrator while enhancing it with version control and module abstraction common in the software development space. Probably the biggest change brought forth by DevOps was the adoption of container technologies, largely popularized by Docker. Containers greatly hastened the adoption of immutable infrastructure. The underlying machines became less important themselves, instead, acting as a vehicle to deploy container workloads.

Immutability in Infrastructure

Infrastructure immutability refers to the static nature of the machines themselves and is most easily achieved in virtualized environments such as Microsoft Azure. The value of immutability is that infrastructure configuration can be reasoned about based on a single source of truth: the code that resides in the version control system. Configuration drift in long-lived infrastructure systems is a monumental challenge that Puppet and Chef have worked to solve, but the problem can be eliminated entirely by treating our infrastructure as replaceable. When changes are required, we simply replace old infrastructure with new.

Adopting immutability in infrastructure configuration is not an overnight process. There are considerable challenges, especially when dealing with stateful systems such as databases or file storage systems. While these challenges are real, the advantages are substantial:

  • Deterministic infrastructure
  • Version controlled source of truth
  • Infrastructure automation via CI/CD systems
  • Audit change deltas prior to execution
  • Peer Review based on code change and change deltas
  • Gated approvals from security authorities

Adopting IaC Within Your Organization

Adopting Infrastructure as Code in your organization must be approached carefully. You may be met with resistance from security and IT staff, and successful adoption requires your stakeholders to understand the benefits of the IaC method.

Greenfield cloud projects are one of the best ways to introduce these concepts without having to tackle the challenging task of codifying existing systems. This approach has the advantage of a reduced scope while learning the tools, and it allows exploration without danger of impacting operational systems. If a greenfield project is not possible, a short-run proof-of-concept effort can serve the same purpose. In the early stages of a project, attention to the following guidelines will help set you up for future success:

  • Security first
  • Avoid premature optimization
  • Favor managed (PaaS/SaaS) vs unmanaged (IaaS)
  • Tag resources aggressively

Security First

Security first is one of the most challenging things to get right in any project, but cloud efforts are especially challenging because there is a much larger exposed surface area. Consuming data from object storage APIs, for example, could expose sensitive data to the world if the storage is not properly locked down; every year there are known security breaches where misconfiguration caused huge leaks of intellectual property and personally identifiable information. Infrastructure as Code is uniquely suited to minimize the pain associated with good security practices. Least privilege is a key tenet of security, and Infrastructure as Code can be used to define access policies that explicitly grant access only to the resources that need it. This is made simple because resource identifiers can be dynamically injected into policies, as the Infrastructure as Code is responsible for managing both; a sketch follows.
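
A sketch of that dynamic injection; the helper functions, identifiers, and policy shape below are illustrative rather than any provider's actual schema:

```python
def create_storage_account(name: str) -> str:
    # Hypothetical stand-in for the IaC tool provisioning the resource.
    return f"/subscriptions/000/storageAccounts/{name}"

def apply_policy(policy: dict) -> None:
    # Hypothetical stand-in for submitting the policy to the cloud provider.
    print("applying", policy)

# Because the IaC code creates the resource, its identifier can be injected
# directly into the access policy, granting access to exactly that resource.
account_id = create_storage_account("myapp-data")
apply_policy({
    "principal": "myapp-service-identity",
    "actions": ["read", "write"],
    "scope": account_id,  # least privilege: this one resource only
})
```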

Avoid Premature Optimization

Premature optimization of systems or code is a trap that can be easily fallen into. Infrastructure as Code offers a wealth of opportunity to create reusable modules for resources. These modules bring advantages for sharing proven code between environments, but if used too early can greatly slow down team velocity. It is generally best to iterate your system until you reach a secure, stable MVP where the architecture is well established before beginning the abstraction into discrete modules.

Favor Managed PaaS/SaaS

Deciding among Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS) is straightforward early in a project. Managed offerings, such as Azure databases, are generally the right choice over IaaS with a self-managed database. SaaS or PaaS eliminates the busywork of security patching and backup management, allowing the team to focus on using the software to solve their unique business challenges. Time is our most valuable resource, and while IaaS has its place, it incurs the highest time expenditure to manage properly.

Tag Resources Aggressively

Resource tagging is fundamental to cost visibility in cloud environments. It is difficult to devise a complete resource tagging strategy, but at a bare minimum, all resources should include a project tag. Even if your project is the only one present in your cloud account, having your Infrastructure as Code set tags on all resources will make it easy to adopt uniform tagging in the future. Tags can also be useful in differentiating development, test, staging, and operational environments if they reside in a shared account. A minimal sketch follows.
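
A minimal sketch of enforcing common tags in IaC code; the tag names beyond `project` are suggestions:

```python
# Common tags merged into every resource definition by the IaC code.
COMMON_TAGS = {
    "project": "myapp",        # the bare-minimum tag for cost visibility
    "environment": "staging",  # differentiates environments in a shared account
    "owner": "platform-team",
}

def resource_tags(extra=None):
    # Every resource gets the common tags; callers add resource-specific ones.
    return {**COMMON_TAGS, **(extra or {})}

print(resource_tags({"component": "web"}))
```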

Summary

This is just the first post in a blog series on DevOps and Infrastructure as Code. Understanding the concepts outlined here is the foundation of successful cloud adoption. In future posts, we will apply these principles with specific examples that can be used as templates in your own DevOps journey.

PaaS & Cloud-Native Technologies

If you have worked with Azure for a while, you’re aware of the benefits of PaaS, such as the ability to have the cloud provider manage the underlying storage and compute infrastructure so you don’t have to worry about things like patching, hardware failures, and capacity management. Another important benefit of PaaS is the rich ecosystem of value-add services like database, identity, and monitoring as a service that can help reduce time to market.

So if PaaS is so cool, why are cloud-native technologies like Kubernetes and Prometheus all the rage these days? In fact, it is not just Kubernetes and Prometheus; there is a groundswell of related cloud-native projects. Just visit the cloud-native landscape to see for yourself.

Key Benefits of Cloud-Native Architecture

Here are ten reasons why cloud-native architecture is getting so much attention:

  1. Application as a first-class construct — Rather than speak in terms of VMs, storage, firewall rules, etc., cloud-native is about application-specific constructs, whether it is a Helm chart that defines the blueprint of your application or a service mesh configuration that defines the network in application-specific terms.
  2. Portability — Applications can run on any CNCF-certified cloud, as well as on-premises and on edge devices. The API surface is exactly the same.
  3. Cost efficiency — By densely packing the application components (or containers) on the underlying cluster, the cost of running an application becomes significantly more efficient.
  4. Extensibility model — A standards-based extensibility model allows you to tap into innovations offered by the cloud provider of your choice. For instance, using the service catalog and Open Service Broker for Azure, you can package a Kubernetes application with a service like Cosmos DB.
  5. Language agnostic — Cloud-native architecture can support a wide variety of languages and frameworks, including .NET, Java, Node, etc.
  6. Scale your ops teams — Because the underlying infrastructure is decoupled from the applications, there is greater consistency at the lower levels of your infrastructure. This allows your ops team to scale much more efficiently.
  7. Consistent and “decoupled” — In addition to greater consistency at the lower levels of infrastructure, application developers are exposed to a consistent set of constructs for deploying their applications, for example Pod, Service, Deployment, and Job. These constructs remain the same across cloud, on-premises, and edge environments. Furthermore, these constructs also help decouple developers from the underlying layers (cluster, kernel, and hardware) shown in the diagram below.
  8. Declarative model — Kubernetes, Istio, and other projects are based on a declarative, configuration-based model that supports self-healing. This means that any deviation from the “desired state” is automatically “healed” by the underlying system (see the toy reconciliation loop after this list). Declarative models reduce the need for imperative automation scripts that can be expensive to develop and maintain.
  9. Community momentum — As stated earlier, the community momentum behind CNCF is unprecedented. Kubernetes is the #1 open source project in terms of contributions. In addition to Kubernetes and Prometheus, there are close to 500 projects that have collectively attracted over $5B of venture funding! In the latest survey (August 2018), the use of cloud-native technologies in production has gone up 200% since December 2017.
  10. Ticket to DevOps 2.0 — Cloud-native delivers the well-recognized benefits of what is being termed “DevOps 2.0”: hermetically sealed and immutable container images, microservices, and continuous deployment. Please refer to the excellent book by Viktor Farcic.
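
To make the declarative model (point 8) concrete, here is a toy reconciliation loop; real systems like Kubernetes run this compare-and-heal cycle continuously against live cluster state:

```python
desired = {"replicas": 3}   # the declared "desired state"
observed = {"replicas": 3}  # what is actually running (simulated here)

def reconcile():
    # The control loop at the heart of declarative systems: compare and heal.
    if observed["replicas"] != desired["replicas"]:
        delta = desired["replicas"] - observed["replicas"]
        observed["replicas"] = desired["replicas"]  # "heal" the deviation
        print(f"healed: adjusted replicas by {delta:+d}")

observed["replicas"] = 1  # simulate two replicas crashing
reconcile()               # the system converges back to the desired state
```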

Now that we understand the key benefits of cloud-native technologies, let us compare it to a traditional PaaS offering:

| Attribute | Traditional PaaS | Cloud-Native as a Service |
| --- | --- | --- |
| Portability | Limited | Advanced |
| Application as a first-class construct | Limited (application construct limited to the specific PaaS service) | Advanced constructs including Helm, network and security policies |
| Managed offering | Mature (fully managed) | Maturing (some aspects of cluster management currently require attention) |
| Stateful applications | Advanced capabilities offered by database-as-a-service offerings | Some cloud-native support for stateful applications (however, cloud-native applications can be integrated with PaaS database offerings through the service catalog) |
| Extensibility | Limited | Advanced (extensibility includes Container Network Interface, Container Runtime Interface) |

Azure & CNCF

Fortunately, Microsoft has been a strong supporter of CNCF, joining back in 2017 as a platinum member. Since then, Microsoft has made significant investments in a CNCF-compliant offering in the form of Azure Kubernetes Service (AKS). AKS combines the aforementioned benefits of cloud-native computing with a fully managed offering; think of AKS as a PaaS solution that is also CNCF compliant.

Additionally, AKS addresses enterprise requirements such as compliance standards and integration with capabilities like Azure AD, Key Vault, Azure Files, etc. Finally, offerings like Azure Dev Spaces and Azure DevOps greatly enhance the CI/CD experience of working with cloud-native applications. I would be remiss not to mention the VS Code extension for Kubernetes, which also brings useful tooling to the mix.

Cloud-Native Use Cases

Here are a few key use cases for cloud-native applications. Microservices are something you would expect, of course. Customers are also using AKS to run Apache Spark. There is also thinking around managing IoT Edge deployments right from within the Kubernetes environment. Finally, “lift and shift to containers” is getting a lot of attention from customers as the preferred route for moving on-premises applications to the cloud. Please refer to our recent blog post on this very topic, “A “Modernize-by-Shifting” App Modernization Approach,” for more details!

Cloud-Native Scenarios


A big announcement from Microsoft this month: the introduction of Azure DevOps, the most complete offering of proven, modern DevOps tools and processes available in the public cloud. Used together, the Azure DevOps services span the entire breadth of the development lifecycle, so enterprises can modernize apps in a faster and more streamlined way.

What Is DevOps, Anyway?

DevOps solutions bring together people, processes, and technology, automating and streamlining software delivery to provide continuous value to your users.

If you want your next development or app modernization project to be a success, DevOps is the way to go.

High-performance DevOps enterprises achieve increased revenue with a faster time to market and produce solutions that are more powerful, flexible, and open. (Yes, Microsoft has been partnering with the open-source community to ship products that work for everyone.) New features can be safely deployed to users as soon as they’re ready vs. bundling them together in one large update down the road.

New Services & Tools in Azure DevOps

  • Azure Pipelines – Continuously build, test, and deploy in any language, to any platform and cloud. Azure Pipelines offers unlimited build minutes and 10 free parallel jobs for all open-source projects!
  • Azure Boards – Plan, track and discuss your work and ideas across teams with proven agile tools.
  • Azure Artifacts – With the click of a button, add artifacts to your CI/CD pipeline.
  • Azure Repos – Unlimited private Git repos (cloud-hosted) allow team members to build and collaborate better.
  • Azure Test Plans – These manual and exploratory testing tools will allow you to test and ship with ease and confidence.

Azure DevOps is what’s next for Visual Studio Team Services. VSTS users will be automatically upgraded without losing any functionality.

With the services provided with Azure DevOps, you can choose the tools and cloud services that you want to use and build end-to-end solutions for an enterprise-level toolchain. As long-time believers in both Azure and DevOps, we’re really excited about this offering and what it can offer our clients.


Drupal Lift and Shift to Azure

Through our enterprise collaboration and productivity services, we help many organizations create new experiences with intranets and portals to increase productivity and streamline collaboration while cutting operating costs. We have Drupal lift-and-shift experience, migrating existing portals to the cloud, as well as the ability to architect a custom solution from the ground up if needed.

The Background

Our client, a large financial investment firm, provides financial services and handles multi-billion-dollar assets. The organization had multiple websites running Drupal with a third-party vendor. The vendor provided a custom setup that was riddled with issues such as slow response times, excessive downtime, and high operating costs. As a Microsoft Gold Certified Partner with such experience, AIS was engaged to perform a Drupal lift and shift operation for the websites along with a CI/CD setup across multiple environments.


Requirement Gathering

As soon as AIS started the requirement gathering, it was clear that we were going to migrate the websites and databases to a PaaS-based model. Microsoft Azure was deemed the best choice given the plethora of options available for websites. The customer also already had an Enterprise Agreement with Microsoft Azure, making it the perfect fit.

The client had several customizations that required explicit use of the Apache web server, along with Drush and PHP 7.1, and they needed room for future modifications. People working with Drupal should be quite familiar with that—Drush is often known as the “Swiss Army Knife” for Drupal.

Selecting the Perfect Azure Offerings

Initially, we came up with the following options:

  1. Create Virtual Machines Hosting Web Servers & Databases
  2. Modify the modules requiring Apache and then host the websites on Windows/IIS based Web Apps
  3. Azure Web App for Containers

Option 1 meant that we had to set up the entire infrastructure from scratch. Setting up the infrastructure from scratch wasn’t an issue, but the overhead of maintenance and costs afterward made us look for other alternatives.

Option 2 required quite some rework, and our previous experience taught us that Drush has hiccups when running on Windows-based hosts.

Option 3 was the best choice because it allowed us to write a custom Docker image with Apache, PHP 7.1, and Drush, and it left room for future modifications. It was the perfect balance of customization, maintenance overhead, and cost. We also got added benefits like:

  1. Automated backups handled by Azure
  2. Continuous Deployment handled by the magic of Kudu
  3. Detailed metrics like Response time, number of requests etc. (Who doesn’t love detailed metrics??)
  4. Auto-scaling and more!

The Immediate Benefits of the Drupal Lift and Shift

This is the section which should most interest all of you readers. After moving the sites over to Azure, we immediately noticed a huge drop in…

  1. Time to deploy to various environments
  2. Response time. The response time came down to ~350 milliseconds from almost 1 second and above.
  3. The site was overall much faster…and the best part was that the client’s earlier infrastructure had four cores, but this new setup only had two cores and 3.5GB of RAM.

Over a period of almost 20 days, the container served almost 4.6 million requests without breaking a sweat, as you can see from the graph of CPU/RAM usage below:

Figure 1: Mind you, this App Service plan runs another website!

Check Out Our Successes

Read the full story, Investment services firm migrates websites to cloud to save money, improve reliability or check out our library of featured success stories! If you’re interested in what AIS can do for you, contact us today and tell us about the challenges you’re facing. There’s an excellent chance we can help.

As 2017 ends, it’s clear that while enterprises (public sector and commercial) are increasingly moving to the public cloud, they face significant challenges. Earlier in the year, I wrote about bridging the chasm between an enterprise’s expectations of cloud capabilities and the actual out-of-the-box features offered by cloud providers. Additional challenges include the foundational culture shift to cloud governance, DevOps and automation, security and compliance, and mapping an enterprise’s application portfolio to a complex array of cloud service options.

Here are five things you can do next year to better help enterprises adopt the public cloud: Read More…

Microsoft AppSource is a great destination for discovering line-of-business SaaS offerings, ISV apps and services offered by SIs. Today, we are proud to announce that the AIS Service Catalog (ASC) is now available in AppSource.

You can now get started with ASC in just minutes by clicking here – simply login to AppSource and then onboard your Azure subscription to ASC. You can leverage ASC as a SaaS or deploy a dedicated instance of ASC inside your subscription. Please feel free to contact us for more information.

We believe that Service Catalog is an important part of any Enterprise DevOps Toolchain. This is why, after years of guiding enterprises and government agencies through their journey to Enterprise Cloud DevOps, we built ASC as a Service Catalog for Azure.

In a nutshell, ASC allows developers to quickly provision enterprise-approved resources in Azure. ASC’s features and key benefits can be broken into two high-level areas: Read More…


I thought Per Werngren made some important observations in his recent article for Redmond Channel Partner Magazine. His main point: Systems Integrators (SIs) need to evolve their business models or risk disintermediation. As workloads are migrated to AWS and Azure, automation replaces the need for people to perform those tasks. This automation enables governance and compliance to standards, while also setting the stage for better downstream, fully-automated management, monitoring, and operations. This, of course, further reduces the need for people performing those roles.

Meanwhile, the new generation of intelligent PaaS services for predictive analytics, artificial intelligence, machine learning, etc. are also replacing jobs once done by hand. These new tools allow us to build better and more intelligent applications.


Despite all this potential for automation, we still regularly see organizations allowing contractors to move workloads manually. It’s simply in a staffing contractor’s best interest to have people do this, despite it being a time-consuming and error-prone process. But why would an SI recommend automation and reduce their long-term revenue? Read More…

AIS’ principal solutions architect Brent Wodicka stopped by Federal News Radio for a discussion on DevOps with  Federal Tech Talk’s John Gilroy.

They were joined by Nathen Harvey, VP of Community Development at Chef Software, and David Bock, DevOps Services Lead at Excella Consulting. Each offered a unique and practical perspective on the concept of DevOps and how it’s working for federal government IT.

You can listen to the full show over at Federal News Radio!

Previously in another blog post, I laid out a quick summary of Continuous Integration (CI) and Continuous Delivery (CD) in Visual Studio Team Services (VSTS). Today we’re going to expand a bit on those DevOps processes to better suit your (or your clients’!) needs.

With CI and CD, a build agent is required—that is, a place where your code is sent to be compiled and then deployed. By default, VSTS gives you the option to use a hosted agent. This is entirely a cloud solution; you can just choose one of the hosted agents to build and deploy your code and you’re all set. But there are a couple of drawbacks with this…

Read More…