Executive Summary

This blog post is for anyone working with AWS CloudFormation templates who is looking for ways to improve template versatility by including mappings. It begins by describing how the scenario was discovered and then walks through the solution for generating mappings.

The Scenario

I was working with AWS CloudFormation and looking for a starting point for a Windows Server 2012 R2 with Active Directory template. The AWS CloudFormation Templates page has plenty to start with; I browsed the sample templates by AWS region, selected the US West (Oregon) region, and downloaded Windows_Single_Server_Active_Directory.template to see if I could use it as a starting point.

While reviewing the template, I noticed that it used a Region-to-AMI mapping to determine the Amazon Machine Image (AMI) to use, and that a Windows2012r2 key was available. As of this writing, the AMI listed for US West (Oregon) (us-west-2) was ami-07c77fe35092981e9.

AMI West Region

Unfortunately, I could not confirm that this image was valid: looking it up returned null, when according to the documentation it should have produced a DescribeImagesResponse. I concluded that the template contained outdated or incorrect AMI IDs.

Outdated and Incorrect AMI ID image
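
To reproduce this kind of check yourself, a minimal sketch using the AWS Tools for PowerShell looks like the following (Get-EC2Image wraps the DescribeImages API; the AMI ID is the one from the sample template):

# Look up the AMI ID from the sample template (DescribeImages under the hood).
# A valid, accessible AMI comes back as an image object.
Get-EC2Image -ImageId "ami-07c77fe35092981e9" -Region "us-west-2"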

I started wondering how I could build a mapping of my own, so I used the Get-SSMLatestEC2Image cmdlet to list the latest Windows images for the us-west-2 region, as shown below.
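
Here is a minimal sketch of that lookup (assuming the AWS.Tools.SimpleSystemsManagement module, or the monolithic AWSPowerShell module, is installed):

# List the latest Windows AMIs published through the SSM public parameters
# for a single region. Each entry pairs an image name with its current AMI ID.
Get-SSMLatestEC2Image -Path ami-windows-latest -Region us-west-2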

List of Images for West Region

I located the Windows_Server-2012-R2_RTM-English-64Bit-Base image and decided to build a custom mapping for it.

The Solution

I determined that the solution would be to write a PowerShell script that iterates over an array of regions, locates the desired image, and generates a JSON string that can be inserted into the template. First, you will need to install the AWS Tools for PowerShell if you have not already.
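
If you need to install the tooling, one way to do it is shown below (a sketch, assuming you install from the PowerShell Gallery; the monolithic AWSPowerShell module is one of several distribution options):

# Install the monolithic AWS Tools for PowerShell module from the PowerShell Gallery.
# (The modular AWS.Tools.* distribution is an alternative.)
Install-Module -Name AWSPowerShell -Scope CurrentUser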

Next, I put together the following script to perform the steps above. I decided to use the same region list as the sample template.

$regions = @("us-east-1","us-west-2","us-west-1","eu-west-1","eu-west-2","eu-west-3",
    "eu-central-1","eu-north-1","ap-northeast-1","ap-northeast-2",
    "ap-northeast-3","ap-southeast-1","ap-southeast-2","ap-south-1",
    "us-east-2","ca-central-1","sa-east-1","cn-north-1","cn-northwest-1")

$searchString = "Windows_Server-2012-RTM-English-64Bit-Base"
$friendlyName = "Windows2012r2"
foreach($region in $regions)
{
    
    try {
        
        $ami=Get-SSMLatestEC2Image -Path ami-windows-latest -ImageName $searchString -Region $region
        Write-Host "`"$region`"        : {`"$friendlyName`" : `"$($ami)`"},"
    }
    catch {
        #do nothing, no access to region likely

I decided to add a “friendly name” for the image so the resulting JSON reads more cleanly. Here are the results:

Friendly Name Results for Cleaner JSON

Copy and paste this text into the template's Mappings section, and you have a custom mapping:

Custom Mapping Template
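
For reference, the pasted result looks roughly like the fragment below. The mapping name RegionMap is an illustrative assumption (the sample template uses its own mapping name), and the ami-x… values are placeholders for whatever the script returned:

"Mappings" : {
    "RegionMap" : {
        "us-east-1" : { "Windows2012r2" : "ami-xxxxxxxxxxxxxxxxx" },
        "us-west-2" : { "Windows2012r2" : "ami-xxxxxxxxxxxxxxxxx" },
        "us-west-1" : { "Windows2012r2" : "ami-xxxxxxxxxxxxxxxxx" }
    }
}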

This allows you to use the Fn::FindInMap intrinsic function to look up the AMI for the current region when defining the ImageId.

FindIn Map Function to locate AMI
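
In the instance definition, the lookup would look roughly like this, again assuming the illustrative RegionMap name from the fragment above:

"ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "Windows2012r2" ] }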

Conclusion

As you can see, creating a custom mapping is not that difficult. This approach demonstrates how to retrieve data from AWS using PowerShell and generate JSON that can be used in a CloudFormation template. It can help keep a CloudFormation template easy to read, organized, and more versatile.

If you are like me, you have used cloud services in a limited fashion to create VMs for testing, or perhaps you have used them extensively. Either way, you'd like to gain an understanding of the broader set of services offered by cloud providers. In my case, this was driven by my recent attainment of an Engagement Manager position and my desire to help AIS expand our business by developing new opportunities. I realized I needed at least a top-level understanding of our offerings in order to recognize potential use cases AIS could present to solve problems, offer more cost-effective alternatives to current solutions, and develop completely new solutions to improve client business. Starting with Microsoft's Azure and Amazon's AWS platforms was obvious, as these are the top focus of AIS and the industry as a whole.

What was not obvious was where to start. Both platforms are not only extremely broad but also moving targets. I needed a way to dip into this process without drowning in information, all while keeping up with the responsibilities of my day job. I looked at classroom training options and YouTube videos, and kept researching until I stumbled upon two paths. These paths not only provided a nice prepackaged set of materials, but could be completed at my own pace, at home, and resulted in certifications. I will get to the details, but first a word about certifications.

I am sure many of you rolled your eyes at the “certifications” part of that second-to-last sentence. Yes, certifications are not as valuable an indicator of a person's skills and knowledge in an area as real-world experience. However, they provide the following benefits, in order of least to most important:

  1. Provide a good starting point for someone who has no current projects in an area.
  2. Fill knowledge gaps that even a person with experience in an area has, especially in those services or techniques that are not used often.
  3. Provide value to AIS in maintaining various statuses.
  4. Provide a potential client with proof that you at least have an understanding of the basics.
  5. Most importantly, they result in a $500 bonus from AIS, and reimbursement of testing and training costs!

The paths I found are the Microsoft Azure Fundamentals learning path and certification, and the Amazon AWS Cloud Practitioner training and certification. Both include video training: the Azure path contains an estimated ten hours of content and the AWS training about five hours. The Azure estimate was spot on; the AWS training took me a bit longer due to my complete lack of experience with the platform.

Microsoft Azure Fundamentals

This path included videos, reading, hands-on exercises, and quick knowledge checks. It can be completed with an Azure account you create just for the training, or with an account linked to the AIS subscription if you have one. Both the reading and the videos provide just enough information without getting bogged down in the minutiae. The only thing I had done with Azure prior to the training was create a few VMs to set up SharePoint environments; that was years ago, and I didn't do much within those environments.

For me, most of the content was new. I believe that someone with more in-depth experience would still find the training fills in gaps with specific details.

These were the topics I found either completely new or helpful in understanding how to look at and/or pitch Azure services to clients:

  • Containers, app services, and serverless options and how they work
  • Reducing latency with Traffic Manager
  • Azure policies and tags to enforce standards
  • Review of data centers, region pairs, geographies, availability zones
  • Various ways to predict and manage costs, such as the pricing calculators, Azure Cost Management, and Azure Advisor

The training took me probably two-thirds of the estimated time, after which I went through the knowledge checks for each section once more. After that, I spent maybe an hour reviewing some topics from the beginning. From there, I took the exam and passed. The exam process was interesting: it can be done from home using proctoring software that lets someone watch you. Prior to the exam, you are required to show the proctor the entire room and remove anything that might enable you to cheat.

After I completed the certification process, I submitted the cost of the exam ($100) as an expense as well as submitted my request for a certification bonus. I received both in a timely manner. See links at the end of this post for materials concerning reimbursements and bonuses. Don’t forget approval from your EM/AE prior to incurring any costs for which you might want reimbursement and to submit your updated certifications spreadsheet to the AIS PI Team.

AWS Cloud Practitioner

This path consists exclusively of videos. In my opinion, the content is not as straightforward as the Azure Fundamentals content, and the videos cannot be sped up, which can be very frustrating. The actual content was also a bit difficult to find; I have provided links at the end of this post for quick reference. Much of the video content involves Linux examples, so PuTTY and other command-line tools were used. This added a further layer of complexity that I felt took away from the actual content (do I really need to know how to SSH into something to learn about the service?).

As far as content goes, everything is video; there is no reading, hands-on exercises, or knowledge checks. I felt the reading in the Azure path broke things up, the hands-on exercises crystallized a few things for me, and the knowledge checks ensured I was on track, so I would like to see Amazon add some of these elements. That being said, the videos are professionally done and include helpful graphics. Even with zero AWS experience, I found I was able to grasp the concepts, and the videos do a decent job of presenting use cases for each service.

My biggest complaint is the inability to speed up videos that are obviously paced for the lowest common denominator, and I often found my admittedly short attention span waning. Something that helped was taking notes, which allowed me to listen, write, and not get bored.

Amazon provides a list of recommended prep (see links below) that includes self-paced training, a one-day classroom option, an exam guide, a list of four core whitepapers with links to many others, practice exams, and a link to schedule the certification exam. I scanned the whitepapers; they all looked useful, but not necessary to knock out the exam. I say this with confidence because I was able to pass without a detailed review of them. My technique was to outline the videos and then review the outline over the course of a couple of weeks.

Summary

Whether you are a budding developer or analyst looking for a broad overview, a senior developer who wants to fill gaps, or a new EM like me who wants a bit of both, the Microsoft Azure Fundamentals learning path/certification and the Amazon AWS Cloud Practitioner training/certification are good places to start. AIS will cover the costs and provide you with some additional scratch for your effort. Obtaining these certifications also improves AIS's standing with providers, clients, and the community as a whole. It greatly improves your value to clients, meets the criteria of certain AIS Career Paths and Competencies, and, who knows, you might learn something!

Links:

  1. Azure Fundamentals Learning Path: https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/
  2. Azure Fundamentals Cert: AZ-900 Microsoft Azure Fundamentals Exam.
  3. AWS Cloud Practitioner Preparation and Cert: AWS Recommended Prep
  4. Training Reimbursements and Certification Bonuses: https://appliedis.sharepoint.com/sites/HR/Pages/Additional-benefits.aspx
  5. Submitting certification inventory to PI Team: Reach out to AIS – Process Improvement Team (ais-pi-team@appliedis.com) for more info.

The recent #AWS and #Azure outages over the past two weeks are a good reminder of how seemingly simple problems (the failure of a power source or an incorrect script parameter) can have a wide impact on application availability.

Look, the cloud debate is largely over and customers (commercial, government agencies, and startups) are moving the majority of their systems to the cloud. These recent outages are not going to slow that momentum down.

That said, all the talk of 3-4-5 9s of availability and financially backed SLAs has lulled many customers into expecting utility-grade availability for their cloud-hosted applications out of the box. This expectation is unrealistic given the complexity of the ever-growing moving parts in a connected global infrastructure, dependence on third-party applications, multi-tenancy, commodity hardware, transient faults due to a shared infrastructure, and so on.

Unfortunately, we cannot eliminate such cloud failures. So what can we do to protect our apps from them? The answer is to conduct a systematic analysis of the different failure modes and have a recovery action for each failure type. This is exactly the technique, failure mode and effects analysis (FMEA), that other engineering disciplines (like civil engineering) have used for failure planning. FMEA is a systematic, proactive method for evaluating a process to identify where and how it might fail and to assess the relative impact of different failures, in order to identify the parts of the process that are most in need of change. Read More…

Amazon Web Services (AWS) CTO Werner Vogels offers this great piece of cloud advice: “Treat everything as a programmable resource, including data centers, networks, compute, storage and load balancers.”

In other words, automate every aspect of your (cloud-based) infrastructure.

Given AIS’ years of experience with SharePoint, we are always looking for ways to make the underlying infrastructure more cost-effective, scalable, and robust. Fortunately, the benefits of automation apply equally to a SharePoint 2013 farm hosted in the cloud — whether it’s the ability to dynamically provision a SharePoint 2013 farm on the fly, the ability to scale up and down based on load, or the ability to make the SharePoint 2013 farm more fault-resilient.

We’ve written about two automated deployment approaches to SharePoint 2013: one for Amazon Web Services and one for Azure. In case you missed them…

Our AWS-based SharePoint 2013 script and source code can be found here.

Our Windows Azure-based SharePoint 2013 script and source code can be found here.

In this blog post I’ll discuss some post-release reporting issues that we faced on one of our projects and the solutions we implemented. On the technology side, our production environment consisted of SQL Server 2008 R2 and an MVC 4.0 application, both hosted in Amazon Web Services.

The Problem

After the production release, the reporting system was not responding as users expected. For most of the high-volume reports (50K to 200K rows in the report output), we were getting request timeout errors. The client SLA for response time was two minutes; hence any report, big or small, had to return data within two minutes. All the reports were built with SQL Server Reporting Services 2008 R2, and in all there were close to 40 reports with such timeout issues. Read More…

I’ve been reading a lot about the sweeping organizational changes at Microsoft. It’s always interesting to analyze and attempt to interpret their strategy and internal politics. (For example, why is the Dynamics business still separate? Is it being positioned to be sold? Probably not, but fun to consider.)

However, I am more drawn to the larger changes the re-org is enabling. The external press has seemed negative about the actions of Microsoft’s executive leadership ever since Bill Gates left. While I may not agree with every choice Steve Ballmer has made, when you really stop and think about how they have transformed themselves over the past six years, it’s pretty amazing — especially when set in juxtaposition to the lack of change at other lumbering IT giants. Microsoft is well on its way to transforming from a worldwide monopoly of “Windows and Office” to a “devices and services” business. Read More…

At AIS, we’re committed to delivering transformative solutions and outcomes, and that’s exactly what we’re doing for Healthcare Receivable Specialists, Inc. (HRSI). Everyone knows that data redundancy can be a productivity killer, so we’re utilizing a cloud-based hosting solution via Amazon Web Services (AWS) for their new custom application that will help eliminate that problem. The AWS hosting option will also keep costs down without sacrificing performance and positioning for HIPAA compliance. Additionally, our solution provides a self-service means to access the status of their applications throughout their entire business process.
Introduction by Vishwas Lele:

Amazon Web Services (AWS) CTO Werner Vogels offers this great piece of cloud advice: “Treat everything as a programmable resource, including data centers, networks, compute, storage and load balancers.” In other words, automate every aspect of your (cloud-based) infrastructure. There are significant benefits in following Werner Vogels’ advice:

  1. You can build systems that are cost-aware by keeping only the parts of the system that are needed and turning off everything else.
  2. Capacity planning is hard. It is much better to build capacity dynamically based on need.
  3. Failures are not an exception but the rule. Rather than building complex logic to handle exceptions, make your systems fault-resilient by provisioning failover resources as needed.
  4. Make your systems more agile – systems that can scale in the direction of the business rather than according to a design-time scaling criterion.

Given AIS’ years of experience with SharePoint, we are always looking for ways to make the underlying infrastructure more cost-effective, scalable, and robust. Fortunately, the aforementioned benefits of automation apply equally to a SharePoint 2013 farm hosted in the cloud — whether it is the ability to dynamically provision a SharePoint 2013 farm on the fly, the ability to scale up and down based on load, or the ability to make the SharePoint 2013 farm more fault-resilient.

But it all begins with developing robust automation scripts to provision and manage a SharePoint 2013 farm. This brings us back to the purpose of this blog post by Abhijit Kumar. Abhijit discusses an automated approach for provisioning a SharePoint 2013 farm using Amazon Web Services. It is noteworthy that the automation approach we describe below is based solely on PowerShell. This might come as a surprise, given that AWS offers services like CloudFormation, which enables the creation of AWS resources, combined with open-source tools such as Opscode Chef and Puppet, which enable the installation and configuration of applications. We chose to rely solely on PowerShell for the following reasons:

  1. PowerShell is Microsoft’s canonical task automation framework, consisting of a command-line shell and a scripting language that has full access to COM and WMI, giving Windows administrators control over every aspect of Windows OS-based machines.
  2. The PowerShell scripting language is built on the .NET Framework. This means a PowerShell script can take advantage of .NET Framework capabilities such as Windows Workflow Foundation (WF). We use WF extensively to manage long-running automation scripts (a minimal sketch follows this list).
  3. AWS CloudFormation is not available in AWS GovCloud. AWS GovCloud is an isolated AWS region designed to allow U.S. government agencies and customers with sensitive workloads to address their specific regulatory and compliance requirements. Given that AIS serves a large number of customers with stringent regulatory and compliance requirements, we needed an automation approach that worked in AWS GovCloud.
  4. If you read our earlier blog post about SharePoint 2013 automation on Windows Azure, you will notice that we have been able to achieve a high level of reuse between the Windows Azure and AWS scripts for SharePoint 2013. While the WF-based provisioning logic is largely the same, Azure Service Management SDK calls are replaced with AWS Tools for Windows PowerShell. This reuse gives us the flexibility to offer our customers a choice between the industry-leading IaaS platforms – AWS and Windows Azure.
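
As a small illustration of point 2, here is a minimal sketch of a PowerShell workflow (available in Windows PowerShell 3.0 through 5.1), which runs on top of Windows Workflow Foundation; the workflow name and steps are illustrative only, not the actual provisioning logic:

workflow Invoke-ProvisioningSketch
{
    # Each InlineScript block represents one provisioning step; Checkpoint-Workflow
    # persists progress so a long-running script can resume after an interruption.
    InlineScript { Write-Output "Provisioning step 1" }
    Checkpoint-Workflow
    InlineScript { Write-Output "Provisioning step 2" }
}

Invoke-ProvisioningSketch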

Abhijit’s post below walks you through the script to deploy a SharePoint 2013 farm on AWS in an automated manner. I am confident that you will find it useful. Please give the scripts a try and let us know.

Amazon Web Services (AWS) provides an extremely flexible set of services for hosting web applications in the cloud, with a web-based console for selecting options to quickly provision a set of IT resources. This post will explore the various aspects of hosting a custom .NET web application in AWS, focusing on high-availability options and disaster recovery scenarios, and how to address them under cost constraints.

When building a solution in AWS you have to understand the difference between affinity and availability, and the terminology that Amazon uses. We will define affinity as the physical location of resources within a data center and availability as the isolation of resources for disaster recovery scenarios.

The AIS solution I’ll reference throughout this post uses a Virtual Private Cloud (VPC) to enable our engineers to develop a multi-tiered application within a private, subnetted network split across two Availability Zones. A VPC, as Amazon defines it, allows “you to create a virtual network topology—including subnets and route tables—for your Amazon Elastic Compute Cloud (Amazon EC2) resources.” Using a VPC ensures you have high affinity for your EC2 resources. Availability Zones refer to the geographically separated data centers Amazon uses for hosting EC2 resources. Therefore, your disaster recovery architecture must account for the placement of EC2 resources in different Availability Zones.
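
As a rough illustration of splitting a private network across two Availability Zones, here is a minimal sketch using the AWS Tools for PowerShell (the CIDR ranges and zone names are illustrative, not values from our actual solution):

# Create a VPC to hold the multi-tiered application
$vpc = New-EC2Vpc -CidrBlock "10.0.0.0/16" -Region us-west-2

# Create one subnet in each of two Availability Zones so that resources
# can be placed separately for disaster recovery
$subnetA = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.1.0/24" -AvailabilityZone "us-west-2a" -Region us-west-2
$subnetB = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.2.0/24" -AvailabilityZone "us-west-2b" -Region us-west-2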

High Availability vs. Disaster Recovery in AWS

To muddy the terminology waters further, we have the concept of high availability, where the effects of hardware or software failure are essentially masked and downtime for end users is reduced, such as with Microsoft SQL Server. Our solution had to address the issue of high availability and allow for a swift recovery in the event of a disaster or failure of EC2 resources in one Availability Zone. A high-level picture of what we needed to achieve is shown below.

Read More…

I was recently asked to write my own custom performance metric and publish it to Amazon’s CloudWatch using PowerShell.

Part I: How do I get this thing running already?

I initially used this blog post as a general guide, but since I already had some experience with PowerShell, the real learning for me was how to call the API through .NET. (There is a second part, which actually shows you how to publish the metric. Unfortunately, his test “Tutorial” namespace ended up in the wrong region [US East] as compared to my instance [US West Oregon].)

I figured out the correct way to do this by teasing apart the free community scripts available on AWS, which I will discuss later (see Part 2).
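
For reference, the basic publish call ends up looking roughly like the sketch below (the namespace, metric name, and value are illustrative; Write-CWMetricData comes from the AWS Tools for PowerShell and wraps the CloudWatch PutMetricData API):

# Construct a single custom metric datum (namespace and metric name are made up for this example)
$datum = New-Object Amazon.CloudWatch.Model.MetricDatum
$datum.MetricName = "MyCustomMetric"
$datum.Value = 42
$datum.Unit = "Count"
$datum.Timestamp = (Get-Date).ToUniversalTime()

# Publish it to CloudWatch in the region where the instance lives
Write-CWMetricData -Namespace "Custom/Demo" -MetricData $datum -Region us-west-2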

Read More…