Executive Summary

This blog post is for anyone working with AWS CloudFormation templates who is looking for ways to improve template versatility by including mappings. It begins by describing how the scenario was discovered and then walks through the solution for generating mappings.

The Scenario

I was working with AWS CloudFormation and looking for a starting point for a Windows Server 2012 R2 template with Active Directory. The AWS CloudFormation Templates page has plenty to start with; I browsed the sample templates by AWS region and selected the US West (Oregon) region. I downloaded the Windows_Single_Server_Active_Directory.template to see if I could use it as a starting point.

While reviewing the template, I noticed that it used a Region-to-AMI mapping to determine the Amazon Machine Image (AMI) to use, and that a Windows2012r2 key was available. At the time, the mapped AMI for US West (Oregon) (us-west-2) was ami-07c77fe35092981e9.

AMI West Region

Unfortunately, I was unsure whether this image was still valid, because looking it up returned null instead of the DescribeImagesResponse the documentation says it should produce. I determined that the template had outdated or incorrect AMI IDs in it.

Outdated and Incorrect AMI ID image
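You can check an AMI ID yourself with the AWS Tools for PowerShell; for example, using the ID from the template's mapping:

# Describe the AMI; a valid ID returns its image details
Get-EC2Image -ImageId "ami-07c77fe35092981e9" -Region us-west-2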

I started wondering how I could build a mapping of my own. I used the Get-SSMLatestEC2Image command to get a list of images for the us-west-2 region.

List of Images for West Region
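For reference, pulling that list for a single region looks like this:

# List the latest Windows AMI names and IDs published under the SSM parameter path
Get-SSMLatestEC2Image -Path ami-windows-latest -Region us-west-2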

I located the Windows_Server-2012-R2_RTM-English-64Bit-Base image and decided to build a custom mapping for it.

The Solution

I determined the solution would be to write a PowerShell script that would iterate over an array of regions, locate the desired image, and then generate a JSON string that could be inserted into the template. First, you will need to install the AWS PowerShell Tools if you have not already.
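If you still need the tools, one way to install the relevant modules is shown below (the modular AWS.Tools packages are one option; the single AWSPowerShell module also works):

# Install the modules used in this post (EC2 for Get-EC2Image, SSM for Get-SSMLatestEC2Image)
Install-Module -Name AWS.Tools.EC2, AWS.Tools.SimpleSystemsManagement -Scope CurrentUser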

Next, I put together the following script that would perform the steps above. I decided to use the region list used in the sample template.

$regions = @("us-east-1","us-west-2","us-west-1","eu-west-1","eu-west-2","eu-west-3",
    "eu-central-1","eu-north-1","ap-northeast-1","ap-northeast-2",
    "ap-northeast-3","ap-southeast-1","ap-southeast-2","ap-south-1",
    "us-east-2","ca-central-1","sa-east-1","cn-north-1","cn-northwest-1")

$searchString = "Windows_Server-2012-RTM-English-64Bit-Base"
$friendlyName = "Windows2012r2"
foreach($region in $regions)
{
    
    try {
        
        $ami=Get-SSMLatestEC2Image -Path ami-windows-latest -ImageName $searchString -Region $region
        Write-Host "`"$region`"        : {`"$friendlyName`" : `"$($ami)`"},"
    }
    catch {
        #do nothing, no access to region likely

I decided to add a “Friendly Name” segment that makes the generated JSON cleaner. Here are the results:

Friendly Name Results for Cleaner JSON

Copy and paste this text into the template's Mappings section, and you have a custom mapping:

Custom Mapping Template
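Pasted into the template, the mapping takes roughly this shape (the mapping name and AMI IDs below are placeholders for whatever your script produced):

"Mappings" : {
    "AWSRegion2AMI" : {
        "us-east-1" : {"Windows2012r2" : "ami-xxxxxxxxxxxxxxxxx"},
        "us-west-2" : {"Windows2012r2" : "ami-xxxxxxxxxxxxxxxxx"},
        "us-west-1" : {"Windows2012r2" : "ami-xxxxxxxxxxxxxxxxx"}
    }
}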

This allows you to use the Fn::FindInMap function to locate the AMI for the current region when defining the ImageId.

FindInMap Function to locate AMI
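In the instance resource, that lookup might look like the following (again assuming the mapping is named AWSRegion2AMI):

"ImageId" : {
    "Fn::FindInMap" : [ "AWSRegion2AMI", { "Ref" : "AWS::Region" }, "Windows2012r2" ]
}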

Conclusion

As you can see, creating a custom mapping is not that difficult. This approach demonstrates how to retrieve data from AWS using PowerShell and generate JSON that can be used in a CloudFormation template. It can help keep a CloudFormation template easy to read, organized, and more versatile.

Building cross-platform applications is an excellent option for many organizations that need to support different technology vendors. Today, we can build websites that reach a massive audience regardless of the device that they use. Additionally, cross-platform mobile applications are an attractive option. All that is left now are the beloved applications that sit on our desktops. With tools such as Electron, desktop applications do not have to suffer from a lack of operating system support.

Electron is a framework that allows developers to create cross-platform desktop applications using a combination of HTML, CSS, and JavaScript. What makes Electron a viable solution is that it pairs with JavaScript frameworks very well. This means that there is generally a lower learning curve when injecting JavaScript into an Electron solution. Since JavaScript is such a popular option in today’s application development domain, it can make sense to leverage that experience in an Electron solution.
Developers can use any modern JavaScript framework, such as Angular, React, or Vue, within an Electron application.

For this post, I will be discussing how you can pair Electron with Vue to build your next cross-platform desktop application. I will cover some of the reasons why Vue should be considered as a viable integration option and highlight some useful features of Electron that I have been exposed to.

So Why Use Vue with Electron

While there is no perfect framework to pair with Electron, Vue offers some advantages. It is worth noting that the choice of the framework will depend on several factors. These are just general observations and findings from my experience with Vue.

Reasons to consider Vue:

  • A lower learning curve for new developers
  • A well-organized template for Vue files
  • Less boilerplate code bloating the project
  • Easy customization of the application builder (this might depend on how you scaffold your Electron solution)
  • Swift prototyping and building of small-to-medium sized applications

A word of caution: if you choose Vue with Electron, the documentation is scarce in some areas. I suspect this is because Vue is not as popular a JavaScript framework as React or Angular.

During my time using Electron with Vue, I have come across several useful features. I will provide a few examples of these features, and hopefully this will shorten the path to implementing them in your next Electron/Vue application.

Custom Menus

One common scenario when building Electron applications is coming up with your own menu schema. Electron ships with a default menu that includes options you would find in your Chrome browser, since Electron runs on Chromium. That said, the default menu will probably not be good enough for a production build of your application. Depending on your situation, you might want to remove certain menu features from your users, such as the developer tools or the ability to force reload the application. Customizing your application's menu is a straightforward process.

You can customize the menu within the Electron main process file.

Figure One Electron Vue
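As a rough sketch (the labels and roles below are illustrative, not the exact menu from my application), a trimmed-down menu might be defined like this:

// main.js – build a trimmed-down application menu without dev tools or force reload
const { app, Menu } = require('electron');

const menuTemplate = [
  {
    label: 'File',
    submenu: [
      { role: 'quit' }
    ]
  },
  {
    label: 'Edit',
    submenu: [
      { role: 'undo' },
      { role: 'copy' },
      { role: 'paste' }
    ]
  }
];

app.whenReady().then(() => {
  Menu.setApplicationMenu(Menu.buildFromTemplate(menuTemplate));
});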

Custom Events

In my experience using Vue with Electron, I have used Electron specific events to accomplish certain tasks. Depending on your application, you will more than likely need to use events. One common scenario would be to show a confirmation popup window when you want to exit the application. This not only utilizes Electron’s built-in events, but it uses custom Electron events as well. Below is a simple implementation for this.

First, a preload.js file is needed so that Electron events can be properly used within Vue components.

Figure 2 Electron Vue
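A minimal preload script might look like this, assuming contextIsolation is enabled (the electronApi name is an assumption of this sketch):

// preload.js – expose a narrow IPC bridge to the Vue renderer
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('electronApi', {
  send: (channel, data) => ipcRenderer.send(channel, data),
  on: (channel, callback) =>
    ipcRenderer.on(channel, (event, ...args) => callback(...args))
});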

Next, the preload file will need to be referenced in the Electron main process file. The preload file path will depend on both the environment and operating system the app is running on. Additionally, the preload file will need to be added as an extra resource within a builder config file for production builds. This process will be discussed in a later section.

Within the method for creating the Electron window:

Figure 3 Electron Vue
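Wiring the preload file into the window might look roughly like this; the development versus production path handling reflects the extra-resources layout described later and is an assumption of this sketch:

// main.js – point the BrowserWindow at the preload script
const { BrowserWindow } = require('electron');
const path = require('path');

function createWindow() {
  const isDev = process.env.NODE_ENV === 'development';
  return new BrowserWindow({
    width: 1024,
    height: 768,
    webPreferences: {
      contextIsolation: true,
      // in production the preload file sits in the extra resources folder
      preload: isDev
        ? path.join(__dirname, 'preload.js')
        : path.join(process.resourcesPath, 'Vue', 'preload.js')
    }
  });
}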

In the Electron main process file, you can then send out a custom event to any Vue components that are subscribed to it.

Figure 4 Electron Vue
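A sketch of the main-process side of that exit confirmation (the confirm-exit and exit-confirmed channel names and the confirmedQuit flag are this sketch's assumptions):

// main.js – ask the renderer to confirm before actually quitting
const { app, ipcMain } = require('electron');

let confirmedQuit = false;

app.on('before-quit', (event) => {
  if (!confirmedQuit) {
    event.preventDefault();                 // cancel the quit for now
    win.webContents.send('confirm-exit');   // win is the BrowserWindow created earlier
  }
});

ipcMain.on('exit-confirmed', () => {
  confirmedQuit = true;
  app.quit();
});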

A Vue component can then perform certain logic once this event occurs.

Figure 5 Electron Vue
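And a sketch of the Vue side, using the bridge exposed by the preload script (window.electronApi is an assumption of these sketches):

// ExitDialog.vue – script section reacting to the custom event
export default {
  data() {
    return { showExitDialog: false };
  },
  mounted() {
    window.electronApi.on('confirm-exit', () => {
      this.showExitDialog = true;   // show the confirmation popup
    });
  },
  methods: {
    confirmExit() {
      window.electronApi.send('exit-confirmed');
    }
  }
};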

Here are the results from clicking the exit button in the application.

Welcome to Your Vue App Figure 6

Referencing External Resources

There might come a time when your application needs to reference dependent static resources. This could be a spreadsheet file containing data for U.S. states, cities, and zip codes, or a CSV file containing data about countries. It could also be a preload file, as previously mentioned in this post.

When your Electron/Vue application is built for production, the whole application gets bundled into an asar file. If your application needs to load external files, such as a CSV file or a preload file, it will need to point to each specific file, which no longer sits where it did during development. So how can you reference a dependent file from your Electron/Vue application? There are essentially two things you need to do.

1 – Customizing the Electron Builder

The first step is to customize the Electron builder within a Vue config file. Note that I have used electron-builder for building Electron applications, and it is what this example uses.

You will want to create a file called vue.config.js that resides in the root directory of your Electron/Vue solution. This file will, again, allow you to customize the builder for your Electron/Vue application. There is an option to specify any extra resources (or external resources) that your application may reference. The builder’s configuration will need to know where your resource is in your solution and where to place it.

Your config file should look something like this.

Electron Vue Figure 7
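A sketch of that configuration, assuming the Vue CLI Plugin Electron Builder handles packaging (the source folder and the Vue target folder are illustrative):

// vue.config.js – declare extra resources for electron-builder
module.exports = {
  pluginOptions: {
    electronBuilder: {
      builderOptions: {
        extraResources: [
          {
            from: 'src/assets/data',   // where the files live in the solution
            to: 'Vue',                 // folder created under Resources in the build
            filter: ['**/*']
          }
        ]
      }
    }
  }
};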

After the application gets built and installed onto the local machine, any resources within the extra resources section will be placed under the following paths:

  • Windows: C:\Users\<username>\AppData\Local\Programs\<app name>\Resources\Vue
  • Mac: /Applications/<app name>.app/Contents/Resources/Vue

2 – Referencing the External Resource

The last step is to reference the external resource, using the appropriate path, within your application. It is recommended that you specify both the development and the production path to your resource so that both environments can access it. This can be accomplished within the Electron main process file, because there we can utilize Node's __dirname variable to acquire the full path.

Figure 8 Electron Vue
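A sketch of that path logic; the file name, the folder names, and the assumption that the main process file sits at the root of app.asar in production are all illustrative:

// main.js – resolve a resource path for development vs. production
const path = require('path');

const isDev = process.env.NODE_ENV === 'development';

const statesFile = isDev
  ? path.join(__dirname, 'assets', 'data', 'states.txt')
  : path.join(__dirname, '..', 'Vue', 'states.txt'); // Resources/Vue sits next to app.asar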

From there, you can add this logic within a custom, or predefined, Electron event. The event could either process the file and send the results back to any components subscribed to a “reply” event, or the event could send the full file path to any subscribed components for them to process the file.

In this example, I used an approach where the file is processed in the Electron main process file, and the file contents are sent to a subscribed component.

Below is a Vue component publishing an event to get a file's contents and then listening for a reply event.

Figure 9 Electron Vue
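Roughly, that component's script might look like this (channel names follow the earlier sketches):

// StatesList.vue – script section: request the file and wait for the reply
export default {
  data() {
    return { states: [] };
  },
  mounted() {
    window.electronApi.on('states-file-reply', (contents) => {
      this.states = contents.split('\n');
    });
    window.electronApi.send('get-states-file');
  }
};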

Once the event gets published, the Electron main process will read the given file. Then it will reply with the contents of the file.

Figure 10 Electron Vue
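And a sketch of the main-process handler that reads the file and replies with its contents (again, the channel names and the plain-text file are assumptions of this sketch):

// main.js – read the requested file and send its contents back
const { ipcMain } = require('electron');
const fs = require('fs');

ipcMain.on('get-states-file', (event) => {
  // statesFile comes from the dev/production path logic shown earlier
  fs.readFile(statesFile, 'utf8', (err, contents) => {
    event.reply('states-file-reply', err ? '' : contents);
  });
});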

The way you process your file will depend on the file type and the actual data that is contained within it. For this example, I used a text file. That said, I encourage you to use the appropriate logic to extract the contents of your file.

Closing Thoughts

Vue and Electron make an excellent duo for building cross-platform desktop applications. You can craft custom menu options that add to the user experience, and Electron and Vue allow you to develop custom events that enable more robust business logic. Finally, crucial assets such as external files can be properly processed inside an Electron/Vue solution, which helps when you need swift data processing. All in all, Electron and Vue give you the tools to build rich, customizable desktop applications.

Kubernetes has become the go-to orchestrator for running containers. At AIS, we are working with clients using Azure Kubernetes Service (AKS), applying our expertise in DevOps. One concern is delivering HTTPS certificates to containerized web applications; let’s use Azure Key Vault to achieve this.

Azure Key Vault is a cloud service that provides hardware security modules (HSMs) for securely storing and delivering secrets, encryption keys, and certificates. The Key Vault FlexVolume driver for Kubernetes offers a means for mounting certificates as files in our containers; this allows us to automate and manage HTTPS certificates for applications without requiring them to access Azure Key Vault directly.

Let’s look at how we can apply this to Kubernetes services that front containers running ASP.NET Core 3.1 applications. We’ll start with a sample application from Microsoft’s .NET Docker repo. You can find the full solution and step-by-step instructions in this aks-keyvault-certs GitHub repo.

We’ll use PFX encoded certificates in our Azure Key Vault for this demo, as they are readily loadable in .NET Core 3.1 for use in Kestrel hosting.
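Importing the PFX into Key Vault might look like this with the Az PowerShell module (the vault name, file path, and password are illustrative; the certificate name matches the keyvaultobjectnames value used below):

# Import a password-protected PFX into Azure Key Vault
$pfxPassword = ConvertTo-SecureString -String "<pfx password>" -AsPlainText -Force
Import-AzKeyVaultCertificate -VaultName "my-key-vault" `
    -Name "aks-https" `
    -FilePath ".\aks-https.pfx" `
    -Password $pfxPassword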

There are a few important details to note:

  1. You can retrieve a certificate from Azure Key Vault using the certificate, key or secret object types. To get the full private key certificate, you need to use the “secret” object type.
  2. If you import a PFX encoded certificate into Azure Key Vault, getting its secret will return the full PFX file; however, since the API return value is a string and a PFX is a binary file format, the result must be base64 decoded.
  3. If you import a certificate using the text encoded PEM format, it is returned as-is and base64 decoding is unnecessary; however, .NET Core does not currently support loading private-key PEM certificates.
  4. Since we’ve already authenticated to Key Vault, the resulting PFX file mounted in our container no longer requires the PFX password to load.

Knowing this, if we import a PFX certificate into Azure Key Vault, we can configure our Kubernetes Deployment YAML to create a volume containing a file with the certificate secret's contents. For example:

volumes:
  - name: aks-keyvault-aspnetcore-httpscert
    flexVolume:
      driver: "azure/kv"
      secretRef:
        name: kvcreds
      options:
        # update with your key vault name
        keyvaultname: "YOUR KEY VAULT NAME"
        # update with your AAD tenant
        tenantid: "YOUR KEY VAULT TENANT"
        # The name of the object in Key Vault
        keyvaultobjectnames: "aks-https"
        # Use "secret" to get the full cert
        keyvaultobjecttypes: secret
        # This becomes the file name on the mount.
        keyvaultobjectaliases: "https.pfx.base64"

Then, in our pod specification we can mount the volume and set an environment variable for its path:

env:
  # Set an environment var to the cert path for
  # the application to use.
  - name: HTTPS_CERTIFICATE_PATH
    value: "/certs/https.pfx.base64"
volumeMounts:
  # Mount the key vault volume to /certs
  - name: aks-keyvault-aspnetcore-httpscert
    mountPath: /certs
    readOnly: true

Finally, in our ASP.NET Core application, we configure Kestrel to use the mounted certificate file for HTTPS endpoints:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseStartup<Startup>()
                .UseKestrel(options =>
                {
                    // configure Kestrel with our HTTPS certificate
                    options.ConfigureHttpsDefaults(ConfigureHttps);
                });
        });

private static void ConfigureHttps(HttpsConnectionAdapterOptions options)
{
    try
    {
        // When we get the certificate from Key Vault as a secret,
        // it provides the entire PFX file but without the password.
        // Since PFX is a binary format and a secret is a string,
        // it is base64 encoded. So we read in the text file and convert
        // it to the bytes to initialize the X509Certificate2.
        var certPath = Environment.GetEnvironmentVariable("HTTPS_CERTIFICATE_PATH");
        if (!string.IsNullOrEmpty(certPath))
        {
            var certString = System.IO.File.ReadAllText(certPath);
            var certBytes = Convert.FromBase64String(certString);
            var httpsCert = new X509Certificate2(certBytes);

            Console.WriteLine($"HTTPS cert Subject:    {httpsCert.Subject}");
            Console.WriteLine($"HTTPS cert Thumbprint: {httpsCert.Thumbprint}");

            // set the Kestrel HTTPS certificate
            options.ServerCertificate = httpsCert;
        }
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"unable to load https cert: {ex}");
        throw;
    }
}

See the full working solution in the aks-keyvault-certs repo.

With this approach, we can automate managing HTTPS certificates with Azure Key Vault and delivering them to Azure Kubernetes Service pods running ASP.NET Core applications.

Introduction

As enterprises start to utilize Azure resources, even a reasonably small footprint can begin to accumulate thousands of individual resources. This means that the resource count for much larger enterprises could quickly grow to hundreds of thousands of resources.

Establishing a naming convention during the early stages of establishing Azure architecture for your enterprise is vital for automation, maintenance, and operational efficiency. For most enterprises, these aspects involve both humans and machines, and hence the naming should cater to both of them.

It would be too naive and arrogant to propose a one-size-fits-all naming convention. Each enterprise has its own unique culture, tools, and processes. So, here are seven rules for scalable and flexible Azure resource naming conventions. To emphasize, these are rules for establishing naming conventions and not the actual naming convention itself.

Rule #1: Break them Up

  • Break up resource names into segments. Each segment may contain one or more characters to indicate a specific attribute of the resource.
  • For example: the resource name cte2-sales-prod-rgp has four segments. The first segment combines Contoso [ct] and East US 2 [e2]; the remaining segments identify the Sales application [sales], the production environment [prod], and the Resource Group resource type [rgp].
    Why This Rule: Logically partitioning resource names into segments allows both machines and humans to comprehend resource information.

Rule #2: Make them Uniquely Identifiable

  • Every resource should have a unique name. Meaning, a name should only belong to a singular resource. Do not hesitate to add additional segments to make the name unique.
  • Ideally, the name should be unique globally across Azure, but if that is too hard to achieve, then at a minimum, it must be unique across all Azure Subscriptions under your Azure AD Tenant.
  • For our Contoso example from Rule #1, using a couple of characters that identify the enterprise increases the chances of Azure-wide uniqueness. In cte2-sales-prod-rgp, [ct] represents the Contoso enterprise. The other segments, as explained in Rule #1, also increase uniqueness.
    Why This Rule: Following this rule will eliminate misidentification and resource name conflicts.

Rule #3: Make them Easily Recognizable

  • Names must convey ordinary but critical pieces of information about the resource. This rule also serves as a backstop to Rule #1; taking Rule #1 to an extreme, one might be tempted to use something like a GUID to name the resource.
  • The information may include Azure Region, Environment or Environment Category, Resource Type, etc. to name a few.
  • For our Contoso example, each segment helps with the identification of Azure Region, Application, Environment, and Resource type. All good things for recognizing the resource.
    Why This Rule: Following this rule will eliminate needing a lookup to get information, as the information is embedded in the name itself. Do not use random name generation such as GUIDs, as it might generate a unique name, but it would serve no other purpose.

Rule #4: Make Exceptions Obedient

  • Some resources may not want to fit into the convention. For those resources, establish a convention for exceptions. Don’t let exceptions dictate the overall convention.
  • For example: Storage account names cannot contain non-alphanumeric characters. So, if your convention happens to use a dash to separate segments, drop the dash for Storage account names, but don't drop it for all other resource names.
    Why This Rule: Following this rule prevents a Convention that is too rigid and draconian, leading to convoluted and confusing names.

Rule #5: Know When To Stop

  • Initially, establish a naming convention for high-level resources and maybe one level deeper. Do not try to establish a naming convention for resources that are three, four, or five levels deep within a resource.
  • If there is a need, let the convention for those lower levels be established by folks who have the expertise and happen to work with them daily.
  • For example, establish a convention for Storage accounts, but do not go deeper into naming Containers, Blobs, and Tables.
    Why This Rule: It is impossible to know everything about every resource type used by an enterprise. Leaving room for future extensions is essential for resilient and scalable naming conventions. Your future self and your colleagues will thank you for it.

Rule #6: Keep Them Handsome, Pretty & Melodic

  • Names created by convention should be pleasing to the eye and melodic to the ears. This means you should pay special attention to the following:
    • Acronyms
    • Segment sizes
    • Juxtaposition of segments
    • Sequencing of segments
    • Separators
  • Go back to our Contoso example and see how you can improve it so that it lives up to Rule #6.
    Why This Rule: You will live with the names for a long time and spend a lot of time with them. So you might as well make them a pleasure to work with.

Rule #7: Toot Your Horn, But With Open Ears

  • Document your convention where it is easily accessible and searchable such as a central Wiki. Present your convention at every opportunity. Demo real-life excellent and bad examples. Write blogs and make videos to explain.
  • But, always keep an open mind. Listen to feedback and be open to refining when needed.
    Why This Rule: Your established naming pattern is only as good as its last use. So practice, preach, persuade, push, peddle, and promulgate, but never be pedantic.

These rules have been battle-tested in several large enterprises over the past decade, so follow them to establish a flawless Azure naming convention.

Recently I completed a project that was challenging in the best of ways. We were asked to study and understand the client’s current system to write their next system requirements. The client asked us to glean system requirements from reviewing the system, speaking with users, and combing through as much documentation as possible.

The task felt monumental – we needed to understand the system as it was, but with an eye for improving the new system. Documentation was sparse. The system contained three sub-systems, managed and used by individuals at several locations across the country. Many end-users were experts, having learned a system that was not very user-friendly; they often relied on their historical knowledge and experience to accomplish their tasks. Also, outside tools such as chats, emails, Excel sheets, and other applications were often leveraged due to dated and incomplete functionality.

This was not my first time trying to take a big set of somewhat nebulous ideas and translate them into functional system requirements. The process is a bit of an art, and nothing can replace experience with this kind of task, but these repeatable processes will help you take conversations and create requirements. If you have been tasked to manage or participate in a study to develop requirements, these steps could help you succeed.

1. Have the right team in place

It would be best to have innovators on your team who can take a big-picture problem and break it down into bite-sized, usable pieces. It isn't easy to look at a resume and identify a mindset like this. When putting together your team, have a gifted project manager or technical lead who can distribute tasks to optimize individual strengths.

Consider including an experienced tester that can guide completeness and understanding of requirements and who will enjoy clicking through the entire system and documenting the functionality. Include system engineers and architects who can quickly understand the current system and assess what is required to meet functionality versus technical limitations. Include a user experience researcher to develop questionnaires and interact with users.

Be careful not to choose only individuals with a lot of experience with the current system. This can often bias your results and limit your team’s ability to explore and learn.

Assign sections of work to different teammates. System architects should concentrate on writing requirements for system dependencies, input/output requirements, existing architecture, and potential improvements to the design. Security information specialists can assist with documenting any guidance, laws, or conditions that the design must consider and with writing the system requirements to meet them. Software engineers can concentrate on frameworks, limitations, and expectations that the software will need to meet. User experience personnel should focus on user functions. Know what the system will be and bring in the experts to help. For example, our client considered automation, machine learning, and cloud architecture as part of their solution. Including individuals who were excited to research these items and knew where to start was critical to the work.

2. Be as selective as possible with who interacts with the end-users on the team

Ideally, you will have user experience specialists leading the charge of any interactions with users. Their focus is to understand the user through observations, focus groups, and questionnaires. There are essential, subtle practices that a user experience specialist invokes to ensure quality data is gathered while interacting with the users. Other teammates can join in on observations and discussions, but their primary task should be to learn and take notes.

Alternatively, if you know you will be meeting with a specific audience (for example, the database administrators of the current system), bring a teammate that has experience with the technology being discussed.

3. Learn as much about the system as you can prior to user interactions

I have found the best place to start is training material and help documentation. To understand the functions of the system as a user, you must try to become the user. Explore the training course on your own time, if you can, before meeting with users. Click every button, explore every menu. Write observations about interactions, patterns in design, and lingo.

4. Optimize user interactions

Ask for a representative user group. When talking with your stakeholders, project management, or clients, ask for a diverse user group. I specifically like to ask for users ranging from very experienced to junior, including users who are resistant to change or frustrated with the current system. It is ideal to have contacts for 3-5 users for each system role, if possible.

Meet one-on-one with users and ask to observe. This is one of the biggest challenges I run into. Managers of the project have the best intentions, so they will often ask to have just a big group meeting so "everyone can be heard." Still, presentations and discussions often lead to a disjointed and incomplete view of the system. Ask if you can sit with your users to observe how they use the system. It is okay to ask to repeat an observation more than once as your understanding of the system increases.

Assume nothing, ask the obvious, and seek understanding. Even if you think you know an answer, ask the obvious questions when working with the users. It is essential to hear from the user how they will accomplish a task, where they have issues with the system, or how they work around the design issues. I often see individuals who are not experienced in user observation want to jump in and show their understanding. Being quick to demonstrate your knowledge may shut your users down or have them skip essential steps in their tasks.

Ask pointed questions but be flexible. It's great to have a planned list of items, but be adaptable in any user interactions. Show the user you are hearing them by making sure your questions make sense and add to the context of the conversation. Just running through a list of items can turn users off by boring them or making them feel you are only there to "check the box." Specifically, I like to ask the following:

  • What are the critical functions that we must ensure the system has? If a system exists, follow-up by asking them to show you how they accomplish those critical functions.
  • What would you say your role is regarding this system?
  • What tasks are you responsible for? Can you show me how you accomplish these?
  • Who do you interact with to accomplish the tasks?
  • What about the system causes problems? If possible, focus on the system, not the users. You will still often be able to tease out problems and errors that the users make through these questions.
  • If you were training me, what would you want me to make sure I understood?
  • What would you like to improve about the current system?

Parrot back what you think the user has said. This is important for two reasons:

  1. It shows the user you are listening and invested in their feedback and knowledge.
  2. It gives you an opportunity to correct misunderstandings.

Avoid getting into implementations and solutions with your users. It is human nature to want to jump to "just add a button that lets me do X, Y, Z." Though it is fine for a user to give this type of feedback, focus on the need, not the implementation, during these discussions.

5. After your meeting with the users, immediately start organizing your notes

I need time to write things down and organize them on my own. I will often write the same set of notes twice: once in the order of the conversation, and a second time by grouping information under functionality or tasks.

You can work with your team during this time to consider card-sorting exercises, which are often leveraged in designing the organization of a system. Write down as many functions as you know the system must have, and then group them. Allow everyone on the team to participate to see what trends arise and discuss where the team disagrees. At first, just try to group items; you can decide on the hierarchy later. Questions that will help you group information for requirements writing:

  1. What must the users create, complete, pass along, edit, share?
  2. What permissions are necessary to protect, create, and manage data in the system?
  3. Does the client want you to consider off-the-shelf products? If so, do you inherit specific behavior or limitations based on this choice?
  4. What are the critical tasks for the system?

6. Create parent functions

With this client, there were a lot of details from multiple meetings, observations, and documents. It was essential to create high-level “parent” functions (for example: Create a product) and then link the parent function to all the supporting needs. The parent functions are called different things depending on your goal; these may be key performance parameters, functional requirements, capability statements, or system functions.

Create flow diagrams, user personas, use cases, and prototypes. These are great tools to figure out if you understand something from beginning to end. Each has its pros/cons. My personal favorites are use cases and prototypes.

7. Identify research tasks

I have yet to get every piece of information I need in the first round of document reviews and user interviews. By creating diagrams and grouping requirements together, you will start to see where your data is weak. If your team is large enough to support the effort, break up tasks by expertise. Trade off tasks and documentation so that a fresh set of eyes looks at existing information to find what you are missing. Realize that with each pass-through, your understanding will increase, so it is worth revisiting information.

8. Write, review, and revise

Review your work with the appropriate users. Once you have a basic idea of the system’s critical functions, ask to meet back with the users. Ideally, this interaction would be led by the user experience expert on your team. The goal will be to ensure understanding, so welcome critical feedback. When users see things like a flow diagram or a use case, they will often remember something initially left out. This is an excellent way to ensure every function is understood and documented.

Enter the writing portion of your requirements with an open mind. On our project, we wrote and rewrote requirements, changed the groupings of parents and children, and added sub-sections multiple times.

Write S.M.A.R.T. shall statements: specific, measurable, attainable/achievable, realistic, and timely requirements. Once you have a good idea of what the requirements should be, write them to be as transparent as possible. Your tester can be a tremendous asset to the team. Ask them to review the requirements and let you know if they could write a test script off each requirement. Let them help you determine where there is ambiguity.
For each critical piece of information, you will write one requirement in the form of a "shall" statement. For example, "The system shall allow users to withdraw money from accounts they have the appropriate permissions to access." Write all desired behavior in the form of a "will" statement. For example, "The system will include audio signals for successful withdrawal of money."

If writing functional requirements, avoid system-specific lingo. Focus on what the user must accomplish and how. Avoiding lingo and existing system labels, and describing the flow of how the user will complete a task end-to-end, are especially essential when you are building on and improving a current system.
For example, we wanted to avoid the use of role names in the requirements with our client, so we stated things like "user with the appropriate permissions" instead of "product manager." You can capture the need without limiting the new system to roles that may be unwarranted.

Include prototypes, user personas, experience flows, etc., as part of the final document to aid understanding of the requirements. This is especially important when you are suggesting improvements or new ideas for the next system.
It is difficult to translate complex ideas between people, and the old saying that "a picture is worth a thousand words" holds true in this case as well.

9. Present the information to stakeholders as part of the project wrap-up

If possible, don't just hand over the completed requirements document and walk away. Our requirements document was well over 100 pages and full of information critical to our customer's next system. Consider walking through the document with your stakeholders as a final step. Point out areas that may need more consideration and design. Ensure they understand the point of all parent requirements and the philosophy behind the grouping and organization of the requirements.

Having the task of creating or updating requirements can be a daunting process. It can be incredibly difficult to take many conversations and determine how to translate those into written, concise requirements.

I am excited when the client allows me to influence the requirements. I hope that by sharing my insights, you will gain confidence in translating conversations into requirements. By following these suggestions and guidelines, you will regularly engage with your users, optimize your team's time by focusing on their areas of expertise, and deliver feature-complete, intuitive projects to your clients.

It would not be unfair to say that Azure Storage is one of Azure’s most essential services. Almost all the other services in Azure use Azure Storage in some shape or form.

AIS has been involved with Azure since its beta days, under the code name Red Dog. We've seen Azure Storage grow from a service with a limited set of features and capabilities to a service with an extensive collection of features, supporting the storage requirements of small organizations and large enterprises alike.

Given the extensive feature set, we have seen our customers sometimes struggle to choose the right kind of storage for their needs. Furthermore, at the time of this blog, it is not possible to change the type of storage account once it is created.

We intend to clear up some of the confusion through this blog post by providing a matrix of features available based on the kind of storage account.

Storage Account Kind

When you create a storage account (in the portal or by other means), you are asked to make many selections (like resource group, location, etc.). Among them, three vital selections are:

  • Desired Performance Level
  • Account Type
  • Replication/Data Redundancy

Desired Performance Level

When it comes to performance, Azure Storage offers two options: Premium and Standard. With Premium Storage, the data is stored on solid-state drives (SSDs), whereas Standard Storage uses standard hard disk drives (HDDs). Premium Storage gives you better performance in terms of IOPS (Input/Output Operations Per Second) and throughput.

Choosing the right performance level at the time of account creation is essential because, once a storage account is created with a performance level, it can't be changed; i.e., you can't change a "Standard" storage account to a "Premium" storage account and vice versa. Furthermore, not all services are supported at all performance levels. For example, if your application makes heavy use of Storage Queues or Tables, you cannot choose "Premium," as these services are not supported.

You can learn more about the storage account performance levels here.

Account Type

Next is the account type. At the time of writing, Azure Storage supports the following types of accounts:

  • General-purpose v2 accounts
  • General-purpose v1 accounts
  • BlockBlobStorage accounts
  • FileStorage accounts
  • BlobStorage accounts

Choosing the right type of account at creation time is vital because you can't convert an account's type once it's created. The only exception is that you can do a one-time upgrade from a general-purpose v1 account to a general-purpose v2 account.

Also, as with performance levels, not all features are supported by all account types. For example, "FileStorage" accounts support only the file storage service and not the blob, queue, or table services. Another example is that you can only host a static website in general-purpose v2 and BlockBlobStorage accounts.

Another important consideration in choosing the right type of storage account is pricing. In our experience, general-purpose v2 accounts are more expensive than general-purpose v1 accounts, but they offer more features.

You can learn more about the storage account types here.

Replication/Data Redundancy

Azure Storage is a strongly consistent service, and multiple copies of your data are stored to protect it from planned and unplanned events like data center failures and transient errors. At the time of writing, Azure Storage provides the following redundancy options:

  • Locally redundant storage (LRS)
  • Zone-redundant storage (ZRS)
  • Geo-redundant storage (GRS)
  • Geo-zone-redundant storage (GZRS)
  • Read-access geo-redundant storage (RAGRS)
  • Read-access geo-zone-redundant storage (RAGZRS)

Choosing the right redundancy kind becomes essential as it enables your application to be fault-tolerant and more available.

Again, not all redundancy kinds are supported for all storage account types. While it is true that you can change the redundancy kind of a storage account on the fly, it is only possible between certain redundancy kinds. For example, you can't change the redundancy kind of an account from Geo-zone-redundant storage (GZRS) to Geo-redundant storage (GRS).

Another good example is that you can convert Geo-zone-redundant storage (GZRS)/Read-access geo-zone-redundant storage (RAGZRS) to a Zone-redundant storage (ZRS) but not the other way around.

You can read more about the data redundancy options available in Azure Storage here.

Storage Feature Matrix

Based on the account type, performance level, and redundancy kind, we have come up with the following feature matrix.

Storage Feature

Using this matrix, you should be able to choose the right kind of storage account to meet your needs.

Here are some examples:

  • If you need to host static websites in Azure Storage, you can either use “Premium BlockBlobStorage (LRS/ZRS)” or “Standard General-purpose v2 (LRS/GRS/RAGRS/GZRS/RAGZRS/ZRS)” type of storage account.
  • Suppose you need to archive the data for compliance or another regulatory purpose. In that case, you can either use “BlobStorage (LRS/GRS/RAGRS)” or “General-purpose v2 (LRS/GRS/RAGRS)” type of storage account.
  • If you need premium performance with your page blobs, you can either use a "General-purpose v2 (LRS)" or "General-purpose v1 (LRS)" type of storage account.
  • If you need a premium performance of your file shares, the only option you have is the “Premium FileStorage (LRS/ZRS)” type of storage account.
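If you provision accounts with the Az PowerShell module, these selections map directly onto parameters: the performance level and redundancy combine into -SkuName, and the account type is -Kind. A minimal sketch (the resource group, account name, and location below are illustrative):

# Create a Standard general-purpose v2 account with geo-zone-redundant storage
New-AzStorageAccount -ResourceGroupName "my-rg" `
    -Name "mystorageacct001" `
    -Location "eastus2" `
    -SkuName "Standard_GZRS" `
    -Kind "StorageV2"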

Summary

We hope that you find this blog post useful and will use the feature matrix the next time you have a need to create a storage account.

Feel free to reach out to us if we can be of any assistance with your Azure projects. You can contact us online here!