Closing a Chapter

After serving in the Army for six years, I was terrified by the thought of finding a job and working as a civilian after separating. All I had ever known was the Army; it was the only career I had up until that point. I honestly had no idea what to expect once I separated. However, I consider myself extremely fortunate that I was able to go through the Microsoft Software and Systems Academy (MSSA) and then, as I was preparing to graduate from the course, secure a job interview with Applied Information Sciences (AIS).

The MSSA program truly changed my life. Not only did MSSA provide me with the knowledge and technical skills needed to be successful in the IT field, but the remarkable teachers and mentors there also prepped me for job interviews, showed me how to network and leverage my LinkedIn profile, and taught me how to write a proper resume. All these components made a massive difference as I was preparing to re-enter civilian life.

Joining the AIS Team

Fortunately, as I was getting ready to graduate from the program, one of my peers at MSSA, Taylor Hill, referred me to AIS after his interview with their team. I will always be extremely grateful that Taylor recommended me to AIS, as a recruiter quickly contacted me. I have to say, the entire recruitment process was excellent. My recruiter was very professional, responsive, and informative. Shortly after speaking with her, I had my first technical interview, and my second and final interview was scheduled only a few days later. I don’t know if there are even words to describe how nervous I was for those two interviews. I had done a few interviews before this one, but after researching AIS, I knew this was where I wanted to work, and I was eager to succeed. Even though I was anxious, I was able to leverage the skills I learned from MSSA and do well during the interviews. A couple of weeks later, I was thrilled to find out that AIS wanted to hire me! I was so relieved and grateful for the opportunity to work for them.

As my first day of work at AIS approached, I began to feel imposter syndrome creeping in. I had no prior experience in the IT field aside from the MSSA program. Nevertheless, AIS helped to ease this feeling and set me up for success. First, they assigned me a mentor – someone working on the same project whom I could shadow and learn from. This was extremely helpful in my first few weeks at AIS. I truly learned a lot from my mentor, and working alongside him enabled me to gain confidence and be successful from the very beginning. They were also excellent at ensuring that I was assigned the appropriate level of work.

Additionally, they gave me some easy tasks that helped me further build my confidence and then slowly increased the difficulty and workload at a pace I was comfortable with. Second, I was almost immediately enrolled in the Azure Infrastructure Bootcamp that senior AIS employees teach. The course provided me with an overview of Azure fundamentals through setting up templates, virtual machines, governance, and high availability. It was also an excellent opportunity to meet other AIS employees with different backgrounds and varying experience levels.

Conclusion

In all, just in my first couple of weeks, I could tell that AIS truly wants its employees to be successful and continually strives to provide us with ample opportunities to learn and grow. Getting out of the military and finding a career equally capable of supporting my family was very intimidating. However, the Microsoft Software and Systems Academy gave me the tools and knowledge necessary to excel in the IT field, while my mentor and the Azure Bootcamp that AIS provided put me on the path to success at the company. I feel very fortunate to work for such a great company, in a field that I genuinely enjoy and that is ever-growing. If I could do it all over again, I would.

A DoD client requested support with automated file transfers. The client has files placed in a common folder that can be accessed via the standard File Transfer Protocol (FTP). Given the FTP server’s connection information, the client asked that the files be moved to an Amazon Web Services (AWS) S3 bucket that their analysis tools are configured to use.

Automating the download and upload process saves users time by allowing data files to be transferred on a schedule. This can be achieved using a combination of AWS Lambda and EC2 services. AWS Lambda provides a plethora of triggering and scheduling options as well as the ability to create EC2 instances. By creating an EC2 instance, a program or script can avoid Lambda’s limitations and perform programmatic tasks such as downloading and uploading. Additionally, this can all be done with Terraform to allow for deployment in any AWS environment.

Writing a Script to Do the Work

Before turning to Terraform or the AWS Console, create a script that can log in to the FTP server, fetch/download files, and copy them to an S3 bucket. This can be done effectively with Python’s built-in ftplib and the AWS boto3 library. There are various libraries and examples online that show how to set up a Python script to download files from an FTP server and use the boto3 library to copy them to S3.

When writing the script, consider that file size will play a significant role in how ftplib and boto3’s copy functions behave. Anything over 5 GB will need to be chunked from the FTP server and uploaded using the AWS API’s multipart upload methods.
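Below is a minimal sketch of such a script. It is not the project’s actual ftp_to_s3.py; the host, credentials, bucket, and paths are placeholders, and boto3’s transfer configuration is used so that large objects are uploaded in multiple parts automatically.

# Minimal sketch of an FTP-to-S3 transfer; host, credentials, bucket, and paths are placeholders.
import ftplib
import os

import boto3
from boto3.s3.transfer import TransferConfig

FTP_HOST = "ftp.example.com"        # placeholder
FTP_USER = "anonymous"              # placeholder
FTP_PASS = "anonymous@example.com"  # placeholder
S3_BUCKET = "my-example-bucket"     # placeholder


def transfer_file(remote_dir: str, filename: str) -> None:
    local_path = os.path.join("/tmp", filename)

    # Download the file from the FTP server in binary mode, writing chunks to disk.
    with ftplib.FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USER, FTP_PASS)
        ftp.cwd(remote_dir)
        with open(local_path, "wb") as fh:
            ftp.retrbinary(f"RETR {filename}", fh.write)

    # upload_file switches to multipart uploads above the configured threshold,
    # which is required for objects larger than 5 GB.
    config = TransferConfig(multipart_threshold=64 * 1024 * 1024)
    boto3.client("s3").upload_file(local_path, S3_BUCKET, f"incoming/{filename}", Config=config)


if __name__ == "__main__":
    transfer_file("/outgoing", "example.csv")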

Creating an Instance with Our Script Loaded

Amazon provides Amazon Machine Images (AMIs) to start up a basic instance. The provided Linux x86 AMI is the perfect starting place for creating a custom instance and, eventually, a custom AMI.

With Terraform, creating an instance is like creating any other module: it requires Identity and Access Management (IAM) permissions, security group settings, and other configuration. The following shows the items needed to create an EC2 instance with a key pair, permission to write to S3, Python 3.8 and the required libraries installed, and the file-transfer script copied into the ec2-user home directory.

First, generate a key pair: a private key and a public key used to prove identity when connecting to an instance. The benefit of creating the key pair in the AWS Console is access to the generated .pem file. Having a local copy allows you to connect to the instance from the command line, which is great for debugging but not great for deployment. Instead, Terraform can generate the key pair and store it in its state, which avoids passing sensitive information around.

# Generate a ssh key that lives in terraform
# https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key
resource "tls_private_key" "instance_private_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "instance_key_pair" {
  key_name   = "${var.key_name}"
  public_key = "${tls_private_key.instance_private_key.public_key_openssh}"

}

Next, set up the security group the instance will run in. A security group acts as a virtual firewall for your EC2 instances, controlling incoming and outgoing traffic. Open port 22 for Secure Shell (SSH), which Secure Copy Protocol (SCP) also uses to copy the script file(s) to the instance, then open other ports for ingress and egress as needed, e.g., 443 for HTTPS traffic. The security group requires the vpc_id for your project; this is the Virtual Private Cloud (VPC) the instance will run in, so the security group should match your VPC settings.

resource "aws_security_group" "instance_sg" {
  name   = "allow-all-sg"
  vpc_id = "${var.vpc_id}"
…
  ingress {
    description = "ftp port"
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
  }
…
}

The IAM policy for the instance will require PutObject access to the S3 bucket. The Terraform module will need the S3 bucket name as a variable, and an instance profile must be created. When creating the IAM role in the AWS Console, an instance profile is created automatically, but in Terraform it has to be defined explicitly.

#iam instance profile setup
resource "aws_iam_role" "instance_s3_access_iam_role" {
  name               = "instance_s3_access_iam_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
resource "aws_iam_policy" "iam_policy_for_ftp_to_s3_instance" {
  name = "ftp_to_s3_access_policy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
          "s3:PutObject",
          "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::${var.s3_bucket}"
  },
}
EOF
}

resource "aws_iam_role_policy_attachment" "ftp_to_s3" {
  role       = aws_iam_role.instance_s3_access_iam_role.name
  policy_arn = aws_iam_policy.iam_policy_for_ftp_to_s3_instance.arn
}

resource "aws_iam_instance_profile" "ftp_to_s3_instance_profile" {
  name = "ftp_to_s3_instance_profile"
  role = "instance_s3_access_iam_role"
}

Defining the instance to start from, and to create the custom AMI from, requires the following Terraform variables:

  • ami – the ID of the Linux x86 AMI
  • instance_type – the type of instance, e.g., t2.micro
  • subnet_id – the ID of the subnet, within your VPC, that the instance will run on
  • key_name – the name of the key; it should match the key-pair name generated above or the one from the AWS Console (a variable reference works here too)

Define the connection and provisioner blocks to copy the Python file-transfer script to the ec2-user home folder. The connection uses the default ec2-user account with the secure key, and the provisioner then copies over the Python file. If using the key downloaded from the AWS Console, point to the file instead: private_key = "${file("path/to/key-pair-file.pem")}".

Complete the instance setup with the correct Python version and libraries. The user_data attribute sends a bash script to install whatever is needed: in this case, updating Python to 3.8 and installing the boto3 and paramiko libraries.

# Instance that we want to build out
resource "aws_instance" "ftp-to-s3-instance" {
  ami           = var.ami
  instance_type = var.instance_type
  subnet_id     = var.subnet_id
  key_name               = "${var.key_name}" #use your own key for testing
  vpc_security_group_ids = ["${aws_security_group.instance_sg.id}"]
  iam_instance_profile   = "${aws_iam_instance_profile.ftp_to_s3_instance_profile.id}"

  # Copies the python file to /home/ec2-user
  # depending on how the install of python works we may need to change this location
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = "${element(aws_instance.ftp-to-s3-instance.*.public_ip, 0)}"
    private_key = "${tls_private_key.instance_private_key.private_key_pem}"
  }

  provisioner "file" {
    source      = "${path.module}/ftp_to_s3.py"
    destination = "/home/ec2-user/ftp_to_s3.py"
  }

  user_data = <<EOF
#!/bin/sh
sudo amazon-linux-extras install -y python3.8
python3.8 -m pip install -U pip
pip3.8 --version
pip3.8 install boto3 
pip3.8 install paramiko 

EOF
}

The last step is to create the custom AMI. This will allow our Lambda to duplicate the setup and launch as many of these instances as needed.

resource "aws_ami_from_instance" "ftp-to-s3-ami" {
  name               = "ftp-to-s3_ami"
  description        = "ftp transfer to s3 bucket python 3.8 script"
  source_instance_id = "${aws_instance.ftp-to-s3-instance.id}"

  depends_on = [aws_instance.ftp-to-s3-instance]

  tags = {
    Name = "ftp-to-s3-ami"
  }
}

Creating Instances on the Fly in Lambda

Using a Lambda function that can be triggered in various ways is a straightforward way to invoke EC2 instances. The following Python code shows how to pass environment variables into an EC2 instance, both as environment variables in the instance and as arguments to the Python script. The variables needed in the Python script for this example are as follows, with a hypothetical test event sketched after the list:

  • FTP_HOST – the URL of the FTP server
  • FTP_PATH – the path to the files on the FTP server
  • FTP_USERNAME, FTP_PASSWORD, FTP_AUTH_KEY – used for any authentication with the FTP server
  • S3_BUCKET_NAME – the name of the bucket for the files
  • S3_PATH – the folder or path the files should be uploaded to in the S3 bucket
  • files_to_download – for this purpose, a Python list of dictionary objects, each with the filename and size of a file to download
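For illustration only, a test event carrying these values might look like the following sketch; every value is a placeholder rather than anything from the actual project.

# Hypothetical Lambda test event for this workflow; all values are placeholders.
sample_event = {
    "ftp_url": "ftp.example.com",
    "ftp_path": "/outgoing/reports",
    "username": "ftp_user",
    "password": "ftp_password",
    "auth_key": "",
    "s3_bucket": "my-example-bucket",
    "s3_path": "incoming/reports",
    "files_to_download": [
        {"filename": "report_001.csv", "size": 10485760},
        {"filename": "report_002.csv", "size": 6442450944},
    ],
}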

For this example, the logic for checking for duplicate files is done before the Lambda that invokes the transfer instance is called. This allows the script in the instance to remain singularly focused on downloading and uploading. It is important to note that the files_to_download variable is converted to a string and the quotes are made into double quotes. Otherwise, the single quotes are stripped when the value is passed to the EC2 instance.
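The duplicate-check code itself is not part of this post; the following is a minimal sketch of what that pre-check might look like, assuming the FTP listing is already available as a list of filename/size dictionaries. The function and argument names are illustrative only.

# Hypothetical pre-check: drop files that already exist under the S3 destination prefix
# before invoking the transfer instance. The bucket, prefix, and listing are placeholders.
import boto3


def filter_new_files(ftp_listing, bucket, prefix):
    """ftp_listing is a list of {"filename": ..., "size": ...} dictionaries."""
    s3 = boto3.client("s3")
    existing = set()
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            existing.add(obj["Key"].rsplit("/", 1)[-1])
    return [f for f in ftp_listing if f["filename"] not in existing]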

The init_script variable uses the passed-in event variables to set up the environment variables and Python script arguments. Just like when creating the instance, the user_data script is run by the instance’s root user. The root user will need to use the ec2-user’s Python to run our script with the following bash command: PYTHONUSERBASE=/home/ec2-user/.local python3.8 /home/ec2-user/ftp_to_s3.py {S3_PATH} {files_to_download}.

    # convert to string with double quotes so it knows it's a string
    files_to_download = ",".join(map('"{0}"'.format, files_to_download))
    vars = {
        "FTP_HOST": event["ftp_url"],
        "FTP_PATH": event["ftp_path"],
        "FTP_USERNAME": event["username"],
        "FTP_PASSWORD": event["password"],
        "FTP_AUTH_KEY": event["auth_key"],
        "S3_BUCKET_NAME": event["s3_bucket"],
        "files_to_download": files_to_download,
        "S3_PATH": event["s3_path"],
    }
    print(vars)

    init_script = """#!/bin/bash
                /bin/echo "**************************"
                /bin/echo "* Running FTP to S3.     *"
                /bin/echo "**************************"
                export S3_BUCKET_NAME={S3_BUCKET_NAME}
                export FTP_PATH={FTP_PATH}
                export FTP_HOST={FTP_HOST}
                export FTP_USERNAME={FTP_USERNAME}
                export FTP_PASSWORD={FTP_PASSWORD}
                PYTHONUSERBASE=/home/ec2-user/.local python3.8 /home/ec2-user/ftp_to_s3.py {S3_PATH} {files_to_download}
                shutdown now -h""".format(
        **vars
    )

Invoke the instance with the boto3 library, providing the parameters for the custom AMI, instance type, key pair, subnet, and instance profile, all of which are defined in Terraform and passed to the Lambda as environment variables. Optionally, increase the volume size from the default 8 GB to 50 GB for larger files.
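For illustration, these values might be read from the Lambda’s environment when the module loads, outside the handler; the environment variable names below are assumptions rather than part of the actual deployment.

# Hypothetical environment variables set on the Lambda by Terraform.
import os

import boto3

AMI = os.environ["AMI_ID"]
INSTANCE_TYPE = os.environ.get("INSTANCE_TYPE", "t2.micro")
KEY_NAME = os.environ["KEY_NAME"]
SUBNET_ID = os.environ["SUBNET_ID"]
INSTANCE_PROFILE = os.environ["INSTANCE_PROFILE_ARN"]

ec2 = boto3.client("ec2")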

    instance = ec2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        SubnetId=SUBNET_ID,
        MaxCount=1,
        MinCount=1,
        InstanceInitiatedShutdownBehavior="terminate",
        UserData=init_script,
        IamInstanceProfile={"Arn": INSTANCE_PROFILE},
        BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 50}}],
    )

Conclusion

After deploying to AWS, Terraform will have created a Lambda that invokes an EC2 instance running the script passed to it during its creation. Triggering the Lambda function to invoke the custom instance can be done from a DynamoDB Stream update, scheduled timer, or even another Lambda function. This provides flexibility on how and when the instance is called.

Ultimately, this solution provides a flexible means of downloading files from an FTP server. Changes to the Lambda invoking the instance could include splitting the file list across several smaller instances that run simultaneously, moving files to the AWS S3 bucket faster. This greatly depends on the client’s needs and the cost of operating the AWS services.

Changes can also be made to the script that downloads the files. One option would be to use a more robust FTP library than Python’s built-in ftplib. Larger files may require more effort, as FTP servers can time out when network latency and file sizes come into play. ftplib does not auto-reconnect, nor does it keep track of incomplete file downloads.
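One way to work around those limitations is a small retry wrapper that reconnects and resumes a partial download using ftplib’s REST offset; the sketch below is illustrative only, with placeholder connection details.

# Illustrative resume-on-failure download using ftplib's REST offset.
# Host, credentials, and file paths are placeholders.
import ftplib
import os


def download_with_resume(host, user, password, remote_file, local_path, attempts=3):
    for _ in range(attempts):
        offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
        try:
            with ftplib.FTP(host, timeout=60) as ftp:
                ftp.login(user, password)
                with open(local_path, "ab") as fh:
                    # rest=offset asks the server to resume from the bytes already on disk.
                    ftp.retrbinary(f"RETR {remote_file}", fh.write, rest=offset)
            return True
        except (ftplib.error_temp, OSError):
            continue  # reconnect and resume on the next attempt
    return False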

The Microsoft National Security Symposium is an event that brings government decision-makers together to understand technology advancements that serve our shared missions. Microsoft, technology partners, and government agencies share their experiences and solutions for security and compliance, modernization, AI and analytics, multi-cloud deployments, and more.

This year, Wade Allen, VP of Defense at AIS, sat down with David Porter, AE for the National Security Group at Microsoft, to talk about how our partnership, mission focus, and agile approach have yielded successful outcomes across the Federal government. In case you missed it, we’ve recapped the key takeaways from the discussion in this blog, including the following words of wisdom:

  • Always keep the mission at the forefront no matter what.
  • An agile mindset and practices are required to move quickly.
  • Embrace partnerships for a thriving ecosystem where you can leverage each partner’s strengths to get to the right mission solution. It’s all about the relationship, just like a marriage. You may not always agree, but there is open communication, everyone has a voice, and everyone must listen. This creates a good balance between teams that can adapt to the speed of the mission.

The Power of Partnership

Finding a partner and cloud service provider (CSP) with a good relationship can greatly impact your ability to achieve mission results. A successful partner extends the products, skill sets, and value of the cloud provider to drive the mission forward.

As a Microsoft partner since 1994, AIS has been at the forefront of industry-leading solutions within both the public and private sectors. Our partnership with Microsoft is based on close relationships with leadership, account, engineering, and product teams; award-winning solutions on Azure and Power Platform; and certified capabilities and advanced specializations. Being able to reach back to Microsoft’s engineering group to inform the path forward for our customers, especially with so much uncharted territory, allows us to accelerate mission results.

We have worked closely with Microsoft on many first-of-their-kind projects across military departments and the 4th Estate. AIS is working with Microsoft overseas to co-create solutions for the warfighter on the Azure platform. We are working on disconnected cloud solutions and deployments across Microsoft’s IL2, IL4, IL5, and IL6 environments.

After decades of partnership, we work better together and use each other’s strengths to provide innovative solutions that trickle down from the cloud provider and partner through the organization to end-users, equipping all levels with the solutions and data they need when they need it.

A Business Model that Serves

The AIS approach builds on a sound foundation of Azure expertise and past performance. Our experience has allowed us to gain a deep understanding of Microsoft technologies to deliver better solutions and meet the needs of the mission. We build and test best practices, then prepare and teach our client’s internal teams how to leverage them.

“We have been able to take the vision of the customer and tools that exist today to create a roadmap, work towards the mission goal through unknowns while creating reusable assets for best practices, and be able to move with speed.” -Wade Allen, VP of Defense

AIS has worked closely with Microsoft engineering and created a feedback loop that feeds directly into the product roadmap. These assets then find their way into end users’ hands. We understand and help shape customer needs based on their roadmap while educating them on what should be prioritized.

We “wrap our arms around our customers” to help teams adopt and embrace new technology and solutions rather than managing it all for them. Our goal is to empower our customers to keep pace with quickly changing technology. We host workshops, internal boot camps through our internal training program (aisU), and many educational sessions around cloud and other emerging technologies. Our passion is to upskill soldiers and give them the ability to build solutions and operate them independently, without relying fully on a systems integrator (SI).

SUCCESS STORY: UNITED STATES MILITARY
Learn more about how AIS helped a U.S. military branch deliver a secure polycloud infrastructure for hundreds of bases.

Getting Over the Compliance Hurdle

When it comes to the cloud, don’t get too focused on the technology. Cloud outcomes are not fully dependent on cloud technologies alone. In many cases in the federal government, compliance is the long pole in the tent. AIS has led the way in achieving ATO and has led many security and compliance firsts in Azure and AzureGov. We’ve been able to align with federal compliance changes quickly by leveraging DevSecOps processes, automating STIG processing, using native cloud monitoring tools like Azure Sentinel, and much more.

Partner Qualities that Count

When choosing your co-pilot in cloud transformation, you need a partner that can serve the mission. With the rapidly changing technology landscape and mission needs, it’s essential to be agile and flexible to pivot when challenges arise. Look for a partner that is open, honest, agile, and flexible. You don’t want a partner that is just going to tell you what you want to hear.

Partnerships like AIS and Microsoft bring extreme value to our customers because of our ability to keep up with agile business changes and developments in the cloud. We couldn’t do it without the tools and insight that Microsoft brings to the table and our talented, one-of-a-kind team at AIS.

DRIVE YOUR MISSION FORWARD
Partner with AIS on your next cloud project. Work with a team of Microsoft technology leaders to enable your teams. Contact us today.

If you want to create robust automated workflows in Azure, integrating Azure Logic Apps with Azure Functions can be a valuable solution. A logic app is a PaaS (Platform as a Service) offering with extensive options for creating powerful workflows. This service is beneficial if you need a structured series of events to support your Azure-hosted application(s). To craft workflows, you can use a “no-code” method: Azure Logic Apps offers a graphical user interface approach to creating workflows, which makes it straightforward to pair your other Azure applications with your logic app.

Why Integrate?

One of the big reasons to integrate these two services is to help support a systematic end-to-end solution. You can add your essential function(s) to your workflow in a few steps, hence creating a more robust cloud-based solution.

When your workflow process gets triggered, such as when an email reaches an inbox, your Azure Function can perform its specific business logic after the trigger fully executes. Then, the workflow can branch off and perform more actions.

Using Azure Functions in your workflow also allows you to capture the HTTP status code from the resulting function call. If a specific status code is received, such as HTTP 500, logic can be put in place to email support personnel, or the results of the failed function call can be inserted into Azure Storage. There are many possibilities.

Getting Started

When you begin integrating an Azure Function into a logic app, there are a few prerequisites: you will need to create both a logic app and a Function App in Azure. I will be using the out-of-the-box HTTP trigger Azure Function from Visual Studio, and it will simply return “Hello World” when it receives a request. Once the Azure Function service has been created and the code deployed to it, the next area of focus is setting up the logic app workflow.
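The Visual Studio template is typically C#; purely as an illustration of the same idea, a minimal Python HTTP trigger (v1 programming model, paired with a standard HTTP-trigger function.json) would look roughly like this:

# __init__.py for a hypothetical HTTP-triggered function that returns "Hello World".
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # The Logic App's Azure Function action calls this endpoint and can use
    # the returned body and status code in later workflow steps.
    return func.HttpResponse("Hello World", status_code=200)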

Setting up the Logic App Workflow

There are two ways to set up a workflow: the first is an infrastructure as code (IaC) approach, and the second is a GUI approach. For this walkthrough, I will use the GUI. The first thing you will need for the logic app is a trigger. This is the event that starts your logic app workflow. Many distinct types of triggers are available for different tasks. I will be using the timer trigger for this example.

[Screenshot: Notification frequency in Logic Apps]

After you have published your code to your Azure Function, you will be able to add it as an Azure Function action. One thing to note about the Azure Function action is that it handles your function’s authentication: if your endpoint has a function authorization level assigned, the action takes care of authenticating so you can use the function in your workflow. Now, once the trigger activates, it will call the Azure Function. From there, it is up to the function to handle the request.

[Screenshot: Azure Function request]

The Azure Function will process the request and fetch the correct data for the rest of the workflow to process. The next step is to create a condition action. This runs like a conventional if statement. After the Azure Function executes a request, we want to verify whether the request was successful.

What is neat about using logic apps is that when you use a service, like an Azure Function, it will detect specific properties from the service to use in later actions. For the condition action, we can grab the response status code of the function that just ran. Using that status code, we can create custom logic to look for the necessary code numbers. For this action, we will check for an HTTP 200 status code. When you create a condition, it automatically generates true and false scenarios. From there, you can set up what happens during each scenario.

For the next steps, we will set up one of my favorite parts of using a logic app: integrating with Outlook! There are many different Outlook tasks available. For this example, we will be using the Send Email task.

[Screenshot: Send Email Outlook task]

When you set up an Outlook task, you must first log in with your desired Outlook account. This will produce an Office 365 API connection resource in your resource group, allowing the logic app workflow to work properly with Outlook!

After you log in, you can start using Outlook tasks. For this part of the workflow, we are going to send an email to the recipient. In the body of the email, we will send the response we get from the Azure Function we called in an earlier step. However, if the Azure Function call fails, then we can return a default email message to the recipient.

[Screenshot: Azure Function email send]

This concludes the setup for the Azure Logic App workflow that now integrates with an Azure Function. To run your newly created workflow, simply navigate to the “Overview” tab of your logic app resource. From there, click on “Run Trigger.”

[Screenshot: Email send failure function in Outlook]

Congratulations! Your new workflow is now ready to run.

After the timer interval has passed, you will find a new email in your inbox from the Azure Logic App workflow.

Closing Thoughts

The beauty of using Azure Logic Apps is the customizability that it provides. The possibilities are almost endless. Whether your workflow is simple or complex, you have the tools at your disposal to carry out your systematic end-to-end flow. This post shows you one flavor of logic apps and the type of workflow you can create.