When agencies decide to move their applications to the commercial cloud, the Defense Information Systems Agency (DISA) mandates specific approvals and certifications to connect to the Cloud Access Point (CAP). In this blog post, I will describe the process for establishing connectivity from a mission application in an Azure Government IL5 environment to the DoD DISA CAP. This post is based on the DoD Cloud Connection Process Guide provided by DISA. That guide covers connecting to the DoD DISA CAP for both cloud providers (AWS, Azure, etc.) and cloud-consuming applications ("Mission Applications"). Unfortunately, I found the document to be outdated and hard to follow. 

Here are some prerequisites that you have likely already completed, or are working on, before requesting the CAP connection:

  • Designed the network architecture  
  • Identified the components used in the mission application (VM, OS, application, database, data at rest/in transit, vulnerability compliance report) 
  • Performed assessment and authorization activities, such as:
    • Started applying STIGs to all servers. This process may take time, and you may continue implementing STIGs beyond the CAP connection
    • Deployed the Host-Based Security System (HBSS) agent
    • Applied patches to all VMs
    • Obtained approval for an Interim Authority to Test (IATT) or Authority to Operate (ATO)

Registration Process 

Here is the starting point for the DISA CAP connection journey:  

  1. DISA Systems/Network Approval Process (SNAP) Registration – The SNAP database stores the required documents and provides workflow status. To obtain a SNAP account, go to https://snap.dod.mil for registration (CAC required). The registration will ask for materials such as a DoD 2875 System Authorization Access Request (SAAR) form with justification for access. The diagram below shows the detailed steps for getting a SNAP account.
  2. SNAP Project Registration
    • Register C-ITPS Project
    • Submit Consent to Monitor Agreement
    • Submit a Business Case Analysis
    • Submit an Initial Contact form
  3. Develop SNAP artifact package
    • Input the Ports, Protocols, and Services Management (PPSM) Registration Number – Obtained from the DoD PPSM Registry
    • DoD NIC IP Range – Request the IP address space from DISA Network Information Center (NIC)
    • Interim Authorization to Test (IATT) or Authorization to Operate (ATO) Memo – A formal statement received from an Authorizing Official (AO)
    • Authorizing Official (AO) Memo – AO Appointment Letter
    • Cybersecurity Service Provider (CSSP) Agreement
    • Network Diagram of the application
    • Plan of Actions & Milestones (POA&M) report
    • Security Assessment Report (SAR)
    • System Security Plan (SSP)
  4. Obtain Connection Approval
    • Submit the Package within SNAP
    • Receive Cloud Permission to Connect (CPTC) within five business days
    • Acknowledgment of Registration – DISA Cloud Approval Office (CAO) issues a CPTC Memo (or returns package for rework/resubmission)
  5. Complete Secure Cloud Computing Architecture (SCCA) Onboarding
    • Complete SCCA Onboarding Form
    • Complete DISA Firewall Request Sheet
    • Complete Express Route Steps
  6. Technical exchange meeting with DISA Engineering Team
    • Provide the service key and peering location
    • Receive the shared key from DISA

Connect to Authorized CSP-CSO (Azure)  

The steps for connecting via the DISA enterprise BCAP depend on the impact level. The following steps apply to both Impact Level 4 and Level 5. 

  1. Obtain IP Addresses – Request the DoD IP address range for the mission application from the DoD NIC 
  2. Obtain DNS names with an A record (forward lookup) and a PTR record (reverse DNS) for the following (a quick verification sketch follows this list):  
    • Application
    • Federation server (ADFS)
    • Mail server 
  3. Obtain certificates for the DNS names and a server certificate for each VM
  4. Configure the application to use the Enterprise Email Security Gateway (EEMSG)
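
Once the A and PTR records are in place, it can help to confirm that forward and reverse lookups resolve consistently before the connection review. Below is a minimal verification sketch using Python's standard socket module; the hostnames are placeholders for your application, ADFS, and mail endpoints, not names from this project.

```python
import socket

# Placeholder hostnames; substitute your registered DNS names.
hosts = [
    "app.example.mission.mil",
    "adfs.example.mission.mil",
    "mail.example.mission.mil",
]

for name in hosts:
    # Forward (A record) lookup: name -> IP address
    ip = socket.gethostbyname(name)

    # Reverse (PTR record) lookup: IP address -> name
    ptr_name, _, _ = socket.gethostbyaddr(ip)

    status = "OK" if ptr_name.rstrip(".").lower() == name.lower() else "MISMATCH"
    print(f"{name} -> {ip} -> {ptr_name} [{status}]")
```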

Create and Modify Azure ExpressRoute Circuit 

  1. Create the ExpressRoute circuit following these steps:
    Create ExpressRoute Circuit
  2. Send the service key and peering location to the DISA BCAP team
    Sending Service Key
  3. Periodically check the status and state of the circuit. When the Circuit status changes to Enabled and the Provider status changes to Provisioned, the connection is established (see the sketch after this list).
  4. Provision the Virtual Network 
  5. Change the DNS Servers setting to Custom and provide the following IP addresses so that traffic can pass through the CAP connection (214.16.26.1, 214.71.0.1, 214.27.166.1) 
  6. Link the Virtual Network to the ExpressRoute circuit
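
To make steps 1 through 3 more concrete, here is a rough sketch using the azure-mgmt-network Python SDK to create a circuit, read back the service key, and check the provisioning state. The subscription, resource group, region, provider, and bandwidth values are placeholders, Azure Government may require additional endpoint configuration for the credential and client, and exact property names can vary between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- substitute your own subscription and resources.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-mission-app"
CIRCUIT_NAME = "er-disa-bcap"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Step 1: create the ExpressRoute circuit (provider and location are placeholders).
poller = client.express_route_circuits.begin_create_or_update(
    RESOURCE_GROUP,
    CIRCUIT_NAME,
    {
        "location": "usgovvirginia",
        "sku": {"name": "Premium_MeteredData", "tier": "Premium", "family": "MeteredData"},
        "service_provider_properties": {
            "service_provider_name": "<provider>",
            "peering_location": "<peering-location>",
            "bandwidth_in_mbps": 200,
        },
    },
)
circuit = poller.result()

# Step 2: the service key is what you hand to the DISA BCAP team.
print("Service key:", circuit.service_key)

# Step 3: check status until the provider side shows Provisioned.
circuit = client.express_route_circuits.get(RESOURCE_GROUP, CIRCUIT_NAME)
print("Circuit status:", circuit.circuit_provisioning_state)
print("Provider status:", circuit.service_provider_provisioning_state)
```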

AIS has a team that specializes in navigating the process of getting an ATO and connecting to the DISA CAP. Contact us today and let's start the conversation about getting you connected to the DISA CAP.

AIS is working with a large organization that wants to discover relationships between data and the business by iteratively integrating data from many sources into Azure Data Lake. The data will be analyzed by different groups within the organization to discover new factors that might affect the business. Data will then be published to the appropriate consumers using PowerBI.

In the initial phase, the data lake ingests data from some of the Operational Systems. Eventually, data will be captured not only from all the organization's systems but also from streaming IoT devices.  

Azure Data Lake 

Azure Data Lake allows us to store a vast amount of data of various types and structures. Data can be analyzed and transformed by Data Scientists and Data Engineers. 

The challenge with any data lake system is preventing it from becoming a data swamp. To establish an inventory of what is in the data lake, we capture metadata such as origin, size, and content type during ingestion. We also have the Interface Control Documents (ICDs) from the Operational Systems that describe the data definitions of the source data. 
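
To give a flavor of the kind of inventory we keep, the sketch below captures basic technical metadata for a file as it lands in the lake; the field names, paths, and source-system name are illustrative rather than the project's actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def capture_ingestion_metadata(file_path: str, source_system: str) -> dict:
    """Collect basic technical metadata for a newly landed file."""
    path = Path(file_path)
    content = path.read_bytes()
    return {
        "file_name": path.name,
        "source_system": source_system,           # origin, per the source ICD
        "size_bytes": len(content),
        "content_type": path.suffix.lstrip("."),  # crude content-type hint
        "md5": hashlib.md5(content).hexdigest(),
        "ingested_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: record the metadata alongside the file in a (hypothetical) metadata folder.
meta = capture_ingestion_metadata("landing/orders_20190401.csv", "OrderSystem")
Path("metadata/orders_20190401.json").write_text(json.dumps(meta, indent=2))
```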

Logical Zones

The data in the data lake is segregated into logical zones to provide logical and physical separation and to keep the environment secure, organized, and agile. As the data progresses through the zones, various transformations are performed. 

  • Landing Zone is a place where the original data files are stored untouched. No data is deleted from this zone, and access to this zone is limited.  
  • Raw Zone is a place where data quality validation is applied based on the rules defined in the source ICD. Any data that fails validation moves to the Error Zone. 
  • Curated Zone is a place where we store the cleansed and transformed data, ready for consumption. The transformation is done for different audiences, and within the zone, folders are created for each specialized change.  
  • Error Zone is a place where we store data that failed validation. A notification is sent to the registered data curators when new data arrives.  
  • Metadata Zone is a place where we keep track of metadata for the source and the transformed data.

Metadata Zone Organization

The source systems have security requirements that prevent access to sensitive data. When the folders are created, permissions are granted to security groups in Azure Active Directory. The same security rules are applied to subsequent folders.
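
A minimal sketch of that folder-permission step might look like the following, assuming an ADLS Gen2 account and the azure-storage-file-datalake package; the storage account, file system, folder, and Azure AD group object ID are all placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder names -- substitute your own storage account, zone folder, and AAD group.
ACCOUNT_URL = "https://<datalake-account>.dfs.core.windows.net"
FILE_SYSTEM = "datalake"
FOLDER = "raw/ordersystem"
AAD_GROUP_OBJECT_ID = "<security-group-object-id>"

service = DataLakeServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
file_system = service.get_file_system_client(FILE_SYSTEM)

# Create the zone folder and grant the Azure AD security group read/execute on it;
# the same ACL pattern would be applied on subsequent child folders.
directory = file_system.get_directory_client(FOLDER)
directory.create_directory()
directory.set_access_control(
    acl=f"user::rwx,group::r-x,other::---,group:{AAD_GROUP_OBJECT_ID}:r-x"
)
```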

Now that the data is in the data lake, we allow each consuming group to create its own transformation rules. The transformed data is then moved to the Curated Zone, ready to be loaded into the Azure Data Warehouse.

Azure Data Factory

Azure Data Factory orchestrates the movement and transformation of data, as shown in the diagram below. When a file is dropped in the Landing Zone, an Azure Data Factory pipeline runs that consists of activities to unzip, validate, and transform the data and load it into the Data Warehouse.

The unzipping is performed by a custom-code Azure Function activity rather than the copy activity's decompress functionality. The out-of-the-box functionality of Azure Data Factory can decompress only GZip, Deflate, and BZip2 files, but not Tar, Rar, 7-Zip, or Lzip.
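
To illustrate the custom activity, here is a sketch of an HTTP-triggered Python Azure Function that extracts a Tar archive; the request shape and paths are assumptions, and Rar or 7-Zip support would need additional libraries (for example rarfile or py7zr).

```python
import json
import tarfile

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP-triggered function: extract a .tar archive for the Data Factory pipeline.

    Expects a JSON body such as {"source": "/landing/file.tar", "target": "/raw/file"}.
    The paths here are placeholders; a real function would read from and write to
    the data lake (for example via a mounted share or the storage SDK).
    """
    payload = req.get_json()
    source = payload["source"]
    target = payload["target"]

    with tarfile.open(source) as archive:
        archive.extractall(path=target)  # extract every member into the target folder

    return func.HttpResponse(
        json.dumps({"status": "extracted", "target": target}),
        mimetype="application/json",
    )
```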

The basic validation rules, such as data range, valid values, and reference data, are described in the ICD. A custom Azure Function activity was created to validate the incoming data.
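
The sketch below shows the general shape of such a validation function: a few ICD-style rules (data range, valid values, reference data) applied per record, with failures collected so the file can be routed to the Error Zone. The rules and field names are invented for illustration.

```python
import csv

# Illustrative rules as they might be derived from a source ICD.
RULES = {
    "quantity": lambda v: v.isdigit() and 0 < int(v) <= 10_000,    # data range
    "status":   lambda v: v in {"NEW", "SHIPPED", "CANCELLED"},    # valid values
    "region":   lambda v: v in {"EAST", "WEST", "NORTH", "SOUTH"}, # reference data
}

def validate_file(path: str) -> list[dict]:
    """Return one error record per failed field; an empty list means the file passed."""
    errors = []
    with open(path, newline="") as handle:
        for line_number, row in enumerate(csv.DictReader(handle), start=2):
            for field, rule in RULES.items():
                if not rule(row.get(field, "")):
                    errors.append({"line": line_number, "field": field, "value": row.get(field)})
    return errors

# Files that return errors would be moved to the Error Zone and the curators notified.
```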

Data is transformed using a Spark activity in Azure Data Factory for each consuming user. Each consumer has a folder under the Curated Zone.
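
A simplified version of one such transformation is sketched below as a PySpark job that reads from the Raw Zone and writes to a consumer-specific folder in the Curated Zone; the lake paths, columns, and aggregation are placeholders rather than the project's real logic.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-orders-for-finance").getOrCreate()

# Placeholder lake paths; each consuming group gets its own curated folder.
raw_path = "adl://<datalake>.azuredatalakestore.net/raw/ordersystem/orders"
curated_path = "adl://<datalake>.azuredatalakestore.net/curated/finance/orders_by_region"

orders = spark.read.parquet(raw_path)

# Example transformation for one audience: monthly order totals per region.
summary = (
    orders
    .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
    .groupBy("order_month", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

summary.write.mode("overwrite").parquet(curated_path)
```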

Data Processing Example

Tables in the Azure Data Warehouse are created based on the Curated Zone by executing an Azure Function activity that generates the data definition language (DDL). The script modifies the destination table if a new field is added.
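
A stripped-down illustration of that DDL generation step follows: given the curated schema and the columns already present in the destination table, it emits either a CREATE TABLE statement or ALTER TABLE statements for newly added fields. The table and column names are made up.

```python
def generate_ddl(table: str, curated_schema: dict, existing_columns: set) -> str:
    """Emit CREATE TABLE if the table is new, otherwise ALTER TABLE for added fields."""
    if not existing_columns:
        columns = ",\n  ".join(f"[{name}] {sql_type}" for name, sql_type in curated_schema.items())
        return f"CREATE TABLE {table} (\n  {columns}\n);"

    added = {name: t for name, t in curated_schema.items() if name not in existing_columns}
    return "\n".join(f"ALTER TABLE {table} ADD [{name}] {t};" for name, t in added.items())

# Example with made-up columns: one new field appears in the curated data.
schema = {"order_id": "INT", "region": "NVARCHAR(20)", "total_amount": "DECIMAL(18,2)"}
print(generate_ddl("dbo.OrdersByRegion", schema, existing_columns={"order_id", "region"}))
# -> ALTER TABLE dbo.OrdersByRegion ADD [total_amount] DECIMAL(18,2);
```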

Finally, the data is copied to the destination tables to be used by end-users and warehouse designers.

In each step, we captured business, operational, and technical metadata to help us describe the data in the lake. The metadata can be uploaded to a metadata management system in the future.

When you read about the Internet of Things, you often hear about connected cars, connected kitchen appliances, small devices that let you order things quickly, or other consumer-grade applications. In this post, I will quickly describe a recent IoT project I worked on where the devices are not small consumer-grade sensors…they are large industrial manufacturing machines.

In this case, machines arranged on a shop floor are responsible for cutting metal into various shapes. These machines must be both very powerful and very precise, and they have robotic arms that are programmed to grip specialized tools for this activity. These machines use the MTConnect protocol as the language for communicating their operational status and the results of any action taken. The data is streamed to a collection point for analysis. In older machines, adapters are installed to stream the machine's data using this common language.
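
For readers unfamiliar with MTConnect, agents typically expose a machine's current state as XML over HTTP. The sketch below polls a hypothetical agent's current endpoint and prints a few common data items; the agent URL and the specific data item names are assumptions, not details from this project.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical MTConnect agent endpoint for a machine on the shop floor.
AGENT_CURRENT_URL = "http://<agent-host>:5000/current"

response = requests.get(AGENT_CURRENT_URL, timeout=10)
response.raise_for_status()
root = ET.fromstring(response.content)

# MTConnect responses are namespaced XML; match on the local tag name so the
# sketch stays independent of the schema version.
for element in root.iter():
    tag = element.tag.split("}")[-1]
    if tag in {"Execution", "ControllerMode", "PathFeedrate"}:
        print(tag, element.get("timestamp"), (element.text or "").strip())
```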

Our work on this project helped the engineers identify optimal cut times by analyzing the machine activity data. First, we needed to enhance the collection process so that all data was readily available, then apply the appropriate business rules to identify cut time, and finally provide quick, actionable feedback on the outcome. Read More…

Continuous Integration (CI) and Continuous Delivery (CD) are rapidly becoming an integral part of software development. These disciplines can play a significant role in building stable release processes that help ensure project milestones are met. In addition to simply performing compilation tasks, CI systems can be extended to execute unit testing, functional testing, UI testing, and many other tasks. This walkthrough demonstrates the creation of a simple CI/CD deployment pipeline with an integrated unit test.

There are many ways of implementing CI/CD, but for this blog, I will use Jenkins and GitHub to deploy the simple CI/CD pipeline. A Docker container will be used to host the application. The GitHub repository hosts the application, including a Dockerfile for creating an application node. Jenkins is configured with the GitHub and Docker plugins. Read More…
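
As a placeholder for the kind of integrated unit test such a pipeline might execute on each push (for example via `python -m unittest`), here is a minimal Python test module; the function under test is a stand-in, not part of the walkthrough's actual application.

```python
# test_app.py -- a placeholder unit test the CI stage could run.
import unittest

def add(a: int, b: int) -> int:
    """Stand-in for application logic under test."""
    return a + b

class AddTests(unittest.TestCase):
    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```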