A Level of Certification to Consider

Obtaining a Security+ certification opens many opportunities for individuals pursuing a career in information technology. A good portion of DoD jobs require this level of certification to maintain the secure systems utilized daily. The training ensures that minimum security requirements are understood. The exam is difficult because it covers an extensive range of topics under Information Technology Security.

There are multiple ways to study for the Security+ exam. This article demonstrates one way to follow. It has been a proven method to achieve a passing score the first time taking the exam. When this article was written, the SY0-501 was the current exam offered.

What You Need

Below are suggested materials to guide you towards the exam, with specific examples outlined in the subsequent sections:

  • Books (there are two specific titles mentioned in this blog)
  • Additional subscription study materials (not required, but suggested)
  • Friends to study with (or to keep you awake when you are supposed to be studying)
  • A well-rested mind
  • One month to prepare (suggested if you are new to the material)

Step 1: Read the Fun Manual (RTFM)

First of all, the best way to get a good grasp of the material is to take a class offered by qualified and licensed entities. These courses usually last five days and follow a book, or multiple books, as education material. Attendance is either via an online course taken at your leisure or in person at a facility. If you can have a company host a professional to teach an on-site/virtual class, that is the best way to get involved in a course. You can purchase two useful books at many major retailers that sell course material for CompTIA:

  • CompTIA Security+ Get Certified Get Ahead: SY0-501 Study Guide by Darril Gibson
  • CompTIA Security+ SY0-501 Cert Guide by David L. Prowse.

Reading these books is recommended, regardless of whether you take the course. Allow plenty of time to get through both of them. Both cover topics at length on different subjects and give you an excellent grasp of all the exam material. Ensure that the books you purchase are for the current exam being offered; otherwise, you may miss information that appears on the exam. If time is of the essence, read the book by Darril Gibson first. Then use the David Prowse book to skim through the sections that expand a bit more on topics not covered in the first book.

Step 2: Online Videos (they are free!)

An excellent online resource is Professor Messer’s CompTIA SY0-501 Security+ Course. The videos are a completely free way to cram a lot of information quickly after reading the books. Doing so in this order is recommended, though you can reverse it if you like. Listening to the videos as you drive or go about your day after reading the material makes it easier to retain the information. Other study materials are also offered for sale to help you pass the exam. The videos also cover some information that does not appear in the books recommended above.


Step 3: Get Certified and Get Ahead Study Material

If you can afford to do so, purchase the full study guide at GCGAPremiumPass. It is a great package to use after completing the books and videos, and it follows Darril Gibson’s book recommended above. A bundle containing both the book and the study guide is offered to save some money; this is the recommended way to get both if you have not taken a course that includes the book. The full study guide includes:

  • Multiple-choice Security+ practice test questions
  • Performance-based questions
  • Audio from the Study Guide
  • Online flashcards

The audio “Remember This” material is one of the best tools for retaining the information in this book’s chapters. Reading a chapter and then listening to the accompanying audio file will help immensely. If there is something in the audio file that you do not understand, go back and read the section in question, then listen to the “Remember This” audio file again. Each of these files is ten minutes or less; listening while you are driving or folding laundry will help you retain what you have read from each chapter, and these short clips keep the information fresh in your mind. Using the flashcards in the same manner will also help you remember specific details like ports and acronyms useful for the test. Acronyms are the most significant thing to commit to memory: the exam will not spell them out for you, and if you do not know all of them, you will spend a lot of your time trying to figure out their context in questions.

The practice questions are a great way to prepare for the exam, with one caveat: do not just memorize where the answer sits in the list. The order of the answers changes between chapter exams and full exams. Take the chapter exams in order, then take the complete exams. This prevents your brain from tricking you into choosing an answer’s letter rather than identifying the correct answer by knowledge. Please note that no matter how many times you take these practice exams, the real exam will not contain the same questions.

Step 4: Test Day

Prepare yourself by taking some time off before the exam to let the information sink in. Cramming right up until test time will only confuse you. A fresh mind and a calm attitude will go a long way. The exam is timed, and the clock is visible the whole way through; keep an eye on it without fixating on it. The exam includes scenario-based questions, which take more setup time to answer; building networks or configuring access points are common tasks. The rest of the questions are multiple choice. The best plan of action is to answer all the questions you know as quickly as possible, leaving time to go back through the questions you are not sure of. If you have studied enough, your first answer will usually be the correct one; spending too much time on a question leads to second-guessing, and you may settle on the wrong answer. There is an option to flag questions you are unsure of so that you can return to them later. The best advice here is to answer every question, even the ones you are unsure of, and flag them as you move on. This way, if you run out of time, the question is answered. It may be wrong, but that is better than leaving it empty.

After the exam clock runs out, you will be presented with a survey. You will not see your score before the survey is complete, so do not worry that some technical glitch is happening. If you passed the test, a certificate will be mailed to you, which you can then present to your organization. If not, you will be able to retake the exam. It is suggested that you give yourself more study time and focus on the areas where the score summary shows your knowledge is weakest.

Conclusion

The suggestions stated here are just that: suggestions that have worked for some people. Some people require less time to prepare and study, and some require a lot. If you put enough work into preparing for the exam and keep a positive attitude, you will do great. Don’t worry if you do not pass the first time; the exam is challenging to prepare for in a limited amount of time. No matter how much preparation you have put in, there will still be questions that you feel you have not covered. The exam is tailored that way to collect statistics and catch cheaters. Studying with a group of people is one of the best ways to prepare, and instructors can be hired to teach you the exam’s ins and outs and the history of the questions presented. Good luck!

When building and managing an Azure environment, maintaining control of network traffic is a core operations responsibility. The primary Azure platform resource for implementing network traffic control is the Network Security Group (NSG). A Network Security Group allows you to define security rules, like firewall rules, that control traffic by specifying allowed and denied sources, destinations, ports, and protocols. Like all Azure resources, there are multiple options for managing NSGs, including the standard Azure management tools: the Azure Portal, scripts (PowerShell and CLI), APIs, and Azure Resource Manager (ARM) templates.

Managing NSG security rules using an ARM template can be challenging. Each security rule is defined using a large chunk of JSON, and many security rules may be required. The verbose JSON structure makes it difficult to see many rules at once and to visualize changes from version to version, encouraging team members to revert to the Azure Portal to view and edit rules. Why use the Azure Portal? It turns out the portal’s grid format for NSG security rules is comfortable for quickly viewing multiple rules and for making minor edits to individual rules.

Since the portal’s grid view was comfortable, the CSV file format seemed like the right idea based on its similarity to a grid. CSV files have a few pros:

  • Good viewers and editors are available, including Excel and VS Code.
  • Each security rule occupies one vertically compact line.
  • The vertically compact view makes it easier to visually scan rules and to see the changes made from version to version when viewing differences.
  • Anyone who can edit a CSV can edit the NSG security rules, allowing a larger group of security rule editors.

NSG in JSON format

This is a simple example of the NSG Security Rule JSON. A rule like this can get much larger vertically when numerous ports and address prefixes are defined:

{
  "name": "I-L-All_HttpHttps-UI_Layer-T",
  "description": "Allow HTTP + HTTPS traffic inbound.",
  "priority": 1110,
  "access": "Allow",
  "direction": "Inbound",
  "protocol": "Tcp",
  "sourceAddressPrefix": "",
  "sourceAddressPrefixes": [
    "AzureLoadBalancer"
  ],
  "sourceApplicationSecurityGroups": null,
  "sourcePortRange": "*",
  "sourcePortRanges": null,
  "destinationAddressPrefix": "*",
  "destinationAddressPrefixes": null,
  "destinationApplicationSecurityGroups": null,
  "destinationPortRange": "",
  "destinationPortRanges": [
    "80",
    "443"
  ]
}

NSG in CSV Format

(Example NSG security rules CSV, shown in Excel and as a raw CSV file)

Converting Between CSV and JSON

The transition from CSV to JSON and from JSON back to CSV must be repeatable and simple. In this scenario, PowerShell scripts manage this process: Convert-NsgCsvToJson.ps1 and Convert-NsgJsonToCsv.ps1.

The Convert-NsgCsvToJson.ps1 script is straightforward and does the following:

  1. Read the source CSV file.
  2. Read the destination JSON file.
  3. Split multi-value fields into arrays based on the parameter CsvArraySeparator; the default is the pipe character ‘|’. This allows fields like source and destination port ranges to hold multiple values in a single CSV field.
  4. Structure the CSV data into objects that match the ARM template NSG security rule JSON structure.
  5. Use a JsonFileType parameter to determine where in the destination JSON structure to place the security rules array. This allows placement of the security rules array into a parameter file, template file, or into an empty JSON file.
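The steps above can be sketched as follows. This is a minimal Python sketch of the transformation (the real Convert-NsgCsvToJson.ps1 is PowerShell); the column names and array fields are illustrative assumptions:

```python
import csv

# Columns assumed to hold multiple values joined by the separator (illustrative).
ARRAY_FIELDS = {"sourceAddressPrefixes", "destinationPortRanges"}

def csv_rules_to_json(csv_path, csv_array_separator="|"):
    """Read NSG security rules from CSV and structure them as ARM-style objects."""
    rules = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            rule = dict(row)
            # Split multi-value fields (e.g. "80|443") into arrays.
            for field in ARRAY_FIELDS & rule.keys():
                rule[field] = [v for v in rule[field].split(csv_array_separator) if v]
            rule["priority"] = int(rule["priority"])
            rules.append(rule)
    return rules
```

A fuller version would also place the resulting array into the parameter, template, or empty JSON file, as the JsonFileType parameter controls in the real script.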

A New Workflow

With PowerShell scripts, the new workflow for NSGs is:

  1. Create and edit NSG Security Rules in a CSV file – usually using Excel.
  2. Visually scan the CSV looking for obvious anomalies (Excel makes it easy to see when one rule stands out from the others, for example, when a value is in the wrong column).
  3. Execute the Convert-NsgCsvToJson.ps1 script to convert the rules to the JSON structure and update the destination JSON file.
  4. Deploy the ARM Template and updated parameters file to a dev/test environment using standard deployment approaches, such as the Azure CLI. This fully validates the NSG JSON prior to production deployment.
  5. Deploy to Production during a planned change window.

From JSON Back to CSV

At times, a team member may make a change in the portal, for example, during troubleshooting. Once an update is made in the portal, the change must be transferred back to the code that defines this infrastructure. The CSV files are the canonical source, so there needs to be a process to return from JSON to CSV.

  1. To retrieve the NSG security rules from the live environment, execute a CLI command to list them and export them to a JSON file.
    az network nsg rule list --nsg-name subnet-01-nsg --resource-group net-rgp-01 | set-content subnet-01-export.json
  2. Execute the Convert-NsgJsonToCsv.ps1 script using the generated file as the input and the corresponding CSV file as the output.
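The reverse direction can be sketched in the same way. Again this is an illustrative Python sketch rather than the PowerShell of the real Convert-NsgJsonToCsv.ps1: flatten each exported rule into one CSV row, joining arrays back together with the pipe separator.

```python
import csv
import json

def json_rules_to_csv(json_path, csv_path, csv_array_separator="|"):
    """Flatten exported NSG security rules into one CSV row per rule."""
    with open(json_path) as f:
        rules = json.load(f)
    # Collect every field name seen across the rules for the CSV header.
    fieldnames = sorted({key for rule in rules for key in rule})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for rule in rules:
            row = {}
            for key, value in rule.items():
                if isinstance(value, list):
                    # Arrays such as port ranges become "80|443".
                    row[key] = csv_array_separator.join(str(v) for v in value)
                else:
                    row[key] = value
            writer.writerow(row)
```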

Constraints

The environment these scripts were built for may not match your own. This environment includes several constraints:

  • Azure Resource Manager Templates are the language for Azure Infrastructure as Code.
  • Manual steps are required: automated build and release pipelines are not yet available.
  • There is no guarantee that NSG security rules will not be modified in the Azure Portal, so a mechanism is required to synchronize the code with the environment.

Future Improvements

This solution represented a significant improvement for this team over managing NSG security rules directly in the JSON format. As with every solution, there are ideas for improvement. Here are a few that have come to mind:

  • Use CI/CD tools such as GitHub Actions to automatically execute the Convert-NsgCsvToJson.ps1 script when an NSG CSV file is committed.
  • Implement a release pipeline so that modified NSG CSV files trigger the conversion script, wait for approval to deploy, and deploy the ARM Template to the dev/test environment.
  • Add Pester tests to the PowerShell scripts.
  • Try this approach with other IaC languages such as Terraform.

Additional Notes

  • The example template has been dramatically simplified.
    • The production template also configures NSG Diagnostic Settings and NSG Flow Logs.
    • The production template builds all resource names based on several segments defined in a naming convention.
  • There are NSG Security Rules that are considered baseline rules that should be applied to every NSG. These rules are managed in a CSV file and placed in an array in the base template and not repeated in each parameter file. An example of this is a rule that allows all servers to contact the organization’s DNS servers.
  • Application Security Groups are used to group servers in the local VNET so that NSG security rules do not need to include IP addresses for servers contained in the VNET. The only IP address prefixes specified directly in our rules come from outside the current VNET. As with the NSGs, ASGs are defined in the template (baseline) and parameters file (local), combined and created during template deployment. Only the unique portion of the name is used to define a group and to specify rules; the remainder of the name is built during deployment. ASGs in Azure are currently only valid for the VNET where they are created, and only one ASG may be specified per security rule. This script creates all the ASGs defined in the template and parameters file.

Code

The code for these scripts including the conversion scripts and a sample ARM Template, ARM Template Parameters files, and matching NSG Security Rule CSV files is available on GitHub: https://github.com/matthew-dupre/azure-arm-nsg-via-csv


Introduction

PowerShell DSC is possibly one of the most potent configuration managers for Windows operating systems. Even with the increase in popularity of Ansible and SaltStack, PowerShell DSC remains supreme when enforcing desired state on a Windows VM. Ansible itself includes the win_dsc module, which allows Ansible to run PowerShell DSC. In this blog post, we will dive deeper into one of PowerShell DSC’s most powerful aspects, 3rd Party Resources, and how they interact with Azure Automation.

3rd Party Resources are PowerShell DSC modules created by the community. Any PowerShell community member can create modules, and there are tons of modules to choose from. Modules are kept in repositories, the best-known and default repository being the PowerShell Gallery, run by Microsoft. This is a common repository for PowerShell modules deployed by the community, and modules in the PSGallery can be downloaded and installed with the PowerShellGet module.

As developers and infrastructure engineers, there are many different reasons to script various services you are creating. Often, instead of developing behavior or scripts from scratch, it is much easier to leverage the work that others have done to expedite a task’s completion. 3rd Party Modules allow for easily repeatable code that can become production-ready through collaboration.

Often, DSC configuration can become complicated. Engineers can be asked to do many things, from creating an Azure AD domain, to configuring OMS solutions associated with a VM, to interacting with non-native Azure products, such as Splunk.

These may all seem very daunting, but don’t fret! Members of the PowerShell community have dealt with these problems and many others, and often you will find third party modules to help do the work for you.

Here is an example of a third-party resource, named ActiveDirectoryDsc, which helps with the promotion, configuration, and management of Active Directory.

Azure Automation is a robust PaaS offering from Azure that allows for a cloud-based DSC pull server. Within Azure Automation, it is possible to add both custom modules that the user develops and third-party modules available in any hosted source.
⚠ Note that organizations in locked-down environments can manage their own repository of PowerShell modules that have been vetted by the respective InfoSec team. It is possible to deploy your own artifact repository using the Azure DevOps product shown here; it allows an internal team to publish its own versions of packages, which you can then use as your URI references.
⚠ There are a few ways to upload modules to Azure Automation natively. You can upload manually through the portal, as shown in this picture:

Uploading modules to the Azure Portal

However, being DevOps Engineers, we want to automate this process as much as possible. One way to do this is via ARM Templates, like the ones we used in the previous module.
Below is an example of how to add a 3rd party module to your Azure Automation Account via ARM Templates:

{
  "name": "[concat(parameters('automationAccountName'), '/', parameters('name'))]",
  "type": "Microsoft.Automation/automationAccounts/modules",
  "apiVersion": "2015-10-31",
  "location": "[resourceGroup().location]",
  "properties": {
    "isGlobal": false,
    "sizeInBytes": 0,
    "contentLink": {
      "uri": "uri to package"
    }
  }
}

If you are deploying from the PowerShellGallery, your Uri would look something like this:

"uri": "[concat('https://www.powershellgallery.com/api/v2/package/', parameters('name'), '/', parameters('version'))]"

Alternatively, you can script the import of modules using the New-AzAutomationModule cmdlet in a PowerShell script.

Oddly enough, there is sometimes some difficulty determining the correct ContentUri to use in both the ARM and PowerShell cases. You can find the correct one by navigating to the right module in the PowerShell Gallery, adding /api/v2 to the URL, and replacing packages (plural) with package (singular).

Add the /api/v2 to a URL
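Since the URL rewrite described above is mechanical, a small helper can build the ContentUri directly from a module name and version. This sketch mirrors the concat expression shown earlier; the module version in the usage note is a hypothetical example:

```python
def gallery_content_uri(name, version):
    """Build the PowerShell Gallery ContentLink URI for a module version.

    Note the path is /api/v2/package/ ('package', singular), not the
    /packages/ (plural) path used by the gallery's browse pages.
    """
    return f"https://www.powershellgallery.com/api/v2/package/{name}/{version}"
```

For example, `gallery_content_uri("ActiveDirectoryDsc", "6.0.1")` yields `https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/6.0.1`.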

Conclusion

3rd Party Modules are a great way for developers to speed up development and productivity. If you are inclined to help in the development of these modules, head over to GitHub and contribute!

Azure Kubernetes Service (AKS) is a Microsoft Azure-hosted offering that eases deploying and managing your Kubernetes clusters. There is much to be said about AKS and its abilities, but here I will discuss another crucial aspect of AKS and containers: security. Having a secure Kubernetes infrastructure is a must, and it can be challenging to know where to start. I’ll break down best practices, including baseline security for clusters and pods, and network hardening practices that you can apply to your own AKS environment to lay the foundation for a more secure container environment, including how to stay current with updates.

Cluster and Pod Security

Let’s first look at some best practices for securing your cluster and pods using policies and initiatives. To get started, Azure has pre-defined, AKS-specific policies. These policies help improve the posture of your cluster and pods, and they allow additional control over things such as root privileges. A best practice Microsoft recommends is limiting the actions that containers can perform and avoiding root/privileged escalation. When the Azure Policy Add-on for AKS is enabled, it installs a managed instance of Gatekeeper. This instance handles enforcement and validation through a controller that inspects each request when a resource is created or updated and validates it against your policies. Features such as these are ever-growing and can make creating a baseline easier. Azure Policy also includes a feature called initiatives: collections of policies that align with organizational compliance goals. Currently, there are two built-in AKS initiatives, baseline and restricted. Both come with many policies that lock down items such as the host filesystem, networking, and ports. By combining initiatives and policies, you can tighten security and meet compliance goals in a more managed fashion.

Another way to secure your cluster is to protect access to the Kubernetes API server. This is accomplished by integrating RBAC with AD or other identity providers. This feature allows for granular access, similar to how you control access to your Azure resources. The Kubernetes API is the single connection point for performing actions on a cluster. For this reason, it’s imperative to deploy logging/auditing and to enforce least-privileged access. The below diagram depicts this process:

Cluster and Pod Security

Reference:https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-cluster-security#secure-access-to-the-api-server-and-cluster-nodes

Network Security

Next, let’s look at network security and how it pertains to securing your environment. A first step is to apply network policies. Much like above, Azure has many built-in policies that assist with network hardening, such as a policy that only allows specific network traffic from authorized networks based on IP addresses or namespaces. It’s important to note that the network policy choice can only be made when the cluster is first created. You also have the option of ingress controllers bound to internal IP addresses, which ensures they can only be accessed from that internal network. These small steps narrow the attack surface of your cluster and tighten traffic flows. The below diagram demonstrates using a Web Application Firewall (WAF) and an egress firewall to manage defined routing in and out of your AKS environment. Even more granular control is possible using network security groups, which allow only specific ports and protocols based on source/destination. By default, AKS creates subnet-level NSGs for your cluster, and as you add services such as load balancers, port mappings, and ingress routes, it automatically modifies the NSGs. This ensures the correct traffic flow and makes change easier to manage. Overall, these effortless features and policies allow for a secure network posture.

Network Security Graphic

Reference: Microsoft Documentation

The Final Piece

The final piece of securing your AKS environment is staying current on new AKS features and bug fixes, specifically by upgrading the Kubernetes version in your cluster. These upgrades can include security fixes, which are paramount for staying ahead of vulnerabilities that could leave you exposed. I won’t go too deep into best practices for Linux node updates or managing reboots; this link dives deeper into what Kured is and how it can be leveraged to process updates safely. There are many ways to foundationally secure your AKS clusters. I hope this article helps future implementations and the maintainability of your deployment.

If you’re looking for an intelligent cloud-native Security Information and Event Management (SIEM) solution that manages all incidents in one place, Azure Sentinel may be a good fit for you.

Not only does Azure Sentinel provide intelligent security analytics and threat intelligence, but it’s also considered a Security Orchestration and Automation Response (SOAR) solution, meaning it collects data about security threats and lets you automate responses to lower-level security events without the traditionally manual effort required. You can extend this solution across data sources by integrating Azure Sentinel with enterprise tools like ServiceNow. There are also services offered at no additional cost, such as User Behavior Analysis (UBA), petabyte-scale daily ingestion, and Office 365 data ingestion, that make Azure Sentinel even more valuable.


First Impression

After opening Azure Sentinel from the Azure portal, you will be presented with the below items:

Azure sentinel first view

Conceptually, Azure Sentinel has four core areas.

Azure Sentinel Four Core Areas

  • Collect – Using connectors from multiple vendors and operating systems, Azure Sentinel collects security events and data and keeps them for 31 days by default. This is extendable up to 730 days.
  • Detect – Azure Sentinel has suggested queries; you can also find samples or build your own. Another option is Azure Notebooks, which is more interactive and lets you apply your own data science analysis.
  • Investigate – Triage using the same detection methodology in conjunction with event investigation. A case is then created for the incident.
  • Respond – Finally, responses can be manual or automated with the help of Azure Sentinel playbooks. You can also use graphs, dashboards, or workbooks for presentation.

For a better understanding, the behind-the-scenes flow in this example is helpful.

Steps in Azure Sentinel

How do I enable Azure Sentinel?

If you already have an Azure Log Analytics workspace, you are one click away from Azure Sentinel. You need contributor RBAC permission on the subscription containing the Azure Log Analytics workspace to which Azure Sentinel will bind itself.

Azure Sentinel has some prebuilt dashboards, and you are able to share them with your team members.

You can also enable the integration of security data from Security Center > Threat Detection > Enable integration with other Microsoft security services.

Azure Sentinel has a variety of built-in connectors that collect data and process it with its AI-empowered processing engine. Azure Sentinel can relate your events to well-known or unknown anomalies (with the help of ML)!

Below is a sample connection, which offers two out-of-the-box dashboards:

sample connection in Azure Sentinel

All connections have a fair amount of instructions, which usually allows for a fast integration. A sample of an AWS connector can be found here.

Azure Sentinel has thirty out-of-the-box dashboards that make it easy to create an eloquent dashboard; however, built-in dashboards only work if you have configured the related connection.

Built-In Ready to Use Dashboards:

  • AWS Network Activities
  • AWS User Activities
  • Azure Activity
  • Azure AD Audit logs
  • Azure AD Sign-in logs
  • Azure Firewall
  • Azure Information Protection
  • Azure Network Watcher
  • Check Point Software Technologies
  • Cisco
  • CyberArk Privileged Access Security
  • DNS
  • Exchange Online
  • F5 BIG-IP ASM F5
  • FortiGate
  • Identity & Access
  • Insecure Protocols
  • Juniper
  • Linux machines
  • Microsoft Web Application Firewall (WAF)
  • Office 365
  • Palo Alto Networks
  • Palo Alto Networks Threat
  • SharePoint & OneDrive
  • Symantec File Threats
  • Symantec Security
  • Symantec Threats
  • Symantec URL Threats
  • Threat Intelligence
  • VM insights

A Sample Dashboard:

One of the most useful IaaS monitoring services Azure provides is VMInsights, or Azure Monitor for VMs, and Azure Sentinel has a prebuilt VMInsights dashboard. Connect your VM to your Azure Log Analytics workspace, then enable VMInsights from VM > Monitoring > Insights. Make sure the Azure Log Analytics workspace is the same one that has Azure Sentinel enabled on it.

Sample Dashboard VMInsights or Azure Monitor for VMs

Creating an alert is important: alerts are the first step toward having a case, or ‘incident’. After a case is created based on the alert, you can begin your investigation. To create an alert, you use the KQL language that you have probably already used in Azure Log Analytics.

Azure Sentinel has a feature named entity mapping, which lets you relate the query to values like IP address and hostname. These values make the investigation much more meaningful: instead of going back and forth between multiple queries to correlate results, you can use entities to make your life easier. At the time of writing, Azure Sentinel has four entities, Account, Host, IP address, and Timestamp, which you can bind to your query. You can easily enable, disable, or manually run an alert from Configuration > Analytics. The naming might be a little confusing, since you also create your alerts from Analytics.

The Azure Sentinel investigation map of entities became public in September 2019, and you no longer need to fill out a form to request access.

Let’s Go Hunting

You can use Azure Sentinel’s built-in hunting queries. You can also hunt directly, if you know where to find the anomalies, by writing KQL queries and creating an alert, or use Azure Notebooks for AI/ML-based hunting. You can bring your own ML model to Azure Sentinel; the Azure Sentinel notebook is intended for tier 4 SOC analysis.

Azure Sentinel built-in hunting query

Azure Sentinel uses MITRE ATT&CK-based queries and introduces eight types of queries, also known as bookmarks, for hunting.

After you become skilled in detection, you can start creating playbooks built on Logic App workflows. You can also build automated responses to threats or craft custom actions for after an incident has happened. Later, you can enable Azure Sentinel Fusion to associate lower-fidelity anomalous activities with high-fidelity cases.

Azure Sentinel Detection Playbook

A sample playbook:

Azure Sentinel Sample Playbook

Image Source: Microsoft

Azure Notebooks is a Jupyter notebook (interactive computational tool) service for facilitating your investigation using your data science skills. Azure Notebooks supports languages and packages from Python 2 and 3, and you can also use R and F#.

We all love community-backed solutions. You can share your findings and designs with others and use their insights by using the Azure Sentinel Community on GitHub.

Azure Sentinel Fusion

Fusion helps reduce noise by preventing alert fatigue. Azure Sentinel Fusion uses this insight, and enabling it is covered below.

Traditionally, we assume that an attacker follows a static kill chain as the attack path, or that all information about an attack is present in the logs. Fusion helps here by bringing a probabilistic kill chain and finding novel attacks. Formerly, you had to run a PowerShell command to enable Fusion, but going forward, Fusion is enabled by default.

What Data Sources Are Supported?

Azure Sentinel has three types of connectors. First, Microsoft services are connected natively and can be configured in a few clicks. Second, you can connect to external solutions via an API. Finally, you can connect to external solutions via an agent. These connectors are not limited to the examples below; IoT devices and Azure DevOps, for instance, can also communicate with Azure Sentinel.

  • Microsoft services (connected natively): Office 365, Azure AD audit logs and sign-ins, Azure Activity, Azure AD Identity Protection, Azure Security Center, Azure Information Protection, Azure Advanced Threat Protection, Cloud App Security, Windows security events, Windows firewall, DNS, Microsoft web application firewall (WAF)
  • External solutions via API: Barracuda, Symantec, Amazon Web Services
  • External solutions via an agent: F5, Check Point, Cisco ASA, Fortinet, Palo Alto, Common Event Format (CEF) appliances, other Syslog appliances, DLP solutions, threat intelligence providers, DNS machines, Linux servers, other clouds

Where Does Azure Sentinel Sit in the Azure Security Picture?

Azure Sentinel in the Azure Security Big Picture

Azure Sentinel can be used before an attack (for example, spotting Azure Active Directory sign-ins from new locations), during an attack (such as malware on a machine), or post-attack, to investigate an incident and perform triage. Azure Sentinel has a service graph that can show you the events related to an incident.

If you hold a security title or are part of a SOC team, and you prefer a cloud-native solution, Azure Sentinel is a good option.

Security Providers or Why Azure Sentinel?

Azure Sentinel uses the Microsoft Intelligent Security Graph, which is backed by the Microsoft Intelligent Security Association. This association consists of almost 60 companies that work hand in hand to find vulnerabilities more efficiently.

Microsoft brings its findings from 3,500+ security professionals, 18B+ Bing page scans per month, 470B emails analyzed per month, 1B+ Azure accounts, 1.2B devices updated each month, 630B authentications per month, and 5B threats blocked per month.

Microsoft Intelligent Security Graph Overview

Image Source: Microsoft

Microsoft has more solutions that create a valuable experience for its Microsoft Graph Security API: the Windows antimalware platform, Windows Defender ATP, Azure Active Directory, Azure Information Protection, DMARC reporting for Office 365, Microsoft Cloud App Security, and Microsoft Intune.

Microsoft Intelligent Security Association (MISA)

Microsoft creates vast threat intelligence solutions. Microsoft collaborated with other companies to create a product under the name of the Microsoft Intelligent Security Graph API, and calls the group behind it the Microsoft Intelligent Security Association (MISA): an association of almost 60 companies that share their security insights from trillions of signals.

  • Microsoft products: Azure Active Directory, Azure Information Protection, Windows Defender ATP, Microsoft Intune, Microsoft Graph Security API, Microsoft Cloud App Security, DMARC reporting for Office 365, Windows antimalware platform, Microsoft Azure Sentinel
  • Identity and access management: Axonius, CyberArk, Duo, Entrust Datacard, Feitian, Omada, Ping Identity, Saviynt, Swimlane, Symantec, Trusona, Yubico, Zscaler
  • Information protection: Adobe, Better Mobile, Box, Citrix, Checkpoint, Digital Guardian, Entrust Datacard, EverTrust, Forcepoint, GlobalSign, Imperva, Informatica, Ionic Security, Lookout, Palo Alto Networks, Pradeo, Sectigo, Sophos, Symantec, Wandera, Zimperium, Zscaler
  • Threat protection: AttackIQ, Agari, Anomali, Asavie, Bay Dynamics, Better Mobile, Bitdefender, Citrix, Contrast Security, Corrata, Cymulate, DF Labs, dmarcian, Duo Security, FireEye, Illumio, Lookout, Minerva Labs, Morphisec, Palo Alto Networks, Red Canary, ThreatConnect, SafeBreach, SentinelOne, Swimlane, ValiMail, Wandera, Ziften
  • Security management: Aujas, Barracuda, Carbon Black, Checkpoint, Fortinet, F5, Imperva, Symantec, Verodin

MISA and Security Graph API

MISA is a combined security effort that continuously monitors cyberthreats and fortifies itself against them. This enriched knowledge is accessible through the Microsoft Intelligent Security Graph API. Azure Sentinel Fusion is the engine that uses graph-powered machine learning algorithms to associate activities with patterns of anomalies.

Microsoft Intelligent Security Association (MISA) and Security Graph API

Below you can see the Azure Sentinel Big Picture:

Azure Sentinel Big Picture

I hope you found this blog helpful! As you can see, Azure Sentinel is just the tip of the Microsoft Security 'iceberg'.

Azure Sentinel Microsoft Security Iceberg


Accurately identifying and authenticating users is an essential requirement for any modern application. As modern applications continue to migrate beyond the physical boundaries of the data center and into the cloud, balancing the ability to leverage trusted identity stores with the need for enhanced flexibility to support this migration can be tricky. Additionally, evolving requirements like allowing multiple partners, authenticating across devices, or supporting new identity sources push application teams to embrace modern authentication protocols.

Microsoft states that federated identity is the ability to “Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.”

As organizations expand their user base to allow authentication of multiple users/partners/collaborators in their systems, the need for federated identity is imperative.

The Benefits of Federated Authentication

Federated authentication allows organizations to reliably outsource their authentication mechanism. It helps them focus on actually providing their service instead of spending time and effort on authentication infrastructure. An organization or service that provides authentication to its sub-systems is called an Identity Provider, and it provides federated identity authentication to the service provider/relying party. By using a common identity provider, relying applications can easily access other applications and websites using single sign-on (SSO).

SSO gives users quick access to multiple websites without needing to manage individual passwords. Relying party applications communicate with a service provider, which then communicates with the identity provider to get user claims (claims authentication).

For example, an application registered in Azure Active Directory (AAD) relies on it as the identity provider. Users accessing an application registered in AAD are prompted for their credentials, and upon authentication by AAD, access tokens are sent to the application. The valid claims token authenticates the user, and the application performs any further authorization. The application doesn't need additional mechanisms for authentication, thanks to the federated authentication from AAD. The authentication process can be combined with multi-factor authentication as well.
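As a rough illustration of what "the application performs further authorization from claims" means, here is a minimal Python sketch. The issuer URL, role names, and claim shape are hypothetical, and real code must validate the token's signature, audience, and expiry before trusting any claim:

```python
TRUSTED_ISSUER = "https://login.example.com/tenant-id/"  # placeholder, not a real AAD issuer

def authorize(claims, required_role):
    """Authorize a user from an already-validated claims dictionary.
    A real application would first verify the token's signature,
    issuer, audience, and expiry before inspecting claims."""
    if claims.get("iss") != TRUSTED_ISSUER:
        return False
    return required_role in claims.get("roles", [])

token_claims = {"iss": TRUSTED_ISSUER, "name": "jdoe", "roles": ["Reader"]}
print(authorize(token_claims, "Reader"))  # True
print(authorize(token_claims, "Admin"))   # False
```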

Glossary

Abbreviation Description
STS Security Token Service
IdP Identity Provider
SP Service Provider
POC Proof of Concept
SAML Security Assertion Markup Language
RP Relying party (same as service provider) that calls the Identity Provider to get tokens
AAD Azure Active Directory
ADDS Active Directory Domain Services
ADFS Active Directory Federation Services
OWIN Open Web Interface for .NET
SSO Single sign-on
MFA Multi-factor authentication

OpenId Connect/OAuth 2.0 & SAML

SAML and OpenID/OAuth are the two main protocol families that modern applications implement and consume as a service to authenticate their users. Both provide a framework for implementing SSO/federated authentication. OpenID is an open standard for authentication and combines with OAuth for authorization. SAML is also an open standard and provides both authentication and authorization. OpenID is JSON-based; OAuth2 can use either JSON or SAML2, whereas SAML is XML-based. OpenID/OAuth are best suited for consumer applications like mobile apps, while SAML is preferred for enterprise-wide SSO implementations.
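The format difference is easy to see side by side. The following Python snippet extracts the same email claim from a hand-written OpenID Connect-style JSON payload and from a simplified SAML-style XML assertion; both are abbreviated examples, not complete protocol messages:

```python
import json
import xml.etree.ElementTree as ET

# The same "email" claim as it might appear in an OIDC ID token payload (JSON)
# and in a SAML assertion (XML). Both snippets are simplified illustrations.
oidc_payload = '{"sub": "1234", "email": "user@example.com"}'

saml_assertion = """
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <AttributeStatement>
    <Attribute Name="email">
      <AttributeValue>user@example.com</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>"""

email_from_oidc = json.loads(oidc_payload)["email"]

ns = {"s": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(saml_assertion)
email_from_saml = root.find(
    ".//s:Attribute[@Name='email']/s:AttributeValue", ns).text

print(email_from_oidc == email_from_saml)  # True
```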

Microsoft Azure Cloud Identity Providers

The Microsoft Azure cloud provides numerous authentication methods for cloud-hosted and "hybrid" on-premises applications, including options for either OpenID/OAuth or SAML authentication. Some of the identity solutions are Azure Active Directory (AAD), Azure B2C, Azure B2B, Azure pass-through authentication, Active Directory Federation Services (ADFS), migrating on-premises ADFS applications to Azure, and Azure AD Connect with federation and SAML as the IdP.

Identity providers that implement the SAML 2.0 standard include Azure Active Directory (AAD), Okta, OneLogin, PingOne, and Shibboleth.

A Deep Dive Implementation

This blog post walks through an example I recently worked on using federated authentication with the SAML protocol. I was able to dive deep into identity and authentication with an assigned proof of concept (POC): creating a claims-aware ASP.NET application, hosted as an Azure Web Application, using federated authentication and the SAML protocol. I used OWIN middleware to connect to the Identity Provider.

The scope of the POC was not to develop an Identity Provider/STS (Security Token Service), but to develop a Service Provider/Relying Party (RP) that sends a SAML request and receives SAML tokens/assertions. The SAML tokens are used by the calling application to authorize the user into the application.

Given the scope, I used a stub Identity Provider, so that the authentication implementation could later be plugged into a production application and communicate with other enterprise SAML Identity Providers.

The Approach

For an application to be claims-aware, it needs to obtain a claims token from an Identity Provider. The claims contained in the token are then used for additional authorization in the application. Claims tokens are issued by an Identity Provider after authenticating the user. The login page for the application (where the user signs in) can be a Service Provider (Relying Party) itself, or just an ASP.NET UI application that communicates with the Service Provider via a separate implementation.

Figure 1: Overall architecture – Identity Provider Implementation

The Implementation

An ASP.NET MVC application was implemented as the SAML Service Provider, with OWIN middleware used to initiate the connection with the SAML Identity Provider.

First, communication is initiated with a SAML request from the service provider. The identity provider validates the SAML request, verifies and authenticates the user, and sends back the SAML tokens/assertions. The claims returned to the service provider are then sent back to the client application. Finally, the client application can authorize the user after reviewing the claims returned from the SAML identity provider, based on roles or other more refined permissions.
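For the HTTP-Redirect binding used in this POC, the SAML 2.0 bindings specification defines how the AuthnRequest travels in the URL: it is DEFLATE-compressed, base64-encoded, and URL-encoded into the SAMLRequest query parameter. Here is a minimal Python sketch of that encoding; the XML is a hand-written, simplified request, not one produced by a real SAML library:

```python
import base64
import urllib.parse
import zlib

authn_request = ('<samlp:AuthnRequest '
                 'xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
                 'ID="_abc123" Version="2.0"/>')

# Raw DEFLATE (no zlib header/checksum), then base64, then URL-encode.
compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
deflated = compressor.compress(authn_request.encode()) + compressor.flush()
encoded = base64.b64encode(deflated).decode()
query = "SAMLRequest=" + urllib.parse.quote(encoded)

# The identity provider reverses the steps to recover the request XML.
param = urllib.parse.unquote(query.split("=", 1)[1])
roundtrip = zlib.decompress(base64.b64decode(param), -15).decode()
print(roundtrip == authn_request)  # True
```

Libraries like SustainSys perform this encoding (plus signing) for you; the sketch only shows the wire format.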

SustainSys is an open-source solution whose Saml2 libraries add SAML2P support to ASP.NET websites and serve as the SAML2 Service Provider (SP). For the proof of concept, I used the SustainSys Saml2 stub identity provider to test the SAML service provider. SustainSys also has sample service provider implementations built against the stub.

Implementation steps:

  • Start with an ASP.NET MVC application.
  • Add NuGet packages for the OWIN middleware and the SustainSys Saml2 libraries to the project (Figure 2).
  • Modify Startup.cs (partial classes) to build the SAML request and set all authentication types, such as cookies, default sign-in, and Saml2 (Listing 2).
  • In the CreateSaml2Options and CreateSPOptions methods, the SAML request options are built with the private and public certificates, the federation SAML Identity Provider URL, and so on.
  • The service provider establishes the connection to the identity provider on startup and is ready to listen for client requests.
  • Cookie authentication is set, the default authentication type is "Application," and the SAML authentication request is set by forming the SAML request.
  • When the SAML request options are set, instantiate the Identity Provider with its URL and the options, and set Federation to true. The Service Provider is instantiated with the SAML request options for the SAML identity provider. When the user signs in, the OWIN middleware issues a challenge to the Identity Provider and gets the SAML response and claims/assertions back to the service provider.
  • The OWIN middleware issues the challenge to the SAML Identity Provider with a callback method (ExternalLoginCallback(…)). The identity provider returns to that callback method after authenticating the user (Listing 3).
  • AuthenticateAsync returns the claims from the Identity Provider, and the user is authenticated at this point. The application can use the claims to authorize the user to the application.
  • No additional web configuration is needed for SAML Identity Provider communication, but the application config values can be persisted in web.config (Listing 4).

Figure 2: OWIN Middleware NuGet Packages

Listing 1:  Startup.cs (Partial)

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Claims_MVC_SAML_OWIN_SustainSys.Startup))]

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Listing 2: Startup.cs (Partial)

using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Owin;
using Sustainsys.Saml2;
using Sustainsys.Saml2.Configuration;
using Sustainsys.Saml2.Metadata;
using Sustainsys.Saml2.Owin;
using Sustainsys.Saml2.WebSso;
using System;
using System.Configuration;
using System.Globalization;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {            
            // Enable Application Sign In Cookie
            var cookieOptions = new CookieAuthenticationOptions
            {
                LoginPath = new PathString("/Account/Login"),
                AuthenticationType = "Application",
                AuthenticationMode = AuthenticationMode.Passive
            };

            app.UseCookieAuthentication(cookieOptions);

            app.SetDefaultSignInAsAuthenticationType(cookieOptions.AuthenticationType);

            app.UseSaml2Authentication(CreateSaml2Options());
        }

        private static Saml2AuthenticationOptions CreateSaml2Options()
        {
            string samlIdpUrl = ConfigurationManager.AppSettings["SAML_IDP_URL"];
            string x509FileNamePath = ConfigurationManager.AppSettings["x509_File_Path"];

            var spOptions = CreateSPOptions();
            var Saml2Options = new Saml2AuthenticationOptions(false)
            {
                SPOptions = spOptions
            };

            var idp = new IdentityProvider(new EntityId(samlIdpUrl + "Metadata"), spOptions)
            {
                AllowUnsolicitedAuthnResponse = true,
                Binding = Saml2BindingType.HttpRedirect,
                SingleSignOnServiceUrl = new Uri(samlIdpUrl)
            };

            idp.SigningKeys.AddConfiguredKey(
                new X509Certificate2(HostingEnvironment.MapPath(x509FileNamePath)));

            Saml2Options.IdentityProviders.Add(idp);
            new Federation(samlIdpUrl + "Federation", true, Saml2Options);

            return Saml2Options;
        }

        private static SPOptions CreateSPOptions()
        {
            string entityID = ConfigurationManager.AppSettings["Entity_ID"];
            string serviceProviderReturnUrl = ConfigurationManager.AppSettings["ServiceProvider_Return_URL"];
            string pfxFilePath = ConfigurationManager.AppSettings["Private_Key_File_Path"];
            string samlIdpOrgName = ConfigurationManager.AppSettings["SAML_IDP_Org_Name"];
            string samlIdpOrgDisplayName = ConfigurationManager.AppSettings["SAML_IDP_Org_Display_Name"];

            var swedish = CultureInfo.GetCultureInfo("sv-se");
            var organization = new Organization();
            organization.Names.Add(new LocalizedName(samlIdpOrgName, swedish));
            organization.DisplayNames.Add(new LocalizedName(samlIdpOrgDisplayName, swedish));
            organization.Urls.Add(new LocalizedUri(new Uri("http://www.Sustainsys.se"), swedish));

            var spOptions = new SPOptions
            {
                EntityId = new EntityId(entityID),
                ReturnUrl = new Uri(serviceProviderReturnUrl),
                Organization = organization
            };
        
            var attributeConsumingService = new AttributeConsumingService("Saml2")
            {
                IsDefault = true,
            };

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("urn:someName")
                {
                    FriendlyName = "Some Name",
                    IsRequired = true,
                    NameFormat = RequestedAttribute.AttributeNameFormatUri
                });

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("Minimal"));

            spOptions.AttributeConsumingServices.Add(attributeConsumingService);

            spOptions.ServiceCertificates.Add(new X509Certificate2(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase + pfxFilePath));

            return spOptions;
        }
    }
}

Listing 3: AccountController.cs

using Claims_MVC_SAML_OWIN_SustainSys.Models;
using Microsoft.Owin.Security;
using System.Security.Claims;
using System.Text;
using System.Web;
using System.Web.Mvc;

namespace Claims_MVC_SAML_OWIN_SustainSys.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        public AccountController()
        {
        }

        [AllowAnonymous]
        public ActionResult Login(string returnUrl)
        {
            ViewBag.ReturnUrl = returnUrl;
            return View();
        }

        //
        // POST: /Account/ExternalLogin
        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public ActionResult ExternalLogin(string provider, string returnUrl)
        {
            // Request a redirect to the external login provider
            return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl }));
        }

        // GET: /Account/ExternalLoginCallback
        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var loginInfo = AuthenticationManager.AuthenticateAsync("Application").Result;
            if (loginInfo == null)
            {
                return RedirectToAction("/Login");
            }

            //Loop through to get claims for logged in user
            StringBuilder sb = new StringBuilder();
            foreach (Claim cl in loginInfo.Identity.Claims)
            {
                sb.AppendLine("Issuer: " + cl.Issuer);
                sb.AppendLine("Subject: " + cl.Subject.Name);
                sb.AppendLine("Type: " + cl.Type);
                sb.AppendLine("Value: " + cl.Value);
                sb.AppendLine();
            }
            ViewBag.CurrentUserClaims = sb.ToString();
            
            //ASP.NET ClaimsPrincipal is empty as Identity returned from AuthenticateAsync should be cast to IPrincipal
            //var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;
            //var claims = identity.Claims;
            //string nameClaimValue = User.Identity.Name;
            //IEnumerable<Claim> claimss = ClaimsPrincipal.Current.Claims;
          
            return View("Login", new ExternalLoginConfirmationViewModel { Email = loginInfo.Identity.Name });
        }

        // Used for XSRF protection when adding external logins
        private const string XsrfKey = "XsrfId";

        private IAuthenticationManager AuthenticationManager
        {
            get
            {
                return HttpContext.GetOwinContext().Authentication;
            }
        }
        internal class ChallengeResult : HttpUnauthorizedResult
        {
            public ChallengeResult(string provider, string redirectUri)
                : this(provider, redirectUri, null)
            {
            }

            public ChallengeResult(string provider, string redirectUri, string userId)
            {
                LoginProvider = provider;
                RedirectUri = redirectUri;
                UserId = userId;
            }

            public string LoginProvider { get; set; }
            public string RedirectUri { get; set; }
            public string UserId { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                var properties = new AuthenticationProperties { RedirectUri = RedirectUri };
                if (UserId != null)
                {
                    properties.Dictionary[XsrfKey] = UserId;
                }
                context.HttpContext.GetOwinContext().Authentication.Challenge(properties, LoginProvider);
            }
        }
    }
}

Listing 4: Web.Config

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  https://go.microsoft.com/fwlink/?LinkId=301880
  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SAML_IDP_URL" value="http://localhost:52071/" />
    <add key="x509_File_Path" value="~/App_Data/stubidp.sustainsys.com.cer"/>
    <add key="Private_Key_File_Path" value="/App_Data/Sustainsys.Saml2.Tests.pfx"/>
    <add key="Entity_ID" value="http://localhost:57234/Saml2"/>
    <add key="ServiceProvider_Return_URL" value="http://localhost:57234/Account/ExternalLoginCallback"/>
    <add key="SAML_IDP_Org_Name" value="Sustainsys"/>
    <add key="SAML_IDP_Org_Display_Name" value="Sustainsys AB"/>
  </appSettings>
</configuration>

Claims returned from the identity provider to the service provider:

Claims returned from the identity provider to service provider

Additional References

AIS Gets Connection of DoD DISA Cloud Access Point at Impact Level 5

Getting the DoD to the Cloud

Our team was able to accomplish the near-impossible. We connected to the DoD DISA Cloud Access Point at Impact Level 5, meaning our customer can now connect to their Azure subscription and store any unclassified data they want on it.

About the Project

The project started in July 2017 with the goal of connecting an Azure SharePoint deployment to the DoD NIPRNet at Impact Level 5. Throughout the process, the governance and rules of engagement were a moving target, presenting challenges at every turn.

Thanks to the tenacity and diligence of the team, we successfully achieved connection to the Cloud Access Point (CAP) on September 6th, 2018. This was a multi-region, always-on SharePoint IaaS deployment with two connections, and it involved completing all required documentation for the DISA Connection (SNAP) process.

We are now moving towards the first Azure SharePoint Impact Level 5 production workload in the DoD, so be sure to stay tuned for more updates.

A Repeatable Process for Government Cloud Adoption

Azure Government was the first hyperscale commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency, and this was the first public cloud connection on Azure in the DoD 4th Estate.

With fully scripted, repeatable cloud deployments, including Cloud Access Point connection requirements, we can now get government agencies to the cloud faster and more securely than ever before.

We work with fully integrated SecDevOps processes and can leverage Microsoft's Azure Security Team for assistance in identifying applicable security controls, including inherited, shared, and customer-required controls.

See how you can make the cloud work for you. Contact AIS today to start the conversation, or learn more about our enterprise cloud solutions.
