We hear an awful lot about Zero Trust security these days. For some, Zero Trust may be a relatively new concept; for others, it is a term that sounds simple enough but can come at a high cost.

Today’s thought leadership points to Zero Trust as one of the top methods for protecting an organization’s identities, assets, and data. But while there is a lot of information, making sense of it all may be a different story.

That is why I caught up with Derek Morgan, a Senior Cloud Solutions Architect with Applied Information Sciences. Morgan has been a vocal advocate of Zero Trust and authored a fascinating piece on the topic. I jumped on a call with Morgan to better understand what Zero Trust means and how close we are to a passwordless future.

[Our conversation has been edited for clarity and length]

Brennen Schmidt: How about starting with the basics? In its simplest terms, how would you describe Zero Trust?

Derek Morgan: It starts with understanding the three core concepts of Zero Trust – verify explicitly, use least-privileged access, and always assume breach.

Verify explicitly is about understanding how you authenticate users, how you authorize them, and how you can identify individual users. Identities are highly targeted assets within organizations. It is essential to ensure that when you verify those identities, you use a strong authentication method, such as a second factor, commonly known as Multi-Factor Authentication (MFA).

Second, it is essential to understand the concept of least-privileged access for all identities, both privileged and non-privileged. For example, say you want to access a particular resource. The organization should be able to define the permissions or roles necessary to access that specific resource at a given time. With the overall focus on content, data, and applications, an identity governance framework can help organizations establish the correct permissions and roles that enable identities to access resources.

Finally, we come to the concept of always assuming a breach. The reality is that endpoints will not reach a 100% hardened state. It may be more realistic to think endpoints may reach a threshold of 70% while still making it an organizational goal to reach 90%. This situation leaves devices vulnerable to various attack vectors where malicious actors can access an endpoint or an identity at any given moment. Always assuming a breach means security teams are validating whether an event or incident has occurred through manual intervention or automation.

Schmidt: Let us dive deeper into the concept of identity state that you just touched on with always assuming breach. What does this mean?

Morgan: In other words, identity state helps security practitioners understand and confirm if there are any user or sign-in risks. If there is a risk, the user would no longer have access to the resource.

For example, suppose a device’s threat level is high because malware has been detected. Always assuming breach means the device would be unable to access specific resources. Another example is a security practitioner receiving an alert for an identity logging in to Microsoft 365 from Los Angeles, California, and, six minutes later, that same identity logging in to Microsoft 365 from Moscow, Russia. Always assuming a breach means specific security measures will take place, such as prompting for Multi-Factor Authentication and resetting the identity’s password.

In short, organizations of all sizes should focus on the six pillars of zero trust: identities, devices, data, apps, infrastructure, and networks. When it comes to passwordless, the primary focus should be the first and most critical pillar: identity.

Schmidt: What role does Zero Trust play in the shift toward going “passwordless”? Are we at a point where remembering all those passwords is no longer necessary?

Morgan: When it comes to going passwordless, it is vital to think back to the first and critical pillar: identity. Understanding just how essential identity is can help us understand just how valuable it is for threat actors to exploit.

We have seen many sophisticated examples of how malicious actors have exploited users. This includes extracting their password using social engineering techniques. While complex passwords and the use of password managers may be viable solutions, we see they cannot eliminate the threat of exploitation entirely. Overall, technology cannot resolve the first attack vector for identity exploitation – the human being.

Securing identity comes down to minimizing the attack surface, or the plane of attack. Passwords are vulnerable to attacks such as brute force, dictionary attacks, phishing campaigns, or a combination of these. Commonly used passwords make it easy for threat actors to guess a password. However common or complex, an exploited password for a privileged or non-privileged identity can become a financial nightmare for any organization.

Schmidt: What kind of tools are in place to help support passwordless?

Morgan: Organizations using Microsoft 365 can move toward passwordless by leveraging the Microsoft Authenticator app. The app is a secure way of enabling users to authenticate their identity. The app presents a second-factor authentication on the user’s device and can use technologies like Apple Face ID or Android biometrics to help prove the user is who they say they are without needing a password.

Another example is logging in to a Windows 10 or 11 endpoint via Windows Hello for Business. The beauty of Windows Hello for Business is that it leverages biometrics or PIN to enable access to the endpoint without requiring the user to provide a password.

Another option is to leverage a FIDO2 security key. This option lets users, for example, use a web browser to log in to their email client by entering their email address into the web form. From there, the user is prompted to use their physical FIDO2 security key to complete the authentication process.

Yubico’s YubiKey is an example of this FIDO2 technology. Based on what the organization’s IT department defines as the requirement, a user may be prompted to insert their physical security key into the device, then use their thumbprint as a biometric. The user is authenticated only once the IT organization’s requirements have been met.

Schmidt: What kind of business value can the move toward passwordless offer to large and small organizations?

Morgan: By definition, passwordless leverages authentication methods that are more difficult to exploit. Making identities harder to exploit by enforcing second-factor authentication can come with significant IT and organizational costs.

But we are seeing many examples in IT service management where the use of passwordless has significantly reduced the time and resources required for resetting passwords. In addition, we also see cost savings in the form of reductions in security incidents, third-party forensics investigations, and insurance claims for business operational damages.

Schmidt: While this sounds like a great business strategy under ideal circumstances, it also sounds like a rather large and potentially expensive undertaking. What are some simple first steps that IT teams might present to their leaders when considering this path?

Morgan: Taking the fast route while trying to encompass this entire strategy is unrealistic. Rolling out Zero Trust and passwordless must take place in steps over time.

Leaders need to ask themselves some key questions: What kind of infrastructure do I have in place now? What kinds of third-party solutions and applications are supporting the organization? Where am I using a password?

Organizational leaders will want to pay specific attention to legacy apps and system support, given they may not fit into their organization’s passwordless strategy. Another critical step is to understand the user experience and what the customer journey looks like.

Going passwordless might make sense from a security perspective, but it can be highly disruptive to end users. Change management must be considered when evaluating a chosen solution. This includes thinking about the various devices end users might have grown accustomed to using, ranging from desktops to mobile devices.

Schmidt: Earlier, you mentioned the importance of considering third-party providers. Why should this be a key focus for organizations thinking about going passwordless?

Morgan: It all comes down to creating a password-replacement offering, which includes developing a deeper understanding of the full impact of going passwordless. One tip I would offer business and IT leaders is to think about the core problem they are trying to solve, supported by clearly defined requirements.

Compatibility with the identity provider is only part of the puzzle regarding infrastructure, network, and applications. Decision-makers should also consider how users and customers might be leveraging Software-as-a-Service (SaaS) solutions, such as Salesforce, ADP, or Workday.

Focusing on the user can help organizations choose the best approaches to either phase out an existing legacy identity provider, migrate to a new one, or explore a hybrid approach to support a mix of application requirements that require passwords.

Schmidt: I want to channel an organization that might have yet to fully shift to the cloud. What potential pitfalls might someone in business or technology leadership wish to consider if they still need to support legacy applications?

Morgan: There are cases where a legacy system may not support a passwordless strategy. Take Windows Server 2008 or some other operating system that is no longer supported. Creating an inventory of applications can help organizations identify technologies that fall into this category.

From there, the organization is in a much better place to draft a plan with a future timeline. It is essential to accept that some legacy systems may fall outside the scope of a passwordless strategy.

In either case, leaders should understand that a passwordless strategy will take time. Sorting through the challenges of using a mix of modern and legacy applications and systems could take months, if not years, in the implementation journey.

Schmidt: On that note, I would imagine these modern and legacy systems could span the entirety of the organization, whether it is Finance, HR, or IT. Suppose you are on the technical side and want to broach this conversation with your colleagues across these different groups. What kind of advice can you share that might work well for them to at least get the conversation started?

Morgan: When it comes to engaging senior leadership and defining business units, groups, or functional groups, it all starts with the conversation. Set time aside to sit down with them to understand exactly what their business does and unpack their unique needs.

While technology is the conversation’s focus, remember that this conversation’s success lies in creating a two-way relationship. Get the conversation going by leaning in to understand the business challenges they might be experiencing.

It usually becomes clear early on in these conversations that both the business side of things and the technology provider need a full understanding of the application. Everyone might know what the technology does, but it is only from a single perspective.

It is why zooming out to focus on the bigger picture is so important to shape a meaningful conversation. Those in technical roles might want to bring an architectural design to the table to help articulate where a particular application fits in the organization’s IT and operations model.

Be careful not to focus too much on the technology, though. It is about understanding how users access systems today so you can make things easier for them while supporting data safeguards that keep their information safe.

It all comes down to establishing and maintaining the relationship with your stakeholders. I usually see things fall through the cracks when the dialogue starts breaking down. Focus on maintaining that two-way relationship and understanding exactly how the application works. You will soon be on your way to crafting and executing a user-centric passwordless strategy.

In today’s digital world, passwords have become a universal language to access applications and devices. Now, many organizations are looking to employ a password-less strategy.

As I wrote in a previous blog about protecting the user identity and data with Zero Trust and Microsoft Security, let us start with the basics and recognize the number one asset under attack: the identity. Today, even as organizations continue to plan and strategize the adoption of multi-factor authentication, users continue to authenticate with one-factor authentication via passwords. In some organizations, password policies remain relaxed for the sake of user experience, especially when logging in to a Windows 10/11 device, Microsoft 365, a third-party cloud app, or a legacy/line-of-business app leveraging Azure Active Directory or Active Directory Domain Services. While a relaxed policy allows a more accessible user experience, it also gives malicious actors an opening to deploy simple attack strategies for compromising identities, including but not limited to brute-force attacks and dictionary attacks.

While some systems have built-in security capabilities to prevent simple attacks, let us not forget the sophisticated methods for compromising an identity: social engineering. With social engineering, malicious actors draw out the user with psychological techniques to extract the necessary data and generate likely passwords with moderate to high accuracy; this includes baiting, spear phishing, scareware, and pretexting. Of course, a simple password is only scratching the surface. Consider organizations with end users who complain about complex password requirements and refuse to use the systems or apps, or users who reuse the same password across all systems and apps. The question becomes, “How do you protect the identity with a strengthened security foundation and an optimal user experience?” The answer? Eliminate passwords by defining a password-less strategy.

Password-less login makes it easier for users to sign in securely. It supports a Zero Trust security model, where every access request is authenticated independently of any device state or network location. In addition to simplifying the user experience, password-less login makes it easier for IT teams to implement modern identity solutions such as Azure Active Directory (Azure AD).

How do you define a password-less strategy?

As implied, “password-less” indicates methods by which users can log in to respective systems and apps without needing a password. While a password-less strategy may sound impossible, culture and adoption are the primary factors. Imagine the typical user in your organization logging into their Windows 10/11 device with biometrics and/or PIN, already establishing two-factor authentication, and seamlessly logging in to Outlook and Microsoft Teams. Also, imagine the same user using a personal device to log in to Microsoft 365 with only the Microsoft Authenticator app. Finally, consider the field user who needs to log in to an enterprise app that leverages Azure AD as an identity provider and requires two-factor authentication; see the previous example with Microsoft 365!

With that, here are some questions to consider when defining a password-less strategy:

  1. What is the business culture when it comes to leveraging passwords?
  2. How many passwords does a user need to remember?
  3. What is the average number of enterprise apps the typical end-user logs in to daily?
  4. Does our organization already leverage devices with biometrics and/or PIN to log in?
  5. How many incidents and requests does our help desk receive for resetting passwords or unlocking accounts?
  6. How many incidents does our cybersecurity team receive for compromised identities due to passwords?

How You Can Build a Better Security Strategy with Password-less Authentication

In brief, Microsoft has defined a password-less strategy for all organizations, from small and medium-sized businesses to enterprises, summarized in four steps:

  1. Developing password-replacement offerings
  2. Reducing the user-visible password surface area
  3. Transitioning into your password-less deployment
  4. Eliminating passwords from your directory services, such as Active Directory Domain Services

Developing password-replacement offerings

The first step in your password-less strategy journey is determining the best replacement offerings for your organization. Consider the technologies your end users are leveraging today: Windows 10 and 11, Microsoft 365, Azure Virtual Desktop, cloud apps (e.g., Box), and any enterprise apps deployed on-premises. Also, consider what will be convenient for your end users when logging in to systems and apps that leverage Azure AD or Active Directory for authentication, as well as apps that rely on forms-based passwords.

Let us take Windows 10 and 11 as an example. The password replacement here is Windows Hello for Business, utilizing biometrics (e.g., facial recognition or fingerprint scanning) or PIN. In addition, you can couple these features with a Bluetooth device for two-factor authentication.

Now, let us consider Microsoft 365. Suppose your organization already adopted multi-factor authentication via Microsoft Authenticator. You are just one step closer to enabling password-less authentication and logging into Microsoft 365 from a personal device without a password!
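
If you want to gauge how far along your users already are, the Microsoft Graph PowerShell SDK can inventory the authentication methods each user has registered. A minimal sketch, assuming the SDK is installed; the user principal name below is a placeholder:

# Requires the Microsoft.Graph PowerShell SDK (Install-Module Microsoft.Graph)
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

# List the authentication methods registered for one user;
# the UPN below is a placeholder - substitute one of your own users
Get-MgUserAuthenticationMethod -UserId "user@contoso.com" |
    Select-Object Id, AdditionalProperties

Users who already show a Microsoft Authenticator or Windows Hello registration are natural candidates for an early pilot group.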

While developing your password-replacement offerings, this is the best opportunity for your organization to structure the journey by identifying the different personas throughout your organizational departments, including IT. Another factor is identifying all applications and services that leverage a password. The number of personas, departments, and apps in your organization will determine how long it will take to establish a solid foundation for your password-less journey. Still, the best approach is to begin with a pilot!

Reducing the user-visible password surface area

As you progress through the pilot and confirm the feasibility of the password-less technologies, the next step is a deep engagement with the personas and departments on their usage of passwords and their comfort level with eliminating passwords for their apps and services. Once the deep dive is complete and you understand the overall use and frequency of passwords for each app and service, the journey continues with developing a mitigation plan. The easy part is the apps and services already leveraging Active Directory and Azure AD for authentication; the challenge is determining the level of effort for applications that require custom development or vendor support for additional authentication methods. Once your mitigation plan is in place for all apps and services, re-engage with the pilot group and remove all password capabilities, such as enforcing Windows Hello for Business or removing the password credential provider.
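
As one illustration of that last step, the password credential provider on Windows can be hidden through the “Exclude credential providers” policy, which can be set via the registry. A minimal sketch, assuming the commonly documented GUID for the built-in password provider; validate in a test ring before enforcing broadly:

# Sketch: hide the password credential provider from the Windows sign-in screen.
# The GUID below is the one commonly associated with the built-in password
# provider; verify it in your environment before rolling this out.
$policyPath = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System'
if (-not (Test-Path $policyPath)) { New-Item -Path $policyPath -Force | Out-Null }
New-ItemProperty -Path $policyPath -Name 'ExcludedCredentialProviders' `
    -Value '{60b78e88-ead8-445c-9cfd-0b87f74ea6cd}' -PropertyType String -Force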

Transitioning into your password-less deployment

If all went well with the pilot and user acceptance testing was successful, the next milestone is taking the remaining personas and departments, deploying the password-less technologies, and eliminating the password surface area. Of course, aside from technology, this milestone requires extensive organizational change management and end-user adoption. 

As you transition the personas and departments into the password-less space, there are essential items to consider: 

  • Organizational change management
  • End-user adoption
  • Awareness campaigns
  • Training sessions
  • Education material

The purpose is to establish a positive atmosphere that promotes the password-less journey and the benefits it brings.

During the transition, the IT organization will report and track all issues related to the password-less deployment, ensuring no gaps exist in the deployment and that every issue reaches resolution. After resolving issues and remediating gaps, the final step is configuring identities to disallow passwords and enforce the use of password-less technology.

Eliminating passwords from your directory services

As the transition to a password-less deployment nears completion, the final milestone is eliminating passwords from your directory services, such as Active Directory. Today, organizations can accomplish this by removing the remaining password surface area, enforcing the use of password-less technology, and randomizing passwords for all identities where possible.

Schedule your Free Cloud Security Assessment

Let our certified security experts help with your password-less journey. AIS is a Microsoft Gold Partner with mission-critical competencies and Advanced Specializations, including Cloud Security, Identity and Access Management, and Cloud Productivity. Begin your password-less journey today with an AIS Cloud Security Assessment to help identify and determine your roadmap, and accomplish your objectives in a reasonable, cost-effective, and secure manner.

Whether you start with a Cloud Security Assessment, or if you’re ready to engage a partner to begin your journey to a password-less space, contact us to learn more about how AIS experts can help you.

The Current Situation

Today, organizations continue the transition to cloud computing as part of their digital transformation journey, aiming to become highly productive organizations in their respective industries. While transitioning to cloud computing offers agility, scalability, cost-effectiveness, and performance, the challenge is strengthening the security foundation and reducing risk. While vendors such as Microsoft provide best-in-class measures and features for tightening an organization’s security foundation, the ultimate responsibility belongs to the organization itself. Of course, organizations take precautionary steps to protect device assets and data from malicious exfiltration and theft. Still, organizations tend to loosely protect the number one crucial asset: the user’s identity. As reported by Microsoft, cybersecurity experts detected and reported 15 million password-based attacks alone. This is a signal that the user’s password is the primary target! So, the big question is: how does an organization protect the user identity, device assets, and corporate data, preventing compromises and breaches? The answer: Zero Trust.

What is Zero Trust?

Top cybersecurity experts and leaders changed the game with the introduction and adoption of Zero Trust, now regarded as the top method for protecting identities, assets, data, and the overall organization. First, let’s start with the basics: what is Zero Trust?

Microsoft clearly defines Zero Trust by the following three objectives:

  • Verify explicitly
  • Use least-privileged access
  • Assume breach

To summarize, treat every request as if it came from an unknown network and always verify.

Zero Trust Foundation

Verify explicitly – the user identity is the key to accessing organizational data and assets. Therefore, the first step is to verify that the identity belongs to the user accessing the data. Azure AD supplements this concept with Conditional Access and security features such as multi-factor authentication.

To take it a step further, ask yourself these questions:

  • Is the user accessing the data from the correct device?
  • Is the user in the correct location?
  • Is the user now at high risk?
  • Is the device at high exposure due to an existing cybersecurity incident? Is the device compliant?

Use least-privileged access – not a new concept, but not a heavily practiced one. Specifically, in Microsoft 365, organizations utilize privileged identities for administrative functions and operations. However, many organizations do not separate privileged access from regular accounts, or they leave a permanent assignment for privileged access even when the identities are separate. Also, some organizations allow users to consent to apps accessing Microsoft 365 data. Azure AD offers features to reduce the risk revolving around privileged access and to track identities: for example, Azure AD Privileged Identity Management, Microsoft Defender for Identity, Conditional Access, Azure AD Identity Governance, Azure AD Identity Protection, Microsoft Information Protection, and more.

Assume breach – as any cybersecurity expert, vendor, or leader will tell you, an organization will never reach a 100% hardened state, as malicious actors and attackers continue to develop sophisticated attacks. Therefore, the organization must adopt the “assume breach” mindset and always defend itself. Microsoft Defender for Office 365, Microsoft Defender for Endpoint, Microsoft Defender for Identity, and Microsoft Cloud App Security protect the organization from modern attacks, allowing it to become proactive and leverage advanced hunting features to prevent future malicious attacks. Other methods of adopting the “assume breach” mindset are reviewing user sign-in risk, device health risk and exposure, threat and vulnerability management, device and user identity hygiene, and more.
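
Reviewing user sign-in risk, for example, can be scripted rather than checked by hand in the portal. A minimal sketch using the Microsoft Graph PowerShell SDK, assuming Azure AD Identity Protection (Azure AD Premium P2) is licensed:

# Requires the Microsoft.Graph PowerShell SDK and Azure AD Identity Protection
Connect-MgGraph -Scopes "IdentityRiskyUser.Read.All"

# List users currently flagged at high risk
Get-MgRiskyUser -Filter "riskLevel eq 'high'" |
    Select-Object UserPrincipalName, RiskLevel, RiskState, RiskLastUpdatedDateTime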

Why Microsoft Security for Zero Trust Foundation?

In brief, Microsoft defines security under four pillars: protect everything, simplify the complex, catch what others miss, and grow your future.

The four pillars outline the overall position of Microsoft Security and the value it demonstrates for all customers: simplifying the Zero Trust foundation. In addition, Microsoft Security has proven its firm establishment in the cybersecurity field by being named a Leader in seven Forrester Wave reports and five Gartner Magic Quadrant reports.

Microsoft Security continues to develop an end-to-end approach, integrating with 53 essential categories across the cybersecurity landscape. It also demonstrates cost savings, with Microsoft security solutions covered under the Microsoft 365 E3 and Microsoft 365 E5 licensing tiers, compared to cybersecurity competitors such as Symantec, Cisco, and CrowdStrike.

How Can AIS help?

As a Microsoft Gold Partner, AIS has the expertise and skills to assess, guide, and deploy a Zero Trust foundation leveraging solutions from the Microsoft Security portfolio, such as Microsoft Defender for Endpoint, Microsoft Defender for Office 365, Microsoft Defender for Identity, Microsoft Cloud App Security, Microsoft Information Protection, Azure AD Privileged Identity Management, and more. Below is the Zero Trust Guidance Center from Microsoft. Consider AIS for assistance on this journey toward an optimal Zero Trust foundation for your organization!

Zero Trust Guidance Center: Zero Trust Guidance Center | Microsoft Docs

A Level of Certification to Consider

Obtaining a Security+ certification opens many opportunities for individuals pursuing a career in information technology. A good portion of DoD jobs require this level of certification for maintaining the secure systems used daily. The certification ensures that minimum security requirements are understood. The exam is difficult because it covers an extensive range of topics under information technology security.

There are multiple ways to study for the Security+ exam. This article demonstrates one approach that has proven successful at achieving a passing score on the first attempt. When this article was written, SY0-501 was the current exam offered.

What You Need

Below are suggested materials to guide you towards the exam, with specific examples outlined in the subsequent sections:

  • Books (there are two specific titles mentioned in this blog)
  • Additional subscription study materials (not required, but suggested)
  • Friends to study with (or to keep you awake when you are supposed to be studying)
  • A well-rested mind
  • One month to prepare (suggested if you are new to the material)

Step 1: Read the Fun Manual (RTFM)

First of all, the best way to get a good grasp of the material is to take a class offered by a qualified and licensed entity. These courses usually last five days and follow a book, or multiple books, as education material. Attendance is either via an online course taken at your leisure or in person at a facility. If you can have a company host a professional to teach an on-site/virtual class, that is the best way to get involved in a course. You can purchase two useful books at many major retailers that sell CompTIA course material:

  • CompTIA Security+ Get Certified Get Ahead: SY0-501 Study Guide by Darril Gibson
  • CompTIA Security+ SY0-501 Cert Guide by David L. Prowse.

Reading these books is recommended, regardless of whether you take the course. Allow plenty of time to get through both of them. Both cover topics at length on different subjects and give you an excellent grasp of all the exam material. Ensure the books you purchase are for the current exam offered; otherwise, you may miss information that appears on the exam. If time is of the essence, read the book by Darril Gibson first. Then use the David Prowse book and skim through the sections that expand a bit more on topics not covered in the first book.

Step 2: Online Videos (they are free!)

An excellent online resource is Professor Messer’s CompTIA SY0-501 Security+ Course. The videos are a completely free way to review a lot of information quickly after reading the books. I’d recommend doing it in that order, though you can do the reverse if you like. Listening to the videos as you are driving or going about your day after reading the material makes it easier to retain the information. There are also other study materials offered for sale to help aid in passing the exam. The videos also cover information that is not found in the recommended books.

Step 3: Get Certified and Get Ahead Study Material

If you can afford to do so, purchase the full study guide at GCGAPremiumPass. There is a package that is great to use after completing the books and videos. The study package follows Darril Gibson’s book recommended above. A package is offered that contains the book and the study guide to save some money. This is the recommended way to get both if you have not taken a course that includes the book in the purchase. The full study guide includes:

  • Multiple-choice Security+ practice test questions
  • Performance-based questions
  • Audio from the Study Guide
  • Online flashcards

The audio “Remember This” material is one of the best things you can use to retain the information in this book’s chapters. Reading a chapter and listening to the accompanying audio file will help immensely. If there is something in the audio file that you do not understand, go back and read the section in question. Then listen to the “Remember This” audio file again. Each of these files is ten minutes or less. Listening to them while you are driving or folding laundry will help you retain what you have read from each chapter. These short audio clips are handy for keeping the information fresh in your mind. Using the flashcards in this manner will also help you remember specific details like ports and acronyms useful for the test. Acronyms are the most significant thing to commit to memory. The exam will not spell these out for you. If you do not know all of them, you will spend a lot of your time on questions trying to figure out their context.

The practice questions are a great way to prepare for the exam, with one caveat: do not just memorize where the answer sits in the list. The order of the answers will change between chapter exams and full exams. Taking these in chapter order, then taking the complete exams, is the best course of action. This helps keep your brain from tricking you into choosing the answer’s letter rather than identifying the correct answer by knowledge. Please note that no matter how many times you take these practice exams, the real exam will not have the same questions.

Step 4: Test Day

Prepare yourself by taking some time off before the exam to let the information sink in. Cramming right up until test time will only muddle the information you need to know. A fresh mind and a calm attitude will go a long way. The exam is timed, so you will see the clock the whole way through; try not to fixate on it, but do not ignore it either. There are scenario-based questions included in the exam. These questions take significant setup time to figure out an answer. Building networks or configuring access points will be common questions. The rest will be multiple choice. The best plan of action is to answer all the questions you know as quickly as possible, allowing time to go back through and think about the questions you are not sure of. Usually, the first answer you put down will be the correct one if you studied enough. Spending too much time on a question will lead you to second-guess yourself, and you may settle on the wrong answer. There is an option to flag questions you are unsure of so that you can return to them later. The best advice here is to answer the question you are unsure of, flag it, and move on. This way, if you run out of time, the question is answered. It may be wrong, but that is better than leaving a question empty.

After the exam clock runs out, a survey will be presented for you to take. You will not see your score until the survey is complete, so don’t worry that some technical glitch is happening. If you pass the test, a certificate will be mailed to you, which you can then present to your organization. If not, you will be able to retake the exam. It is suggested that you give yourself some more study time and focus on the areas where the score summary shows your knowledge is not strong.

Conclusion

The suggestions stated here are just that: suggestions that have worked for some people. Some people require less time to prepare and study, and some require a lot. If you put enough work into preparing for the exam and keep a positive attitude about it, you will do great. Don’t worry if you do not pass the first time. The exam is challenging to prepare for in a limited amount of time. No matter how much preparation you have put in, there will still be questions that you feel you have not covered. The exam is tailored that way to collect statistics and catch cheaters. Getting together with a group of people for exam prep is a great way to study. Instructors can also be hired to teach you the exam’s ins and outs and the history of the questions presented. Good luck!

When building and managing an Azure environment, controlling network traffic is a core operations responsibility. The primary Azure platform resource for implementing network traffic control is the Network Security Group (NSG). A Network Security Group allows you to define security rules, like firewall rules, that control traffic by specifying allowed and denied sources, destinations, ports, and protocols. Like all Azure resources, there are multiple options to manage NSGs, including the standard Azure management tools: the Azure Portal, scripts (PowerShell and CLI), APIs, and Azure Resource Manager (ARM) templates.

Managing NSG security rules using an ARM template can be challenging. Each security rule is defined using a large chunk of JSON, and many security rules may be required. The verbose JSON structure makes it difficult to see many rules at once or to visualize changes from version to version, encouraging team members to revert to the Azure Portal to view and edit rules. Why use the Azure Portal? It turns out the portal’s grid format for NSG Security Rules is comfortable for quickly viewing multiple rules and for making minor edits to individual rules.

Since the portal’s grid view was comfortable, the CSV file format seemed like the right idea, given its similarity to a grid. CSV files have a few pros:

  • Good viewers and editors including Excel and VS Code.
  • One vertically compact line for each security rule.
  • A vertically compact view that makes it easier to visually scan rules and to see the changes that are made from version to version when viewing differences.
  • Anyone who can edit a CSV can edit the NSG Security Rules, allowing a larger group of security rule editors.

NSG in JSON format

This is a simple example of the NSG Security Rule JSON. A rule like this can get much larger vertically when numerous ports and address prefixes are defined:

{
  "name": "I-L-All_HttpHttps-UI_Layer-T",
  "description": "Allow HTTP + HTTPS traffic inbound.",
  "priority": 1110,
  "access": "Allow",
  "direction": "Inbound",
  "protocol": "Tcp",
  "sourceAddressPrefix": "",
  "sourceAddressPrefixes": [
    "AzureLoadBalancer"
  ],
  "sourceApplicationSecurityGroups": null,
  "sourcePortRange": "*",
  "sourcePortRanges": null,
  "destinationAddressPrefix": "*",
  "destinationAddressPrefixes": null,
  "destinationApplicationSecurityGroups": null,
  "destinationPortRange": "",
  "destinationPortRanges": [
    "80",
    "443"
  ]
}

NSG in CSV Format

(Figure: the same NSG security rules in CSV format, shown in Excel and as plain text.)
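
To illustrate, the JSON rule above might collapse to a single CSV row along these lines (the exact column set here is assumed from the JSON fields; multi-value fields are pipe-delimited):

name,description,priority,access,direction,protocol,sourceAddressPrefixes,sourcePortRange,destinationAddressPrefix,destinationPortRanges
I-L-All_HttpHttps-UI_Layer-T,Allow HTTP + HTTPS traffic inbound.,1110,Allow,Inbound,Tcp,AzureLoadBalancer,*,*,80|443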

Converting Between CSV and JSON

The transition from CSV to JSON and from JSON back to CSV must be repeatable and simple. In this scenario, PowerShell scripts manage this process: Convert-NsgCsvToJson.ps1 and Convert-NsgJsonToCsv.ps1.

The Convert-NsgCsvToJson.ps1 script is straightforward and does the following (a simplified sketch follows the list):

  1. Read the source CSV file.
  2. Read the destination JSON file.
  3. Split multi-value fields into arrays based on the parameter CsvArraySeparator; the default is the pipe character ‘|’. In the CSV, fields like source and destination port ranges collapse multiple values into a single pipe-delimited field.
  4. Structure the CSV data into objects that match the ARM template NSG Security Rule JSON structure.
  5. Use a JsonFileType parameter to determine where in the destination JSON structure to place the security rules array. This allows placement of the security rules array into a parameter file, template file, or into an empty JSON file.
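
The full scripts live in the repository linked at the end of this post, but the heart of the CSV-to-JSON direction can be sketched in a few lines of PowerShell. This is a simplified illustration, not the production script; the file paths and column set are assumed from the examples above:

# Simplified sketch of the CSV-to-JSON conversion
$sep = '|'
$rules = Import-Csv -Path .\subnet-01-rules.csv | ForEach-Object {
    [ordered]@{
        name                     = $_.name
        description              = $_.description
        priority                 = [int]$_.priority
        access                   = $_.access
        direction                = $_.direction
        protocol                 = $_.protocol
        sourceAddressPrefixes    = @($_.sourceAddressPrefixes -split [regex]::Escape($sep))
        sourcePortRange          = $_.sourcePortRange
        destinationAddressPrefix = $_.destinationAddressPrefix
        destinationPortRanges    = @($_.destinationPortRanges -split [regex]::Escape($sep))
    }
}
# The real script places this array into a parameter, template, or empty JSON file
$rules | ConvertTo-Json -Depth 5 | Set-Content -Path .\subnet-01-rules.json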

A New Workflow

With PowerShell scripts, the new workflow for NSGs is:

  1. Create and edit NSG Security Rules in a CSV file – usually using Excel.
  2. Visually scan the CSV looking for obvious anomalies (Excel makes it easy to see when one rule stands out from the others, for example, when a value is in the wrong column).
  3. Execute the script: Convert-NsgCsvToJson.ps1 to convert the rules to the Json Structure and update the destination JSON file.
  4. Deploy the ARM Template and updated parameters file to a dev/test environment using standard deployment approaches such as the Azure CLI. This will fully validate the NSG Json prior to production deployment.
  5. Deploy to Production during a planned change window.

From JSON Back to CSV

At times, a team member may make a change in the portal, for example, during troubleshooting. Once an update is made in the portal, the changes must be transferred from Azure back to the code that defines this infrastructure. The CSV files are the canonical source, so there needs to be a process to return to CSV from JSON.

  1. To retrieve the NSG Security Rules from the portal, execute a CLI command to retrieve the NSG security rules and export them to a JSON file.
    az network nsg rule list --nsg-name subnet-01-nsg --resource-group net-rgp-01 | set-content subnet-01-export.json
  2. Execute the Convert-NsgJsonToCsv.ps1 script using the generated file as the input and the corresponding CSV file as the output (a simplified sketch follows).
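
A simplified sketch of the reverse conversion, again assuming the same pipe separator and a reduced column set:

# Simplified sketch of the JSON-to-CSV conversion
$sep = '|'
(Get-Content -Path .\subnet-01-export.json -Raw | ConvertFrom-Json) | ForEach-Object {
    [pscustomobject]@{
        name                  = $_.name
        priority              = $_.priority
        access                = $_.access
        direction             = $_.direction
        protocol              = $_.protocol
        sourceAddressPrefixes = ($_.sourceAddressPrefixes -join $sep)
        destinationPortRanges = ($_.destinationPortRanges -join $sep)
    }
} | Export-Csv -Path .\subnet-01-rules.csv -NoTypeInformation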

Constraints

The environment these scripts were built for may not match your own. This environment includes several constraints:

  • Azure Resource Manager Templates are the language for Azure Infrastructure as Code.
  • Manual steps are required: automated build and release pipelines are not yet available.
  • There is no guarantee that NSG security rules will not be modified in the Azure Portal, so a mechanism is required to synchronize the code with the environment.

Future Improvements

This solution represented a significant improvement for this team over managing NSG security rules directly in JSON format. As with every solution, there are ideas on how to improve. Here are a few that have come to mind:

  • Use CI/CD tools such as GitHub Actions to automatically execute the Convert-NsgCsvToJson.ps1 script when an NSG CSV file is committed.
  • Implement a release pipeline so that modified NSG CSV files trigger the conversion script, wait for approval to deploy, and deploy the ARM Template to the dev/test environment.
  • Add Pester tests to the PowerShell scripts.
  • Try this approach with other IaC languages such as Terraform.

Additional Notes

  • The example template has been dramatically simplified.
    • The production template also configures NSG Diagnostic Settings and NSG Flow Logs.
    • The production template builds all resource names based on several segments defined in a naming convention.
  • There are NSG Security Rules that are considered baseline rules that should be applied to every NSG. These rules are managed in a CSV file and placed in an array in the base template and not repeated in each parameter file. An example of this is a rule that allows all servers to contact the organization’s DNS servers.
  • Application Security Groups are used to group servers in the local VNET so that NSG Security Rules do not need to include IP addresses for servers contained in the VNET. The only IP address prefixes specified directly in our rules are from outside the current VNET. As with the NSGs, the ASGs defined in the template (baseline) and the parameters file (local) are combined and created during template deployment. Only the unique portion of the name is used to define the group and to specify rules; the remainder of the name is built during deployment. ASGs in Azure are currently only valid for the VNET where they are created, and only one ASG may be specified per security rule. This script creates all the ASGs defined in the template and parameters file.

Code

The code for these scripts including the conversion scripts and a sample ARM Template, ARM Template Parameters files, and matching NSG Security Rule CSV files is available on GitHub: https://github.com/matthew-dupre/azure-arm-nsg-via-csv

Introduction

PowerShell DSC is possibly one of the most potent configuration managers for Windows operating systems. Even with the increase in popularity of Ansible and SaltStack, PowerShell DSC remains supreme when enforcing the desired state on a Windows VM. Ansible itself has included the win_dsc module, which allows Ansible to run PowerShell DSC. In this blog post, we will dive deeper into one of PowerShell DSC’s most powerful aspects, 3rd Party Resources, and how they interact with Azure Automation.

3rd Party Resources are PowerShell DSC modules created by the community. Any PowerShell community member can create modules, and there are tons of modules out there to choose from. Modules are kept in repositories, the most well-known and default PowerShell repository being the PowerShell Gallery, run by Microsoft. This is a common repository for PowerShell modules that are deployed to the Gallery by the community. PowerShell modules in the PSGallery can be downloaded and installed via the PowerShellGet module.

As developers and infrastructure engineers, there are many different reasons to script various services you are creating. Often, instead of developing behavior or scripts from scratch, it is much easier to leverage the work that others have done to expedite a task’s completion. 3rd Party Modules allow for easily repeatable code that can become production-ready through collaboration.

Often, DSC configuration can become complicated. Engineers can be asked to do many things, from creating an Active Directory domain to configuring OMS solutions associated with a VM, and even interacting with non-native Azure products, such as Splunk.

These may all seem very daunting, but don’t fret! Members of the PowerShell community have dealt with these problems and many others, and often you will find third party modules to help do the work for you.

Here is an example of a Third Party Resource, named ActiveDirectoryDsc, which helps with the promotion, configuration, and management of Active Directory.

Azure Automation is a robust PaaS offering from Azure that allows for a cloud-based DSC pull server. Within Azure Automation, it is possible to add both custom modules that the user develops and third-party modules available in any hosted source.
⚠ It should be known that organizations in locked-down environments can manage their own repository of PowerShell modules, vetted by the respective InfoSec team. It is possible to deploy your own artifact repository using the Azure DevOps product shown here. It allows an internal team to deploy its own versions of packages, and you can use those as your URI references.
⚠ There are a few ways to upload modules to the Azure Portal natively. You can upload manually through the portal as shown here in this picture:

Uploading modules to the Azure Portal

However, being DevOps Engineers, we want to automate this process as much as possible. One way to do this is via ARM Templates, like the ones we used in the previous module.
Below is an example of how to add a 3rd party module to your Azure Automation Account via ARM Templates:

{
  "name": "[concat(parameters('automationAccountName'), '/', parameters('name'))]",
  "type": "Microsoft.Automation/automationAccounts/modules",
  "apiVersion": "2015-10-31",
  "location": "[resourceGroup().location]",
  "properties": {
    "isGlobal": false,
    "sizeInBytes": 0,
    "contentLink": {
      "uri": "uri to package"
    }
  }
}

If you are deploying from the PowerShellGallery, your Uri would look something like this:

"uri": "[concat('https://www.powershellgallery.com/api/v2/package/', parameters('name'), '/', parameters('version'))]"

Alternatively, you can script the import of modules using the New-AzAutomationModule cmdlet in a PowerShell script.
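
A minimal sketch of that scripted import; the resource group, account name, and module version here are placeholders:

# Requires the Az.Automation module (Install-Module Az.Automation)
$uri = 'https://www.powershellgallery.com/api/v2/package/ActiveDirectoryDsc/6.0.1'
New-AzAutomationModule `
    -ResourceGroupName 'rg-automation' `
    -AutomationAccountName 'aa-dsc-01' `
    -Name 'ActiveDirectoryDsc' `
    -ContentLinkUri $uri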

Oddly enough, there is sometimes some difficulty understanding the correct ContentUri to use in both the ARM and PowerShell cases. You can find the correct one by navigating to the right module in the PowerShell Gallery, adding /api/v2 to the URL, and replacing packages (plural) with package (singular).

Add the /api/v2 to a URL

Conclusion

3rd Party Modules are a great way for developers to speed up development and productivity. If you are inclined to help in the development of these modules, head over to GitHub and contribute!

Azure Kubernetes Service (AKS) is a Microsoft Azure-hosted offering that simplifies deploying and managing your Kubernetes clusters. There is much to be said about AKS and its abilities, but here I will discuss a crucial aspect of AKS and containers: security. Having a secure Kubernetes infrastructure is a must, and it can be challenging to find out where to start. I’ll break down best practices, including baseline security for clusters and pods and network hardening practices that you can apply to your own AKS environment, laying the foundation for a more secure container environment, including how to maintain updates.

Cluster and Pod Security

Let’s first look at some best practices for securing your cluster and pods using policies and initiatives. To get started, Azure has pre-defined policies that are AKS specific. These policies help improve the posture of your cluster and pods and allow additional control over things such as root privileges. A best practice Microsoft recommends is limiting the actions that containers can perform and avoiding root/privileged escalation. When the Azure Policy Add-on for AKS is enabled, it installs a managed instance of Gatekeeper. This instance handles enforcement and validation through a controller, which inspects each request when a resource is created or updated and validates it against your policies. Features such as these are ever-growing and can make creating a baseline easier.

Azure Policy also includes a feature called initiatives. Initiatives are collections of policies that align with organizational compliance goals. Currently, there are two built-in AKS initiatives: baseline and restricted. Both come with many policies that lock down items such as the host filesystem, networking, and ports. By combining initiatives and policies, you can tighten security and meet compliance goals in a more managed fashion.
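
For instance, enabling the Azure Policy Add-on on an existing cluster is a single CLI call; the cluster and resource group names here are placeholders:

# Enable the Azure Policy Add-on (installs the managed Gatekeeper instance)
az aks enable-addons --addons azure-policy --name aks-demo-01 --resource-group rg-aks-demo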

Another way to secure your cluster is to protect access to the Kubernetes API server. This is accomplished by integrating RBAC with Azure AD or other identity providers. This feature allows for granular access, similar to how you control access to your other Azure resources. The Kubernetes API is the single connection point for performing actions on a cluster. For this reason, it’s imperative to deploy logging/auditing and to enforce least-privileged access. The below diagram depicts this process:

Cluster and Pod Security

Reference: https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-cluster-security#secure-access-to-the-api-server-and-cluster-nodes
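
As a sketch of standing up that integration, a cluster can be created with AKS-managed Azure AD and Azure RBAC for Kubernetes authorization; the names and admin group object ID below are placeholders:

# Create an AKS cluster with Azure AD integration and Azure RBAC authorization
az aks create --name aks-demo-01 --resource-group rg-aks-demo `
    --enable-aad `
    --enable-azure-rbac `
    --aad-admin-group-object-ids '00000000-0000-0000-0000-000000000000'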

Network Security

Next, let’s look at network security and how it pertains to securing your environment. A first step is to apply network policies. Much like above, Azure has many built-in policies that assist with network hardening, such as a policy that only allows specific network traffic from authorized networks based on IP addresses or namespaces. It’s important to note that a network policy can only be chosen when the cluster is first created. You also have the option of ingress controllers bound to internal IP addresses, ensuring they can only be accessed from that internal network. These small steps narrow the attack surface of your cluster and tighten traffic flows.

The below diagram demonstrates using a Web Application Firewall (WAF) and an egress firewall to manage defined routing in and out of your AKS environment. Even more granular control is possible using network security groups, which allow only specific ports and protocols based on source/destination. By default, AKS creates subnet-level NSGs for your cluster. As you add services such as load balancers, port mappings, and ingress routes, it automatically modifies the NSGs. This ensures the correct traffic flow and makes change easier to manage. Overall, these effortless features and policies allow for a secure network posture.

Network Security Graphic

Reference: Microsoft Documentation
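
Two of these controls from the CLI, with placeholder names: the network policy, which must be chosen at creation time, and authorized IP ranges for the API server, which can be applied later:

# Network policy can only be selected when the cluster is created
az aks create --name aks-demo-01 --resource-group rg-aks-demo --network-plugin azure --network-policy azure

# Restrict API server access to an authorized IP range
az aks update --name aks-demo-01 --resource-group rg-aks-demo --api-server-authorized-ip-ranges 203.0.113.0/24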

The Final Piece

The final piece of securing your AKS environment is staying current on new AKS features and bug fixes, specifically by upgrading the Kubernetes version in your cluster. These upgrades often include security fixes, and those fixes are paramount for staying ahead of vulnerabilities that could leave you exposed. I won’t go too deep into best practices for Linux node updates or managing reboots; this link dives deeper into what Kured is and how it can be leveraged to process updates safely. There are many ways to foundationally secure your AKS clusters. I hope this article helps future implementations and the maintainability of your deployment.
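
As a closing practical note, checking for and applying a version upgrade is straightforward from the CLI; the cluster details and target version below are placeholders:

# See which Kubernetes versions the cluster can upgrade to
az aks get-upgrades --name aks-demo-01 --resource-group rg-aks-demo --output table

# Upgrade the control plane and nodes to a newer supported version
az aks upgrade --name aks-demo-01 --resource-group rg-aks-demo --kubernetes-version 1.19.7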

If you’re looking for an intelligent, cloud-native Security Information and Event Management (SIEM) solution that manages all incidents in one place, Azure Sentinel may be a good fit for you. We’ll walk through what Azure Sentinel is and how your organization can use it to enhance your data capabilities.

What is Azure Sentinel?

Not only does Azure Sentinel provide intelligent security analytics and threat intelligence, but it’s also considered a Security Orchestration and Automation Response (SOAR) solution, meaning it collects data about security threats and lets you automate responses to lower-level security events without the traditionally manual effort required. You can extend this solution across data sources by integrating Azure Sentinel with enterprise tools, like ServiceNow. There are also services offered at no additional cost, such as User Behavior Analysis (UBA), petabyte-scale daily ingestion, and Office 365 data ingestion, that make Azure Sentinel even more valuable.

First Impression

After opening Azure Sentinel from the Azure portal, you will be presented with the below items:

Azure sentinel first view

Conceptually, Azure Sentinel has four core areas.

Azure Sentinel Four Core Areas

  • Collect – By using connections from multiple vendors or operating systems, Azure Sentinel collects security events and data and keeps them for 31 days by default. This is extendable up to 730 days.
  • Detect – Azure Sentinel has suggested queries; you can find samples or build your own. Another option is Azure Notebooks, which is more interactive and lets you apply your data science analysis.
  • Investigate – Triage using the same detection methodology in conjunction with event investigation. A case is then created for the incident.
  • Respond – Finally, responding can be manual or automated with the help of Azure Sentinel playbooks. You can also use graphs, dashboards, or workbooks for presentation.

For a better understanding, this example of the behind-the-scenes flow is helpful.

Steps in Azure Sentinel

How do I enable Azure Sentinel?

If you already have an Azure Log Analytics Workspace, you are one click away from Azure Sentinel. You need contributor RBAC permission on the subscription containing the Azure Log Analytics Workspace that Azure Sentinel will bind itself to.

Azure Sentinel has some prebuilt dashboards, and you are able to share them with your team members.

You can also enable the integration of security data from Security Center > Threat Detection > Enable integration with other Microsoft security services.

Now, let’s talk Azure Sentinel data sources. Azure Sentinel has a variety of built-in connectors that collect data and process it with an artificial-intelligence-empowered processing engine. Azure Sentinel can relate your events to well-known or unknown anomalies (with the help of machine learning)!

Below is a sample connection which offers two out-of-the-box dashboards:

sample connection in Azure Sentinel

All connections have a fair amount of instructions, which usually allows for a fast Azure Sentinel integration. A sample of an AWS connector can be found here.

Azure Sentinel has thirty out-of-the-box dashboards that make it easy to create an elegant dashboard; however, built-in dashboards only work once you have configured the related connection.

Built-In Ready to Use Dashboards:

  • AWS Network Activities
  • AWS User Activities
  • Azure Activity
  • Azure AD Audit logs
  • Azure AD Sign-in logs
  • Azure Firewall
  • Azure Information Protection
  • Azure Network Watcher
  • Check Point Software Technologies
  • Cisco
  • CyberArk Privileged Access Security
  • DNS
  • Exchange Online
  • F5 BIG-IP ASM
  • FortiGate
  • Identity & Access
  • Insecure Protocols
  • Juniper
  • Linux machines
  • Microsoft Web Application Firewall (WAF)
  • Office 365
  • Palo Alto Networks
  • Palo Alto Networks Threat
  • SharePoint & OneDrive
  • Symantec File Threats
  • Symantec Security
  • Symantec Threats
  • Symantec URL Threats
  • Threat Intelligence
  • VM insights

A Sample Dashboard:

One of the most useful IaaS monitoring services that Azure provides is VMInsights, or Azure Monitor for VMs, and Azure Sentinel has a prebuilt VMInsights dashboard. Connect your VM to your Azure Log Analytics Workspace, then enable VMInsights from VM > Monitoring > Insights. Make sure the Azure Log Analytics Workspace is the same one that has Azure Sentinel enabled on it.

Sample Dashboard VMInsights or Azure Monitor for VMs
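
If you want to go beyond the prebuilt dashboard, you can query the tables VMInsights populates directly. A minimal sketch, assuming the dependency agent is collecting the VMConnection table:

// Top ten remote IP addresses by established connections over the last hour.
VMConnection
| where TimeGenerated > ago(1h)
| summarize Connections = sum(LinksEstablished) by RemoteIp
| top 10 by Connections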

Creating alerts is important: alerts are the first step toward having a case, or ‘incident’. After a case is created from the alert, you can begin your investigation. To create an alert, you use the Kusto Query Language (KQL) that you have probably already used in Azure Log Analytics.

Azure Sentinel has a feature named entity mapping, which lets you relate the query to values like IP address and hostname. These values make the investigation much more meaningful: instead of going back and forth between multiple queries to correlate results, you can use entities to make your life easier. At the time of writing this article, Azure Sentinel has four entities: Account, Host, IP address, and Timestamp, which you can bind to your query. You can easily enable, disable, or manually run an alert from Configuration > Analytics. The naming might be a little confusing, since you also create your alerts from Analytics.
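
As an illustration, here is a minimal alert-query sketch that surfaces columns you could bind to the Account, IP address, and Timestamp entities; it assumes the Azure AD sign-in logs connector is enabled:

// Failed Azure AD sign-ins; map UserPrincipalName to the Account entity,
// IPAddress to the IP address entity, and TimeGenerated to the Timestamp entity.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName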

The Azure Sentinel investigation map of entities became public in September 2019, so you no longer need to fill out a form to request access.

Let’s Go Hunting

You can use Azure Sentinel’s built-in hunting queries. If you already know where to find the anomalies, you can go after them directly with KQL queries and create an alert. You can also use Azure Notebooks for AI- and ML-based hunting, and you can bring your own ML model to Azure Sentinel. Azure Sentinel notebooks are aimed at your tier 4 SOC analysts.

Azure Sentinel built-in hunting query
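
As a hedged example of a hunting query you might write yourself (assuming SigninLogs is populated), the sketch below looks for accounts signing in from an unusual number of distinct IP addresses:

// Accounts seen from more than three distinct IPs in 24 hours;
// worth a closer look for credential theft or impossible travel.
SigninLogs
| where TimeGenerated > ago(24h)
| summarize DistinctIPs = dcount(IPAddress), IPList = make_set(IPAddress) by UserPrincipalName
| where DistinctIPs > 3
| order by DistinctIPs desc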

Azure Sentinel uses MITRE ATT&CK-based queries and has introduced eight types of queries, also known as bookmarks, for hunting.

After you become skilled in detection, you can start creating playbooks built on Logic Apps workflows. You can also build automated responses to threats or craft custom actions after an incident has happened. Later, you can enable Azure Sentinel Fusion to associate lower-fidelity anomalous activities with high-fidelity cases.

Azure Sentinel Detection Playbook

A sample playbook:

Azure Sentinel Sample Playbook

Image Source: Microsoft

Azure Notebooks is a Jupyter notebook service (an interactive computational tool) that facilitates your investigation using your data-science skills. Azure Notebooks supports languages and packages from Python 2 and 3, and you can also use R and F#.

We all love community-backed solutions. You can share your findings and designs with others, and draw on their insights, through the Azure Sentinel Community on GitHub.

Azure Sentinel Fusion

Fusion helps reduce noise and prevent alert fatigue.

Traditionally, we assume that an attacker follows a static kill chain as the attack path, or that all information about an attack is present in the logs. Fusion helps here by applying a probabilistic kill chain and finding novel attacks. You can find more information on this topic here. Formerly, you had to run a PowerShell command to enable Fusion, but going forward, Fusion is enabled by default.

Azure Sentinel Data Sources

Azure Sentinel offers three types of data connectors. First, Microsoft services are connected natively and can be configured with a few clicks. Second, external solutions can connect via API. Finally, external solutions can connect via an agent. Connectors are not limited to the list below; for example, IoT devices and Azure DevOps can also communicate with Azure Sentinel.

  • Microsoft services (connected natively): Office 365, Azure AD audit logs and sign-ins, Azure Activity, Azure AD Identity Protection, Azure Security Center, Azure Information Protection, Azure Advanced Threat Protection, Cloud App Security, Windows security events, Windows firewall, DNS, and Microsoft web application firewall (WAF)
  • External solutions via API: Barracuda, Symantec, and Amazon Web Services
  • External solutions via an agent: F5, Check Point, Cisco ASA, Fortinet, Palo Alto, Common Event Format (CEF) appliances, other Syslog appliances, DLP solutions, threat intelligence providers, DNS machines, Linux servers, and other clouds

Where Does Azure Sentinel Sit in the Azure Security Picture?

Azure Sentinel in the Azure Security Big Picture

Azure Sentinel can be used before an attack (for example, Azure Active Directory sign-ins from new locations), during an attack (for example, malware on a machine), or post-attack, to investigate an incident and perform triage. Azure Sentinel has a service graph that can show you the events related to an incident.

If you hold a security title or are part of a SOC team and you prefer a cloud-native solution, Azure Sentinel is a good option.

Security Providers or Why Azure Sentinel?

Azure Sentinel uses the Microsoft Intelligent Security Graph, which is backed by the Microsoft Intelligent Security Association. This association consists of almost 60 companies working hand in hand to find vulnerabilities more efficiently.

Microsoft brings its findings from 3,500+ security professionals, 18B+ Bing page scans per month, 470B emails analyzed per month, 1B+ Azure accounts, 1.2B devices updated each month, 630B authentications per month, and 5B threats blocked per month.

Microsoft Intelligent Security Graph Overview

Image Source: Microsoft

Microsoft has more solutions that feed into its Microsoft Graph Security API: the Windows antimalware platform, Windows Defender ATP, Azure Active Directory, Azure Information Protection, DMARC reporting for Office 365, Microsoft Cloud App Security, and Microsoft Intune.

Microsoft Intelligent Security Association (MISA)

Microsoft creates vast threat intelligence solutions, and it has collaborated with other companies on a product called the Microsoft Intelligent Security Graph API. Microsoft calls this partnership the Microsoft Intelligent Security Association (MISA): an association of almost 60 companies that share their security insights from trillions of signals.

  • Microsoft products: Azure Active Directory, Azure Information Protection, Windows Defender ATP, Microsoft Intune, Microsoft Graph Security API, Microsoft Cloud App Security, DMARC reporting for Office 365, Windows antimalware platform, Microsoft Azure Sentinel
  • Identity and access management: Axonius, CyberArk, Duo, Entrust Datacard, Feitian, Omada, Ping Identity, Saviynt, Swimlane, Symantec, Trusona, Yubico, Zscaler
  • Information protection: Adobe, Better Mobile, Box, Citrix, Checkpoint, Digital Guardian, Entrust Datacard, EverTrust, Forcepoint, GlobalSign, Imperva, Informatica, Ionic Security, Lookout, Palo Alto Networks, Pradeo, Sectigo, Sophos, Symantec, Wandera, Zimperium, Zscaler
  • Threat protection: AttackIQ, Agari, Anomali, Asavie, Bay Dynamics, Better Mobile, Bitdefender, Citrix, Contrast Security, Corrata, Cymulate, DF Labs, dmarcian, Duo Security, FireEye, Illumio, Lookout, Minerva Labs, Morphisec, Palo Alto Networks, Red Canary, ThreatConnect, SafeBreach, SentinelOne, Swimlane, ValiMail, Wandera, Ziften
  • Security management: Aujas, Barracuda, Carbon Black, Checkpoint, Fortinet, F5, Imperva, Symantec, Verodin

MISA and Security Graph API

MISA is a combined security effort: it continuously monitors cyberthreats and fortifies itself. This enriched knowledge is accessible through the Microsoft Intelligent Security Graph API. Azure Sentinel Fusion is the engine that uses graph-powered machine learning algorithms to associate activities with patterns of anomalies.

Microsoft Intelligent Security Association (MISA) and Security Graph API

Below you can see the Azure Sentinel Big Picture:

Azure Sentinel Big Picture

I hope you found this blog helpful and feel confident answering the question “what is Azure Sentinel”! As you can see, it’s just the tip of the Microsoft Security ‘iceberg’.

Azure Sentinel Microsoft Security Iceberg

Accurately identifying and authenticating users is an essential requirement for any modern application. As modern applications continue to migrate beyond the physical boundaries of the data center and into the cloud, balancing the ability to leverage trusted identity stores with the need for enhanced flexibility to support this migration can be tricky. Additionally, evolving requirements like allowing multiple partners, authenticating across devices, or supporting new identity sources push application teams to embrace modern authentication protocols.

Microsoft states that federated identity is the ability to “Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.”

As organizations expand their user base to allow authentication of multiple users/partners/collaborators in their systems, the need for federated identity is imperative.

The Benefits of Federated Authentication

Federated authentication allows organizations to reliably outsource their authentication mechanism. It helps them focus on actually providing their service instead of spending time and effort on authentication infrastructure. An organization or service that provides authentication to its sub-systems is called an Identity Provider; it provides federated identity authentication to the service provider/relying party. By using a common identity provider, relying applications can easily access other applications and web sites using single sign-on (SSO).

SSO provides quick accessibility for users to multiple web sites without needing to manage individual passwords. Relying party applications communicate with a service provider, which then communicates with the identity provider to get user claims (claims authentication).

For example, an application registered in Azure Active Directory (AAD) relies on it as the identity provider. Users accessing an application registered in AAD will be prompted for their credentials and, upon authentication by AAD, the access tokens are sent to the application. The valid claims token authenticates the user, and the application can perform any further authorization. The application doesn’t need additional mechanisms for authentication, thanks to the federated authentication from AAD. The authentication process can be combined with multi-factor authentication as well.

Glossary

Abbreviation Description
STS Security Token Service
IdP Identity Provider
SP Service Provider
POC Proof of Concept
SAML Security Assertion Markup Language
RP Relying party (same as service provider) that calls the Identity Provider to get tokens
AAD Azure Active Directory
ADDS Active Directory Domain Services
ADFS Active Directory Federation Services
OWIN Open Web Interface for .NET
SSO Single sign on
MFA Multi factor authentication

OpenId Connect/OAuth 2.0 & SAML

SAML and OpenID Connect/OAuth are the two main protocols that modern applications implement and consume as a service to authenticate their users. Both provide a framework for implementing SSO/federated authentication. OpenID Connect is an open standard for authentication and combines with OAuth for authorization. SAML is also an open standard, and it provides both authentication and authorization. OpenID Connect tokens are JSON-based; OAuth 2.0 tokens can be either JSON or SAML2, whereas SAML is XML-based. OpenID Connect/OAuth are best suited for consumer applications like mobile apps, while SAML is preferred for enterprise-wide SSO implementations.

Microsoft Azure Cloud Identity Providers

The Microsoft Azure cloud provides numerous authentication methods for cloud-hosted and “hybrid” on-premises applications. This includes options for either OpenID Connect/OAuth or SAML authentication. Some of the identity solutions are Azure Active Directory (AAD), Azure AD B2C, Azure AD B2B, Azure pass-through authentication, Active Directory Federation Services (ADFS), migrating on-premises ADFS applications to Azure, and Azure AD Connect with federation and SAML as the IdP.

The following identity providers, among others, implement the SAML 2.0 standard: Azure Active Directory (AAD), Okta, OneLogin, PingOne, and Shibboleth.

A Deep Dive Implementation

This blog post walks through an example I recently worked on using federated authentication with the SAML protocol. I was able to dive deep into identity and authentication with an assigned proof of concept (POC): creating a claims-aware application within an ASP.NET Azure Web Application using federated authentication and the SAML protocol. I used OWIN middleware to connect to the Identity Provider.

The scope of POC was not to develop an Identity Provider/STS (Security Token Service) but to develop a Service Provider/Relying Party (RP) which sends a SAML request and receives SAML tokens/assertions. The SAML tokens are used by the calling application to authorize the user into the application.

Given the scope, I used a stub Identity Provider so that the authentication implementation could be plugged into a production application and communicate with other enterprise SAML Identity Providers.

The Approach

For an application to be claims-aware, it needs to obtain a claims token from an Identity Provider. The claims contained in the token are then used for additional authorization in the application. Claims tokens are issued by an Identity Provider after authenticating the user. The login page for the application (where the user signs in) can be a Service Provider (Relying Party) itself, or just an ASP.NET UI application that communicates with the Service Provider via a separate implementation.

Figure 1: Overall architecture – Identity Provider Implementation

The Implementation

An ASP.NET MVC application was implemented as the SAML Service Provider, with OWIN middleware initiating the connection to the SAML Identity Provider.

First, communication is initiated with a SAML request from the service provider. The identity provider validates the SAML request, verifies and authenticates the user, and sends back the SAML tokens/assertions. The claims returned to the service provider are then sent back to the client application. Finally, the client application can authorize the user after reviewing the claims returned from the SAML identity provider, based on roles or other more refined permissions.

Sustainsys is an open-source solution whose SAML2 libraries add SAML2P support to ASP.NET web sites and serve as the SAML2 Service Provider (SP). For the proof-of-concept effort, I used the Sustainsys Saml2 stub SAML identity provider to test the SAML service provider. Sustainsys also has sample service provider implementations built against the stub.

Implementation steps:

  • Start with an ASP.NET MVC application.
  • Add the NuGet packages for OWIN middleware and the Sustainsys SAML2 libraries to the project (Figure 2).
  • Modify Startup.cs (partial classes) to build the SAML request; set all authentication types such as cookies, default sign-in, and SAML2 (Listing 2).
  • In the CreateSaml2Options and CreateSPOptions methods, the SAML request is built with the private and public certificates, the federation SAML Identity Provider URL, and so on.
  • The service provider establishes the connection to the identity provider on startup and is ready to listen for client requests.
  • Cookie authentication is set, the default authentication type is “Application,” and the SAML authentication request is set up by forming the SAML request.
  • When the SAML request options are set, the Identity Provider is instantiated with its URL and options, with Federation set to true. The Service Provider is instantiated with the SAML request options for the SAML identity provider. When the user signs in, the OWIN middleware issues a challenge to the Identity Provider and gets the SAML response with claims/assertions back to the service provider.
  • The OWIN middleware issues the challenge to the SAML Identity Provider with a callback method (ExternalLoginCallback(…)). The identity provider returns to that callback method after authenticating the user (Listing 3).
  • AuthenticateAsync will have the claims returned from the Identity Provider, and the user is authenticated at this point. The application can use the claims to authorize the user within the application.
  • No additional web configuration is needed for SAML Identity Provider communication, but the application config values can be persisted in web.config (Listing 4).

Figure 2: OWIN Middleware NuGet Packages

Listing 1: Startup.cs (Partial)

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Claims_MVC_SAML_OWIN_SustainSys.Startup))]

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Listing 2: Startup.cs (Partial)

using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Owin;
using Sustainsys.Saml2;
using Sustainsys.Saml2.Configuration;
using Sustainsys.Saml2.Metadata;
using Sustainsys.Saml2.Owin;
using Sustainsys.Saml2.WebSso;
using System;
using System.Configuration;
using System.Globalization;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {            
            // Enable Application Sign In Cookie
            var cookieOptions = new CookieAuthenticationOptions
            {
                LoginPath = new PathString("/Account/Login"),
                AuthenticationType = "Application",
                AuthenticationMode = AuthenticationMode.Passive
            };

            app.UseCookieAuthentication(cookieOptions);

            app.SetDefaultSignInAsAuthenticationType(cookieOptions.AuthenticationType);

            app.UseSaml2Authentication(CreateSaml2Options());
        }

        private static Saml2AuthenticationOptions CreateSaml2Options()
        {
            string samlIdpUrl = ConfigurationManager.AppSettings["SAML_IDP_URL"];
            string x509FileNamePath = ConfigurationManager.AppSettings["x509_File_Path"];

            var spOptions = CreateSPOptions();
            var Saml2Options = new Saml2AuthenticationOptions(false)
            {
                SPOptions = spOptions
            };

            var idp = new IdentityProvider(new EntityId(samlIdpUrl + "Metadata"), spOptions)
            {
                AllowUnsolicitedAuthnResponse = true,
                Binding = Saml2BindingType.HttpRedirect,
                SingleSignOnServiceUrl = new Uri(samlIdpUrl)
            };

            idp.SigningKeys.AddConfiguredKey(
                new X509Certificate2(HostingEnvironment.MapPath(x509FileNamePath)));

            Saml2Options.IdentityProviders.Add(idp);
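            // Note: per the Sustainsys samples, the Federation constructor below
            // registers itself with the supplied Saml2 options as a side effect,
            // so the new instance does not need to be assigned to a variable.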
            new Federation(samlIdpUrl + "Federation", true, Saml2Options);

            return Saml2Options;
        }

        private static SPOptions CreateSPOptions()
        {
            string entityID = ConfigurationManager.AppSettings["Entity_ID"];
            string serviceProviderReturnUrl = ConfigurationManager.AppSettings["ServiceProvider_Return_URL"];
            string pfxFilePath = ConfigurationManager.AppSettings["Private_Key_File_Path"];
            string samlIdpOrgName = ConfigurationManager.AppSettings["SAML_IDP_Org_Name"];
            string samlIdpOrgDisplayName = ConfigurationManager.AppSettings["SAML_IDP_Org_Display_Name"];

            var swedish = CultureInfo.GetCultureInfo("sv-se");
            var organization = new Organization();
            organization.Names.Add(new LocalizedName(samlIdpOrgName, swedish));
            organization.DisplayNames.Add(new LocalizedName(samlIdpOrgDisplayName, swedish));
            organization.Urls.Add(new LocalizedUri(new Uri("http://www.Sustainsys.se"), swedish));

            var spOptions = new SPOptions
            {
                EntityId = new EntityId(entityID),
                ReturnUrl = new Uri(serviceProviderReturnUrl),
                Organization = organization
            };
        
            var attributeConsumingService = new AttributeConsumingService("Saml2")
            {
                IsDefault = true,
            };

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("urn:someName")
                {
                    FriendlyName = "Some Name",
                    IsRequired = true,
                    NameFormat = RequestedAttribute.AttributeNameFormatUri
                });

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("Minimal"));

            spOptions.AttributeConsumingServices.Add(attributeConsumingService);

            spOptions.ServiceCertificates.Add(new X509Certificate2(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase + pfxFilePath));

            return spOptions;
        }
    }
}

Listing 3: AccountController.cs

using Claims_MVC_SAML_OWIN_SustainSys.Models;
using Microsoft.Owin.Security;
using System.Security.Claims;
using System.Text;
using System.Web;
using System.Web.Mvc;

namespace Claims_MVC_SAML_OWIN_SustainSys.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        public AccountController()
        {
        }

        [AllowAnonymous]
        public ActionResult Login(string returnUrl)
        {
            ViewBag.ReturnUrl = returnUrl;
            return View();
        }

        //
        // POST: /Account/ExternalLogin
        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public ActionResult ExternalLogin(string provider, string returnUrl)
        {
            // Request a redirect to the external login provider
            return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl }));
        }

        // GET: /Account/ExternalLoginCallback
        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var loginInfo = AuthenticationManager.AuthenticateAsync("Application").Result;
            if (loginInfo == null)
            {
                return RedirectToAction("Login");
            }

            //Loop through to get claims for logged in user
            StringBuilder sb = new StringBuilder();
            foreach (Claim cl in loginInfo.Identity.Claims)
            {
                sb.AppendLine("Issuer: " + cl.Issuer);
                sb.AppendLine("Subject: " + cl.Subject.Name);
                sb.AppendLine("Type: " + cl.Type);
                sb.AppendLine("Value: " + cl.Value);
                sb.AppendLine();
            }
            ViewBag.CurrentUserClaims = sb.ToString();
            
            //ASP.NET ClaimsPrincipal is empty as Identity returned from AuthenticateAsync should be cast to IPrincipal
            //var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;
            //var claims = identity.Claims;
            //string nameClaimValue = User.Identity.Name;
            //IEnumerable<Claim> claimss = ClaimsPrincipal.Current.Claims;
          
            return View("Login", new ExternalLoginConfirmationViewModel { Email = loginInfo.Identity.Name });
        }

        // Used for XSRF protection when adding external logins
        private const string XsrfKey = "XsrfId";

        private IAuthenticationManager AuthenticationManager
        {
            get
            {
                return HttpContext.GetOwinContext().Authentication;
            }
        }
        internal class ChallengeResult : HttpUnauthorizedResult
        {
            public ChallengeResult(string provider, string redirectUri)
                : this(provider, redirectUri, null)
            {
            }

            public ChallengeResult(string provider, string redirectUri, string userId)
            {
                LoginProvider = provider;
                RedirectUri = redirectUri;
                UserId = userId;
            }

            public string LoginProvider { get; set; }
            public string RedirectUri { get; set; }
            public string UserId { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                var properties = new AuthenticationProperties { RedirectUri = RedirectUri };
                if (UserId != null)
                {
                    properties.Dictionary[XsrfKey] = UserId;
                }
                context.HttpContext.GetOwinContext().Authentication.Challenge(properties, LoginProvider);
            }
        }
    }
}

Listing 4: Web.Config

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  https://go.microsoft.com/fwlink/?LinkId=301880
  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SAML_IDP_URL" value="http://localhost:52071/" />
    <add key="x509_File_Path" value="~/App_Data/stubidp.sustainsys.com.cer"/>
    <add key="Private_Key_File_Path" value="/App_Data/Sustainsys.Saml2.Tests.pfx"/>
    <add key="Entity_ID" value="http://localhost:57234/Saml2"/>
    <add key="ServiceProvider_Return_URL" value="http://localhost:57234/Account/ExternalLoginCallback"/>
    <add key="SAML_IDP_Org_Name" value="Sustainsys"/>
    <add key="SAML_IDP_Org_Display_Name" value="Sustainsys AB"/>
  </appSettings>
</configuration>

Claims returned from the identity provider to the service provider:

Claims returned from the identity provider to service provider

AIS Gets Connection of DoD DISA Cloud Access Point at Impact Level 5

Getting the DoD to the Cloud

Our team was able to complete the near-impossible. We connected to the DoD DISA Cloud Access Point at Impact Level 5, meaning our customer can now connect and store any unclassified data they want on their Azure subscription.

About the Project

The project started in July 2017 to connect an Azure SharePoint deployment to the DoD NIPRnet at Impact Level 5. Throughout the process, the governance and rules of engagement were a moving target, presenting challenges at every turn.

Thanks to the tenacity and diligence of the team, we successfully achieved connection to the Cloud Access Point (CAP) on September 6th, 2018. This was a multi-region SharePoint IaaS always-on deployment with two connections, which involved completing all required documentation for the DISA Connection (SNAP) process.

We are now moving towards the first Azure SharePoint Impact Level 5 production workload in the DoD, so be sure to stay tuned for more updates.

A Repeatable Process for Government Cloud Adoption

Azure Government was the first hyperscale commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency, and this was the first public cloud connection on Azure in the DoD 4th Estate.

With fully scripted, repeatable cloud deployment, including Cloud Access Point connection requirements, we can now get Government Agencies to the cloud faster, and more securely than ever before.

We work with fully integrated SecDevOps processes and can leverage Microsoft’s Azure Security Team for assistance in identifying applicable security controls, including inherited, shared, and customer-required controls.

See how you can make the cloud work for you. Contact AIS today to start the conversation, or learn more about our enterprise cloud solutions.
