This blog will share ten thought-provoking moments from the Gartner IT Symposium/Xpo™ 2022 conference. The event agenda was rich with overlapping sessions; my observations are based on sessions I could attend, and I look forward to watching the recordings of sessions I could not view live.

One: The Metaverse Equation

There are many instances of the metaverse today. Microsoft alone has a consumer, commercial, and industrial metaverse. An example of the industrial metaverse is its collaboration with Coca-Cola Hellenic to build a “digital twin” fabric of sensors on the factory floor and then allow employees to experience the twin itself. AltspaceVR, a social VR platform now owned by Microsoft, offers a space for various communities while corporations explore use cases for employee training or onboarding. Examples of the metaverse in action, both emerging and already real, are everywhere: for Saudi Arabia, it is the Neom city; for Nike, it is Nikeland; and so on.

It does not matter whether you use AR/VR/MR, 3D simulation, 2D rendering, or any device you choose. The metaverse is about presence and experience. Your cartoony avatar in the metaverse is not about photorealism. It is about the ability to interact with others in a shared space from anywhere in the world. Most importantly, it is about the ability to control your experience.

In the session ‘Building a Digital Future: The Metaverse,’ the Gartner analyst presented the metaverse equation using three areas as pillars: transport, transform, and transact.

  • The metaverse is a shared space you need to “transport into” via a headset, spatial computing glasses, 3D glasses, a game rig, a mobile device, or a PC.
  • Once transported to the metaverse of your choice, your surroundings are transformed.
  • Finally, once you are in the metaverse, you can transact using crypto and Web3 (for example, you can buy NFT sneakers in Nikeland).

Ultimately, I think the success of the metaverse will depend on interoperability and establishing a common identity. It will be a while before we see such interoperability.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Building a Digital Future: The Metaverse
Speaker: Marty Resnik

AIS IN THE METAVERSE
AIS hosted a remote hiring event and tested a virtual reality-based format using AltspaceVR, the leading platform for live, virtual events. Learn more about our experience in our blog.

Two: Talent Force Multipliers

In the opening keynote, Gartner analyst Tina Nunno spoke about IT for sustainable growth and mentioned three force multipliers for revolutionary work. Below are my observations on each.

Take the friction out of work.
Whether dealing with cumbersome job applicant tracking systems or everyday enterprise applications, friction is like sand in your bike’s gears, making every hill feel steeper. Workers satisfied with their day-to-day applications are twice as likely to stay.

Invest aggressively in AI augmentation.
Think of AI as a way to augment our employees’ reach, range, and capabilities by helping them with everyday decision-making. Unity Health uses an AI-driven tool called Chart Watch, which looks at about 100 different variables in a patient’s chart, including lab results and vital signs, and determines whether the patient is at low, moderate, or high risk of needing ICU care. AI is not replacing emergency room doctors; it merely assists them with informed decision-making.

Experiment with the highly visible, highly hyped.
Invest in emerging, highly hyped technologies, and do so publicly. We all know that a higher failure rate is associated with emerging and highly hyped projects. But organizations that are seen as innovative are more attractive to potential employees and tend to be ahead of the pack during a downturn.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Talent Force Multipliers
Speaker: Tina Nunno

Three: Continuous Modernization Value Stream to Deliver App Migration

App migration needs to be a continuous process of modernization. In the session “Use the Continuous Modernization Value Stream to Deliver Application and Cloud Migration Priorities” at the Gartner IT Symposium/Xpo, analyst Howard Dodd shared this quote: “You must continuously modernize the most valuable and vulnerable parts of your portfolio.”

This session taught me that the main idea is to think in terms of Assess, Transform, and Evolve loops. Don’t try to do this all at once. Instead, take one opportunity, walk it to the end, learn from it, and apply the learnings to the next iteration of the loop. Such a feedback loop lets you start going faster and delivering a steady value stream. The most significant benefit of the incremental approach is that it allows you to change. If, for some reason, the container orchestration platform you placed at the center of your app migration strategy does not work out, you can change it.

Finding value in app modernization means aligning it with your long-term strategy. It is about the outcomes. For example, rather than talking about percentages of applications migrated, talk about outcomes. Consider this example: “As a healthcare company, we want to increase engagement with our members by helping them make cost-conscious decisions about their plans. We will do that by modernizing our web applications to offer the lowest prescription….” Notice that the app modernization is tied to an outcome with a defined measure of success. Migration is not complete once the app is deployed to the cloud; it needs to be continuously monitored and improved.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Use the Continuous Modernization Value Stream to Deliver Application and Cloud Migration Priorities
Speaker: Howard Dodd

Four: BeeKeeperAI Demo

BeeKeeperAI

Generalizing ML algorithms is complicated (think cost and time), mainly because using synthetic or de-identified training data can create a significant amount of overhead. Only 16 algorithms have achieved the “DEN” designation from the FDA.

BeeKeeperAI is attempting to solve this problem by giving algorithm developers access to real data without the data ever leaving an organization’s premises. Algorithm developers deploy their algorithm to a confidential computing-based secure enclave (created by the data owner). These secure enclaves eliminate the risk of data exfiltration and of interrogation of the algorithm IP by insiders and third parties.

For more information, check out the BeeKeeperAI website.
Slide: This is not a Gartner presentation, and no deck was provided. The picture above is from the BeeKeeperAI website.

Five: Future of AI

AI is becoming a companion technology. It will be pervasive across all jobs, not just as a tool but as a teammate. AI won’t replace your plumber but can check the plumber’s work.
AI will require less data and computation by combining purely data-driven AI with logic-driven AI. It would take a neural network about 100,000 games to learn tic-tac-toe. We can significantly reduce learning time by telling the algorithm to start in the middle.

In the session “The Future of AI,” analyst Whit Andrews mentioned, “AI Is Accessible to More People With Less Skills and More Knowledge.” I feel that the advent of open-source algorithms and composable AI patterns will make AI more accessible to more people with fewer skills but more business knowledge to drive business outcomes.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: The Future of AI
Speaker: Whit Andrews

Six: Data Ecosystems

Cohesive cloud-based data ecosystems are on the rise and are expected to be the dominant choice in future years. These ecosystems include CSP-native tools and a collection of third-party ISV tools as necessary. In the session, “Why CIOs Must Care About Data and Analytics Ecosystems – Adaptability, Speed and Lower Cost,” a Gartner analyst shared the common “data use cases in the data ecosystem: applications, data science, AI/ML, IoT, analytics, event processing, marketplaces, and edge computing.” Data ecosystems, especially those built entirely on CSP-native tools and services, can pose a lock-in challenge, leading to higher prices. That said, competition within the cloud data ecosystem is broad enough that costs are expected to go down. Additionally, the cost of a DIY data ecosystem is high because of the integration costs.

For example, consider Microsoft’s Intelligent Data Platform, which deeply integrates its databases, analytics, BI, and data governance products into a data ecosystem. At the recently concluded Ignite, Microsoft added a partner ecosystem for the Intelligent Data Platform.

Microsoft Intelligent Platform

Source: Docs.Microsoft.com
For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Why CIOs Must Care About Data and Analytics Ecosystems – Adaptability, Speed, and Lower Cost
Speaker: Donald Feinberg

Seven: Future of Cloud 2027

Cloud is transitioning from the technology core (provided by hyperscalers today) to capability enhancements that add value to core services. Cross-cloud data platforms like Snowflake, and containers as common infrastructure and operations layers, are examples of capability enhancement.
Most customer requirements are expected to be satisfied by CSP-native offerings rather than container-focused ones. I wonder whether CSP first-party services like AKS are considered CSP-native or CNCF-native.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Future of Cloud 2027: From Technology to Business Innovation
Speaker: David Smith

Eight: The New Economics of Technology

We must constantly evaluate our assumptions in the face of technological disruption. In the session “The New Economics of Technology,” analyst Daryl Plummer stated, “end-user organizations must manage the risk associated with failure to anticipate new economics brought about by technology disruption.” This means that technology disruptions lead to a change in technology economics. So, we must find new value stories to create new growth opportunities for our companies and customer organizations.

Below are the four primary phase shifts in technology shared by analyst Plummer and my understanding of each:

  • Control to democratization. To prepare for this shift, IT leaders must invest in a governance model allowing citizen developers to participate in content creation.
  • Heuristic to Intelligence. AI is being embedded into applications allowing the applications to analyze and reason at a higher clip. To prepare for this shift, IT leaders must invest in a robust ML Ops governance model.
  • Data Center Isolation to Cloud Concentration. This shift is already well underway. According to Gartner, “nearly 60% of IT spending on application software will be directed toward cloud technologies by 2024”.
  • Centralized authority is moving towards a decentralized model. Web3 and Smart Contracts are driving decentralized governance.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: The New Economics of Technology
Speaker: Daryl Plummer

Nine: Democratized Digital Delivery with Fusion Teams

Digital democratization is defined as making the creation and management of information technology accessible to everyone. In the session, “Democratized Digital Delivery: Fusion Teams and Product Management Explained,” analyst Jaime Capellá shared statistics suggesting that as CEOs push toward digitization, they increasingly look for technology work to be done directly within the business function and less in IT. An emerging trend, “fusion teams,” supports this objective. Fusion teams are “multidisciplinary teams that blend technology or analytics and business domain expertise and share accountability for business and technology outcomes.”

The analyst referred to “Fusion Teams” as a new IT and Business interface.

For Fusion Teams to be successful, IT leaders need to build a consistent foundation of platform products/services such as cloud infrastructure, security, low code solutions, and data.
Additionally, IT leaders must embed cross-cutting experts across the fusion teams, including architecture, security, and compliance.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Democratized Digital Delivery: Fusion Teams and Product Management Explained
Speaker: Jaime Capellá

Ten: Major Trends in Robotic Process Automation

In the session, “Magic Quadrant™ for Robotic Process Automation,” analyst Saikat Ray shared the following major trends in RPA:

  1. APIs complement screen scraping capabilities.
  2. The RPA vendor landscape is constantly evolving, leading to a fluid market.
  3. Customers are going from RPA to hyperautomation. In my opinion, this is moving beyond task-based RPA to a platform for automation that includes resilience, orchestration, and the infusion of AI. Another critical aspect of hyperautomation is access to low-code and no-code technologies.
  4. Finally, vendors are coming out with innovative RPA pricing models, including consumption-based pricing.

For more information, refer to the Gartner® IT Symposium/Xpo™ 2022 session:
Session Name: Magic Quadrant™ for Robotic Process Automation
Speaker: Saikat Ray

Conclusion

Did you attend the Gartner IT Symposium/Xpo™ conference? What were your takeaways? Reach out to Vishwas Lele on LinkedIn to share your thoughts.

GARTNER and IT SYMPOSIUM/XPO are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Every day, too many patients worldwide fall victim to medical problems that could be solved for a few hundred dollars; a few hundred dollars that they do not have. Donors can help by donating funds to hospitals and support agencies, but they can only hope their money is going straight to where it is needed. Often, life-changing surgeries costing just a few hundred to a thousand dollars get lost among the demand for funds to support a wide range of healthcare needs. Watsi tackles this problem head-on.

With a mission to make healthcare a reality for everyone, Watsi is changing lives, and you can, too. Here are six reasons why we support Watsi, and why you should join us:

#1: Watsi hosts crowdfunding of medical treatment for people in need around the world.

Watsi has made history as the first nonprofit to receive financial backing from Silicon Valley accelerator Y Combinator. Rather than relying on donations from indirect sources, patients know that money is sent directly to them. Any number of donors can fund one treatment, so even a few extra dollars donated translate to a direct impact on patients.

#2: 100% of the donations directly fund medical treatments.

Watsi pays for its operational costs from separate funding, not from donations. They even cover the credit card processing fees. They raise operational funds via accelerators like Y Combinator or angel investors. Think of Watsi as a two-sided market leveraging the internet’s ability to connect donors with needs, a market previously dominated by narrow channels. You can choose to “tip” Watsi for this support, but there is full transparency on where your donations go, with the option to donate 100% to patient care.

Please note, I am not suggesting that using portions of donor contributions for operational costs is bad. Most nonprofits incur considerable operational costs. Yet, the ability to direct 100% of funds to patients makes Watsi enticing for many donors like me.

#3: For donors, Watsi enables a “transactional” approach to philanthropy.

Even a few dollars of a contribution goes toward a directed outcome. This can appeal to folks who prefer transactional outcomes over supporting broader initiatives such as the Against Malaria Foundation or the Schistosomiasis Control Initiative. Donors can choose to contribute monthly and will get updates on where their donations are making an impact.

Watsi Monthly Donation

Additionally, because of Watsi’s directed approach, donors can choose to put their donations towards specific causes and treatments they are passionate about, such as heart surgery.

#4: Donors can connect with the people they help.

Because Watsi is run like a startup, as a donor, you have full access to each patient’s timeline and to a transparency report of the payments made to the medical providers that Watsi works with. The process of helping someone in need is the most direct it gets, and Watsi makes sure of it. You get an email regarding your initial donation, as well as notifications once patients receive treatment, with a link to the full story, like Somaly’s story below.

Watsi Donation and Update

#5: This is not just an idea; it’s a success.

Since its founding in 2011, Watsi has improved the lives of over 25,000 patients around the world. Here is another story of a little boy in Kenya who suffered from a serious infection.

#6: Watsi provides gift cards, so you can give the gift of giving.

You can send a Watsi gift card via email or mail, the perfect gift for anyone. The recipient can then donate to any patient or patients they choose and will get updates after those patients receive care.

Your Donations At Work

AIS has a special culture. As part of a special match campaign by aisGives, our community group that gathers and supports employees in charitable donations, matching, fundraising, volunteering, and awareness, we influenced over $20,000 in donations this holiday season. This paid for roughly 50 life-altering surgeries around the world, and we’re excited to continue supporting Watsi’s mission.

About the Podcast

I had the pleasure to once again be a guest on the .NET Rocks! podcast last month. This year marked my 11th time on the show!

Carl, Richard, and I talked about how the cloud has changed data analytics. We discussed the latest data work at AIS and bringing a developer’s view to the approach. The cloud has changed how disparate data sources are brought together for analytics. With the cloud’s compute-on-demand, you don’t need to do many transformations of data as it’s loaded, but you can test it! This conversation dives into how CI/CD techniques can be applied to data to make for accurate data analytics and intelligent ingestion pipelines. Check out the full episode to learn more.

Listen to the full podcast here

Related Content

Accelerate your journey to become a data-driven organization. Many companies struggle to extract value from their data at the pace that today’s world moves and data grows. Our Data Platform Modernization Proof of Value engagement provides you a path to enable safe and secure access to data sets, empowering business users to unlock the next big opportunities for the company. Learn more and reach out to AIS today.

The first AWS service (S3) was launched in 2006. My humble journey in the cloud started in 2007, leading up to my first cloud talk in New York in 2008. In this blog post, I want to talk about what I, and we collectively, have learned from the last decade in the cloud and how to use this knowledge to drive cloud value. Let us get started.

By now, the transformational benefits of the cloud are well established. Look no further than the spending data on the top three cloud providers, which was nearly $150 billion in 2021. However, current cloud spending is only a fraction of the global IT spend of $2-3 trillion. This suggests that the main chapters of the cloud story are yet to be written. As is the case with the progression of any successful technology, the cloud is expected to become so pervasive that it will become invisible – just as traditional IaaS is becoming largely invisible with the advent of Kubernetes and serverless models.

Future discussions about the cloud will be less about cloud capabilities and more about the outcomes empowered by cloud services. At some point, we can expect to stop talking about enterprise landing zones, VNETs/VPCs, and PaaS v-next (i.e., containers and serverless). I also hope that we will stop measuring our progress in the cloud using “proxy” metrics like the number of VMs migrated, the number of cloud deployments, the number of workloads moved to the cloud, cloud policy violations, and more. Later in this blog, we will talk about the cloud transformation metrics that matter most at the end of the day.

But first, let us get back to the theme of this blog post – driving cloud value. This depends on a few things:

  1. Mature cloud foundation
  2. Cloud value stream

Cloud Foundation and Cloud Value Stream Venn Diagram

Mature Cloud Foundation

With cloud providers releasing close to three thousand features each year, cloud entropy is absolute. Cloud consumers, including big enterprises, don’t have the resources to take advantage of all the new features announced daily. As a result, cloud consumers need to focus on improving their foundational capabilities. Let us look at a few (non-exhaustive) foundational capabilities.

  1. Consistency – Pick a few (3-5) architectural blueprints that will support 80% of the application types. Ensure that users can provision these architectural blueprints consistently, quickly, and in a self-service manner using a service catalog, like Zeuscale. The service catalog, in turn, is powered by a collection of robust automation scripts that address cross-cutting objectives. Here are a few attributes to consider:
    • Resilience – spreading the subsystems across availability zones
    • Observability – automatically enabling monitoring machinery
    • Security – a zero-trust approach that assumes breach
    It is crucial that you treat the above-mentioned automation assets as first-class code artifacts and invest in their continual improvement.

  2. Cost Optimization – Too many organizations are witnessing unplanned increases in their cloud bills. Worse, it has been suggested that 30% of all cloud consumption can be attributed to waste. Cloud cost maturity comes with cloud maturity. Ensure that you have enough governance in place, including resource tags, provisioning policies, reservations for long-term resource use, monitoring & alerts, and detailed usage reporting at a level of granularity that matches your organizational hierarchy. Doing so will allow you to catch unplanned cloud expenditure and improve your ability to predict costs over time.

  3. Security – Resources provisioned in the cloud are dynamic, and so is security monitoring. Security monitoring does *not* end with provisioning a compliant architectural blueprint. In fact, it starts with the provisioning step. You will need automation jobs that continuously monitor your applications’ security configurations and security policies to prevent “drift” from a compliant state.

  4. Data Gravity – In 2013, Dave McCrory proposed the software concept of “data gravity,” where applications, services, and business logic in software applications will move physically closer to where data is stored. As you can imagine, data gravity is a critical consideration for companies with a multi-cloud setup*; it is hard to spread applications across clouds because of data gravity. One way to dent the data gravity challenge is to have a data-sharing strategy in place. Data sharing can be based on copying or in-place access of datasets and can span single or multiple clouds. (*Almost 80% of Fortune 500 companies find themselves in a multi-cloud setup. These companies are provisioning similar technology stacks across more than one cloud provider.)

  5. Center of Excellence – We talked about settling on a small set of architecture blueprints. You will need to invest in a forward-looking CoE group that continues to track the advances in the cloud and ensures that your organization is not caught flat-footed in the face of a disruptive new cloud capability or an architecturally significant improvement to an existing service. Without a CoE team focused on tracking and evaluating new capabilities in the cloud, you are likely to accrue cloud debt rapidly.

  6. Inclusiveness – Cloud is not just for professional developers and infrastructure engineers. An inclusive cloud strategy needs to support the needs of a growing community of citizen developers as well. Constructs like the self-service provisioning and architectural blueprints we discussed earlier need to be accessible to citizen developers. For example, it should be seamless for citizen developers to mix and match low/no-code constructs with advanced cloud platform constructs.

  7. Data Analytics – As you plan to migrate/reimagine your applications in the cloud, recognize the immense potential of the data being collected. By planning for data ingestion and data transformation upfront, you can help bridge the divide between operational and analytics data. Architectures like the data mesh, which treat data (operational and analytics) as the product, are headed in this direction.

  8. Cloud Operating Model – Your traditional infrastructure, networking, and security teams must embrace a cloud operating model. They must rely on modern development practices: iterative development, DevSecOps, and maintaining infrastructure, networking, and security as code. You cannot succeed in the cloud with a traditional IT operating mindset.

  9. Continuous Learning – Your organization may have become fluent in the cloud basics, but you will need continuous learning and upskilling programs to reach the next level. Only an organization that embeds a culture of learning can truly achieve its cloud transformation goals.

  10. Sandbox – Along with upskilling programs, cloud teams need the freedom to experiment and fail. This is where a cloud sandbox unencumbered by enterprise cloud security policies is essential for innovation. It should be possible for teams to experiment with any new, fast arriving preview capabilities within hours (not weeks or months).

Focus on Cloud Value Stream

Working on the cloud foundation alone will not be enough to leverage all the benefits cloud has to offer. You will need to consider the entire cloud value stream – a set of actions from start to finish that bring the value of the cloud to an organization. Cloud value streams allow businesses to specify the value proposition that they want to realize from the cloud.

Align cloud strategy with business objectives

The key idea is to start with a business strategy that can help realize the value proposition, then map that strategy into a list of cloud services. The list of cloud services, in turn, determines your cloud adoption plan. Let us break this down.

One of my favorite tools to develop a business strategy is a Wardley map. Wikipedia describes a Wardley map as a set of components positioned within a value chain (vertical axis) and anchored by the user need, with movement defined by an evolution axis (horizontal axis). Don’t worry if you are feeling a bit lost with the definition; a simple example can help. Let us assume that the business leaders of a fictitious financial services company want to set up an insurance exchange in the future.

Starting from the perspective of the user of the insurance exchange, you can create a Wardley map, as shown in the diagram below.

Cloud Transformation Wardley Map

Mapped along the value chain (vertical axis) are the value-line capabilities of the insurance exchange. These capabilities are pegged on an evolution axis (horizontal axis) that represents the evolution of the components from genesis (high value) to utility (commodity).

A map like this allows you to organize your cloud investments. For example, the Matching Algorithm that pairs incoming purchase requests with the insurance providers may need to be a custom-built capability. A custom-built capability requires additional investment, but it also offers a differentiator and potentially higher profit. In the future, the previously mentioned matching capability may become available as a pre-built ML product or rental capability through evolution. So, there is indeed a risk of commoditization. But the question is: how soon can that happen? Wardley maps excel at bringing discussions across various teams together into a single map.

The End-to-End Flow of Business Value

Earlier in this post, we talked about “proxy” metrics such as the number of VMs or workloads migrated to the cloud. While these metrics are helpful as IT and agile metrics, they fail to communicate the overall progress of a cloud transformation effort from the perspective of business outcomes. This is where Flow Framework®, introduced by Dr. Mik Kersten, comes in.

The core premise of the Flow Framework is the need to measure the end-to-end flow of business value and the results it produces. Flow Metrics measure the flow of business value through all the activities involved in a software value stream. For example, consider the following chart depicting the Flow Efficiency metric. Flow Efficiency is the ratio of active time to total Flow Time; for instance, if an item spends 10 days in active work out of a 50-day Flow Time, its Flow Efficiency is 20%.

A few things to note in the diagram below:

  • We are measuring end-to-end for the migration time of the application.
  • Additionally, we are considering the entire capability area and not an individual app.
  • The process of containerizing the app seems to be quick, but we are spending a significant time on security scanning and Authority to Operate (ATO) certification – not surprising for a highly regulated environment with very stringent security expectations.

Perhaps we need to make an upfront down payment on “technical debt” associated with security scanning and certification. Doing so would improve the flow efficiency of cloud migration.

Flow Chart for Driving Cloud Value Streams

In summary, to drive cloud value, you need a robust cloud foundation, as well as a keen eye towards the overall cloud value stream. Focusing on a few well-defined architectural blueprints will accord you the opportunity to mature in the areas of cloud costs, automation, and readiness. Focusing on the overall cloud value stream will ensure that your cloud investments are aligned with your business strategic goals.

Contact AIS to talk cloud strategy and business objectives.

What is Cognitive Surplus?

In his famous TED talk about Cognitive Surplus, Clay Shirky shares an incident in Kenya from December of 2007. There was a disputed presidential election that resulted in an outbreak of ethnic violence. A lawyer in Nairobi, Ory Okolloh, began blogging about it on her site, Kenyan Pundit. Shortly after the election and the outbreak of violence, the government suddenly imposed a significant media blackout. So, Okolloh solicited more information about what was going on from her commenters. The comments began pouring in, and Okolloh would collate them, but there was more information than any one person could manage. So, she asked for help to automate this task.

Two programmers who read her blog held their hands up and said, “We could do that,” and in 72 hours, they launched Ushahidi. Ushahidi (which means “witness” or “testimony” in Swahili) is a straightforward way of taking reports from the field, whether it’s from the web or, critically, via mobile phones and SMS, aggregating it and putting it on a map.

Enough people looked at it and found value in it that the programmers who created Ushahidi decided to make it open source and turn it into a platform. It has since been deployed in Mexico to track electoral fraud and in Washington, D.C., to track snow cleanup.

Shirky credits the idea of Cognitive Surplus behind the creation of Ushahidi. According to Shirky, Cognitive Surplus has two parts:

  1. Time: The world’s population’s ability to volunteer and contribute and collaborate on large, sometimes global, projects. According to Shirky, the world has over a trillion hours a year to commit to shared projects.
  2. Tools: Primarily for collaboration. Internet, mobile phones, social media, and more.

Shirky acknowledges that not all cognitive surplus products are as helpful as Ushahidi. We also get LOL Cats (cute pictures of cats made more adorable by adding cute captions). While LOL Cats may not seem beneficial (unless you are looking for cute cat pictures), it is essential to note that it is still a creative and communal act. Freedom to contribute means freedom to contribute anything. The important takeaway is that the contributor of a LOL Cat picture has crossed a vital participation threshold.


Cognitive Surplus in an Enterprise

Can Shirky’s notion of Cognitive Surplus be applied to an enterprise? A typical enterprise has employees who are between projects, who are not being utilized to the fullest on their current assignments for whatever reason, or who are motivated by creating value not just for themselves (e.g., self-learning) but for everyone in the enterprise. Cognitive Surplus channels this resource of precious employee time into building something of value to the employees and the enterprise.

Beyond time, enterprises also need a framework that engenders participation. Such a framework would include:

  1. A starting point for the framework is collaboration tools, such as GitHub, Teams, and Slack.
  2. A collection of tasks that employees can contribute towards. These tasks don’t have to be as well-defined as a product backlog. But an absence of defined tasks to choose from can significantly hamper employees from getting started. Ideas for tasks can come from ongoing projects (a nagging problem or an optimization that the product team has no time to spend on) or from research on upcoming features in an area of interest.
  3. Each task needs to be broken up into “byte-sized” chunks. A rule of thumb is 40 hours. This is typically an employee’s time between projects before they are pulled into their next assignment. It is also important to encapsulate the task to hide or decouple it from underlying dependencies. Doing so allows employees to contribute without spending hours or days setting up the environment before they start contributing.
  4. The ability to get feedback early and often is crucial to making the employees’ contribution productive. Therefore, it is essential to scale the feedback loop. The way to scale the feedback loop is to crowdsource it, i.e., get a representative from the team that suggested the task, or a member of a community of interest (COI), to pair with the employee working on the task. Even a 15-minute daily sync-up can go a long way.
  5. Celebrate the successful completion of a task. Recognize people’s contributions via a blog, internal communication, social media shares, and more. No matter how small the contribution, it should be recognized.

Cognitive Surplus at AIS

At AIS, we have worked to put the Cognitive Surplus to good use. Here are a few recent examples.

    1. Cameo Contributors for Value Creation projects. New hire and not yet on a project? In between projects? Our team is working to support more internal efforts by matchmaking individuals who have availability with the needs of our internal Cloud Acceleration Hub (HUB) team. The HUB is a dedicated team of AIS consultants organized to help AIS project teams deliver successful cloud services modernization projects. The HUB team consolidates knowledge and experience to provide rapid research and guidance for cloud migration and modernization projects. Cameo contributions can range from project-specific problem solving or solution reviews to contributing to IP development or internal skilling efforts. The HUB team has manageable chunks of work for which they engage individuals, and they continue to mature this capability with ramp-up guides, task partners, and more.
    2. Creation of the open-source ddl2dbt tool. The team wanted to automate the creation of DBT YML files based on ERwin models but had no cycles to build this tool. Cognitive Surplus made it possible to develop this tool – AppliedIS/ddl2dbt: CLI to generate DBT and dbtvault models from DDL and Source To Target Mappings (github.com).

How Do You Exercise Cognitive Surplus? Tangible and Intangible Benefits

Dean Kamen, the inventor and entrepreneur, said, “Free cultures get what they celebrate.” In a similar vein, enterprises get what they celebrate. Enterprises need to create a culture that celebrates participation, no matter how small the contribution is or how directly impactful it is. The value created for the enterprise is not just a by-product of the participation; it is what they collectively make of that participation.

Enterprises have hundreds of hours of participatory value up for grabs, year-in and year-out. Enterprises designed around a culture of participation and a framework for common value creation can achieve incredible results.

We have been able to integrate more individuals across the organization while providing value to the broader company. Those who have participated in these exercises have given excellent feedback, noting how the experience left them with a positive impression of AIS and allowed them to contribute value to the company between delivery and billability. There are intangible benefits to this as well, including valuable impacts on culture, employee passion, and motivation. How is your organization using cognitive surplus?

First-time Feedback from Peers and Mentors

An AIS employee was invited to share his experience when participating in project work.


“I had the opportunity to work with an internal AIS HUB project earlier this year and came away with a new perspective when it came to my critical thinking. I was asked to write specific articles dealing with PowerShell, Pester, and PSScriptAnalyzer as well as a working example code to compliment the article. This was the first time in my career I had a group of Engineers and Developers providing feedback and guidance as I was producing a WiKi and code. The guidance and feedback were outstanding!

By the end of my time with the HUB team not only was my WiKi writing substantially better, but the feedback I received from the HUB team made my thought process much clearer and more refined. As a DevOps Engineer, being able to work with a client in a clear and concise manner is critical to successfully providing implementation guidelines and also results. The HUB Team took me under their wing and taught me how to be a better DevOps Engineer. My current project requires a lot of critical thinking, WiKi’s, and code blocks. I work with developers who need example code and instructions on how to get started. If I had not had the time I had with the HUB team, I would not be able to provide better documentation and code for their WiKi’s.”

– David Dracoules, AIS Cloud Infrastructure Consultant


JOIN OUR GROWING TEAM
AIS provides employees with opportunities to learn and grow in their careers. Won't you join us?

Adjusting to remote work has allowed me to uncover some extra time to record a Power Platform Administration Foundation Pluralsight course to complement the “dev side” course released in 2018, Low-Code Development with Power Apps.

This course is designed for developers, both citizen and professional, interested in a low-code approach for building mobile applications. In this course, I used aspects of Azure PowerShell/CLI in demonstrating Power Platform management, as I see a commonality between PowerApps and Azure administration.

You can find the Pluralsight course here.

My daughter drew the following sketch to capture the principal motivation for why I created this course: to help Power Platform admins attain equilibrium. Today, the balance seems to be tilting towards citizen developers, making IT departments uncomfortable with moving forward with Microsoft Power Platform adoption.

Illustration of Power Platform Course

The  Power Platform Adoption Framework created by Andrew Welch and the team speaks to the “adoption at scale” theme and an approach to enterprise management and governance that resonates with customers and puts IT and security departments at ease.

This course is about teaching admins the basic skills needed to effectively administer the Power Platform. My hope is that by making the Power Platform admin tools somewhat match those of Azure, acceptance of the platform may come a bit easier.

The goal of the current approach of mass quarantining is to “flatten the curve.” However, how the virus has spread, and how it might return with the eventual restoration of our economy, is still blurry. Recent work by Abbott Labs (among others) on shortening testing times and mass-producing testing kits at affordable prices looks promising.

However, despite the advancements by Abbott Labs, it is not feasible to test everyone in America. As of today, April 5th, we have tested, on average, one in every two hundred Americans, a rate comparable to South Korea’s. Even so, the ramp-up in testing has not moved us closer to reopening our economy.

Some have proposed the idea of checking for antibodies. This test would suggest immunity to the virus because of a prior infection. The logic behind this is that people that have these antibodies can safely return to work and take on the critical tasks needed to restart the economy. Nonetheless, recent news from across Asia warns us that patients previously diagnosed with COVID-19 are being readmitted to hospitals after testing positive for the virus again.

So, as it stands, our current approach of mass quarantining, which media outlets have predicted could last up to twelve months, is not only slow but is also pushing us down a path of economic damage that may be difficult to recover from. Scaling up and developing new methods of testing that check for antibodies, while advantageous, will not by itself be enough to reopen our economy.

An Aggressive Data-Driven Approach is Needed

I suggest an aggressive data-driven approach to understanding how COVID-19 is spreading. Based on these insights, we can demarcate safe zones where initial economic activity can be reinstituted with minimal risk. There are two aspects to this approach:

  1. We can develop more in-depth insights into how the virus is spreading. We must acknowledge that mass quarantining alone is not the best approach.
  2. Based on the insights we develop, we may be able to open parts of our country once again, with a measure of confidence based on the data.

Ultimately, the solution boils down to a data and computation problem. Imagine if we took the phone numbers of everyone infected with COVID-19 (of course, using an anonymized identifier rather than the actual phone numbers to protect the privacy of the people involved). Then, using cell tower data to gather the movement of those individuals based on their smartphones, we will perform location- and time-based searches to determine who might have come in contact with infected persons in the last forty-five days. Then, we will algorithmically place the search results dataset into bands based on the degree of overlap (time and location). This algorithm will be able to eliminate instances of location proximity where there is minimal chance of spread, for example, at a traffic intersection. Conversely, this algorithm will accord a higher weight to location proximity where there is a bigger chance of the virus spreading, for example, at a coffee shop or workplace. All these factors will lead to the calculation of a risk factor. Individuals that meet the high-risk criteria will be notified. Any individual who receives the notification will be instructed to self-quarantine immediately. We can go further and penalize them if they don’t follow the suggestion, using the cell phone data. These individuals should be sent a self-test kit on a priority basis.

If these individuals test positive, their immediate family would then receive instant notification to also self-quarantine. The location at which this individual came into contact with the COVID-19 infected patient that initiated this search will be notified as well. If they test negative, we will still learn a vital data point about how the virus is spreading. These learnings, including the test results, will be fed into a continuously retraining machine learning algorithm. This algorithm will keep track of the trajectory of an infected person and common intersection locations. Additionally, this algorithm will be able to account for an infected person being quarantined, thus neutralizing a virus carrier from the mix. In summary, this algorithm is akin to performing deep automated contact tracing at a level that cannot be matched by armies of volunteers.

Another important byproduct of the trained algorithm is the automatic extraction of “features.” In machine learning, a feature is an individual measurable property or characteristic of a phenomenon being observed [1]. For example, the algorithm will observe that many people are becoming infected without coming in direct contact with an already infected person. Based on observing millions of such data points, it can, on its own, identify discriminating features such as an infected mail carrier’s route or common meeting areas with certain surfaces, like metals, where coronavirus can remain active for days.

Using a continuously retraining algorithm, we can start to open parts of the country where the threat of spread is low. Any discovery of a COVID-19 case in a low-risk area will trigger the actions mentioned above and will flow back as input to training. It should be evident that the dataset and algorithm described above are computationally challenging. We are talking about recursive data searches through a dataset comprising millions of citizens and a continuously learning algorithm with potentially billions of parameters.

But Hasn’t This Approach Already Been Used in Other Countries like Taiwan and Singapore?

There is no question that location tracking capabilities have been highly effective in controlling the spread of coronavirus. In Taiwan and Singapore, location tracking technologies were used very early in the outbreak and mainly for surveillance. In Korea, officials routinely send text messages to people’s phones alerting them to newly confirmed infections in their neighborhood, in some cases alongside details of where the unnamed person had traveled before entering quarantine. Based on my research, these countries did not rely on big data and deep learning techniques to derive insights from the data. In the case of Taiwan and Singapore, the dataset of infected persons is not large enough for such an analysis.

Summary

The U.S. Government has broad authority to request personal data in the case of a national emergency like the coronavirus. In the United States, phone companies such as AT&T and Verizon have extensive records of their customers’ movements. However, it does not appear that we are leveraging this large body of movement data to combat coronavirus. According to a recent Washington Post story, “AT&T said it has not had talks with any government agencies about sharing this data for purposes of combating coronavirus. Verizon did not respond to requests for comment.”

The goal of this post is to engender a collaborative discussion with experts in big data, ML and medicine. Hopefully, there are efforts already underway based on a similar or better idea. Please send your comments via twitter @vlele.

Note: This blog post is *not* about Kubernetes infrastructure API (an API to provision a Kubernetes cluster). Instead, this post focuses on the idea of Kubernetes as a common infrastructure layer across private and public clouds.

Kubernetes is, of course, well known as the leading open-source system for automating the deployment and management of containerized applications. However, its uniform availability is, for the first time, giving customers a “common” infrastructure API across public and private cloud providers. Customers can take their containerized applications and Kubernetes configuration files and, for the most part, move to another cloud platform. All of this without sacrificing the use of cloud provider-specific capabilities, such as storage and networking, that differ across cloud platforms.

At this point, you are probably thinking about tools like Terraform and Pulumi that have focused on abstracting the underlying cloud APIs. These tools have indeed enabled a provisioning language that spans cloud providers. However, as we will see below, the Kubernetes “common” construct goes a step further: rather than being limited to a statically defined set of APIs, Kubernetes extensibility allows the API to be extended dynamically through the use of plugins, described below.

Kubernetes Extensibility via Plugins

Kubernetes plugins are software components that extend and deeply integrate Kubernetes with new kinds of infrastructure resources. Plugins realize interfaces like CSI (Container Storage Interface). CSI defines an interface along with the minimum operational and packaging recommendations for a storage provider (SP) to implement a compatible plugin.

Other examples of interfaces include:

  • Container Network Interface (CNI) – Specifications and libraries for writing plug-ins to configure network connectivity for containers.
  • Container Runtime Interface (CRI) – Specifications and libraries for container runtimes to integrate with kubelet, an agent that runs on each Kubernetes node and is responsible for spawning the containers and maintaining their health.

Interfaces and compliant plugins have opened the floodgates to third-party plugins for Kubernetes, giving customers a whole range of options. Let us review a few examples of “common” infrastructure constructs.

Here is a high-level view of how a plugin works in the context of Kubernetes. Instead of modifying the Kubernetes code for each type of hardware or each cloud provider-offered service, plugins encapsulate the know-how to interact with the underlying hardware resources. A plugin can be deployed to a Kubernetes node as shown in the diagram below. It is the kubelet’s responsibility to advertise the capabilities offered by the plugin(s) to the Kubernetes API server.

Kubernetes Control Panel

“Common” Networking Construct

Consider a networking resource of type load balancer. As you would expect, provisioning a load balancer in Azure versus AWS is different.

Here is a CLI for provisioning ILB in Azure:

CLI for provisioning ILB in Azure

Likewise, here is a CLI for provisioning ILB in AWS:

CLI for provisioning ILB in AWS

Kubernetes, based on the network plugin model, gives us a “common” construct for provisioning the ILB that is independent of the cloud provider syntax: a Service of type LoadBalancer, sketched below.
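
A minimal sketch of such a manifest (the names, ports, and label selector here are illustrative assumptions, not taken from the original post) might look like this; the provider-specific piece is reduced to an optional annotation, while the Service spec itself stays the same across clouds:

apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb                # hypothetical Service name
  annotations:
    # Assumption: on Azure, an *internal* load balancer is requested via this
    # annotation; AWS and GCP use analogous annotations, with the spec unchanged.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080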

“Common” Storage Construct

Now let us consider a storage resource type. As you would expect, provisioning a storage volume in Azure versus Google is different.

Here is a CLI for provisioning a disk in Azure:

CLI for provisioning a disk in Azure

Here is a CLI for provisioning a persistent disk in Google:

CLI for provisioning a persistent disk in Google

Once again, under the plugin (device) model, Kubernetes gives us a “common” construct for provisioning storage that is independent of the cloud provider syntax.

The example below shows the “common” storage construct across cloud providers. In this example, a claim for a persistent volume of size 1Gi and access mode “ReadWriteOnce” is being made. Additionally, the storage class “cloud-storage” is associated with the request. As we will see next, persistent volume claims decouple us from the underlying storage mechanism.

cloud-storage-claim
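
Based on that description, a sketch of the claim might look like the following (reconstructed from the parameters mentioned above, not copied from the original screenshot):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloud-storage-claim        # matches the caption above
spec:
  accessModes:
    - ReadWriteOnce                # access mode from the description
  storageClassName: cloud-storage  # storage class from the description
  resources:
    requests:
      storage: 1Gi                 # requested size from the description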

The StorageClass determines which storage plugin gets invoked to support the persistent volume claim. In the first example below, StorageClass represents the Azure Disk plugin. In the second example below, StorageClass represents the Google Compute Engine (GCE) Persistent Disk.

StorageClass

StorageClass Diagram 2
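
For illustration, here are sketches of the two StorageClass definitions described above, using the in-tree provisioner names (kubernetes.io/azure-disk and kubernetes.io/gce-pd); the parameter values are illustrative assumptions, and newer clusters would typically use the corresponding CSI drivers instead:

# Azure: StorageClass backed by the Azure Disk plugin
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS   # illustrative disk SKU
  kind: Managed

# Google: the same claim can be satisfied by the GCE Persistent Disk plugin
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard                  # illustrative disk type

Because the claim references the StorageClass only by name, the same persistent volume claim works unchanged on either cloud.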

“Common” Compute Construct

Finally, let us consider a compute resource type. As you would expect, provisioning a compute resource in Azure versus GCE is different.

Here is a CLI for provisioning a GPU VM in Azure:

CLI for provisioning a GPU VM in Azure

Here is a CLI for provisioning a GPU in Google Cloud:

CLI for provisioning a GPU in Google Cloud:

Once again, under the plugin (device) model, Kubernetes gives us a “common” compute construct across cloud providers. In the example below, we are requesting a compute resource of type GPU. An underlying plugin (NVIDIA) installed on the Kubernetes node is responsible for provisioning the requisite compute resource.

requesting a compute resource of type GPU

Source: https://docs.microsoft.com/en-us/azure/aks/gpu-cluster
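
As a rough sketch (assuming the NVIDIA device plugin is installed on the node, which advertises the extended resource nvidia.com/gpu; the pod and image names are illustrative), the request looks the same regardless of which cloud the node runs in:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                    # hypothetical pod name
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-check              # hypothetical container name
      image: nvidia/cuda:11.0-base  # illustrative CUDA base image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1         # GPU exposed via the device plugin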

Summary

As you can see from the examples discussed in this post, Kubernetes is becoming a “common” infrastructure API across private, public, hybrid, and multi-cloud setups. Even traditional “infrastructure as code” tools like Terraform are building on top of Kubernetes.

Azure Arc is one of the significant announcements coming out of #msignite this week. As depicted in the picture below, Azure Arc is a single control plane across multiple clouds, premises, and the edge.

Azure Arc

Source: https://azure.microsoft.com/en-us/services/azure-arc/

But we’ve seen single control planes before, no?

That is correct. The following snapshot (from 2013) shows App Controller securely connected to both on-premises and Microsoft Azure resources.

Azure App Controller in 2013

Source: https://blogs.technet.microsoft.com/yungchou/2013/02/18/system-center-2012-r2-explained-app-controller-as-a-single-pane-of-glass-for-cloud-management-a-primer/

So, what is different with Azure Arc?

Azure Arc is not just a “single-pane” of control for cloud and on-premises. Azure Arc takes Azure’s all-important control plane – namely, the Azure Resource Manager (ARM) – and extends it *outside* of Azure. In order to understand the implication of the last statement, it will help to go over a few ARM terms.

Let us start with the diagram below. ARM (shown in green) is the service used to provision resources in Azure (via the portal, Azure CLI, Terraform, etc.). A resource can be anything you provision inside an Azure subscription. For example, SQL Database, Web App, Storage Account, Redis Cache, and Virtual Machine. Resources always belong to a Resource Group. Each type of resource (VM, Web App) is provisioned and managed by a Resource Provider (RP). There are close to two hundred RPs within the Azure platform today (and growing with the release of each new service).

ARM

Source: http://rickrainey.com/2016/01/19/an-introduction-to-the-azure-resource-manager-arm/

Now that we understand the key terms associated with ARM, let us return to Azure Arc. Azure Arc takes the notion of the RP and extends it to resources *outside* of Azure. Azure Arc introduces a new RP called “Hybrid Compute”. See the details for the RP HybridCompute in the screenshot below. As you can imagine, the HybridCompute RP is responsible for managing the resources *outside* of Azure. HybridCompute RP manages the external resources by connecting to the Azure Arc agent, deployed to the external VM. The current preview is limited to Windows or Linux VM. In the future, the Azure Arc team plans to support containers as well.

RP Hybrid Compute Screenshot

Note: You will first need to register the provider using the command az provider register -n Microsoft.HybridCompute

Once we deploy the Azure Arc agent [1] to a VM running in Google Cloud, it shows up inside the Azure Portal within the resource group “az_arc_rg” (see screenshot below). The Azure Arc agent requires connectivity to Azure Arc service endpoints for this setup to work. All connections are outbound from the agent to Azure and are secured with SSL. All traffic can be routed via an HTTPS proxy.

deploy the Azure Arc agent [1] to a VM running in Google cloud

Since the Google Cloud-hosted VM (gcp-vm-001) is an ARM resource, it is an object inside Azure AD. Furthermore, a managed identity can be associated with the Google VM.

Benefits of Extending ARM to Resources Outside Azure:

  • Ability to manage external VMs as ARM resources using the Azure portal and CLI, including the ability to add tags, as shown in the sketch after this list.
  • Ability to centrally manage access and security policies for external resources with Role-Based Access Control (RBAC).
    Microsoft Hybrid Compute Permissions
  • Ability to enforce compliance and simplify audit reporting.
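As a rough illustration of the first two benefits, the commands below tag the Arc-connected VM and scope an RBAC assignment to just that machine. The tag values, the assignee, and the subscription ID placeholder are hypothetical:

# Add tags to the external VM exactly as you would for a native Azure resource:
az resource tag --tags env=dev owner=ops --resource-group az_arc_rg --name gcp-vm-001 --resource-type "Microsoft.HybridCompute/machines"

# Grant a user Reader access scoped to this single machine (assignee and subscription ID are placeholders):
az role assignment create --assignee user@example.com --role "Reader" --scope "/subscriptions/<subscription-id>/resourceGroups/az_arc_rg/providers/Microsoft.HybridCompute/machines/gcp-vm-001"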

[1] The Azure Arc agent is installed by running the following script on the remote VM. The script is generated from the Azure portal (the tenant and subscription IDs are left blank here):

# Download the package:
Invoke-WebRequest -Uri https://aka.ms/AzureConnectedMachineAgent -OutFile AzureConnectedMachineAgent.msi

# Install the package:
msiexec /i AzureConnectedMachineAgent.msi /l*v installationlog.txt /qn | Out-String

# Run the connect command (the call operator '&' is needed to invoke a quoted path in PowerShell):
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "az_arc_rg" --tenant-id "" --location "westus2" --subscription-id ""

Late last Friday, the news of the Joint Enterprise Defense Infrastructure (JEDI) contract award to Microsoft Azure sent seismic waves through the software industry, government, and commercial IT circles alike.

Even as the dust settles on this contract award, including the inevitable requests for reconsideration and protest, DoD’s objectives from the solicitation are apparent.

DOD’s JEDI Objectives

Public Cloud is the Future DoD IT Backbone

A quick look at the JEDI statement of objectives illustrates the government’s comprehensive enterprise expectations with this procurement:

  • Fix fragmented, largely on-premises computing and storage solutions – This fragmentation makes it impossible to make data-driven decisions at “mission speed” and negatively impacts outcomes. Moreover, the rising level of cyber-attacks requires a comprehensive, repeatable, verifiable, and measurable security posture.
  • Commercial parity with cloud offerings for all classification levels – A cordoned-off, dedicated government cloud that lags in features is no longer acceptable. Furthermore, for unclassified workloads, data center locations need not be dedicated exclusively to a government-only cloud.
  • Globally accessible and highly available, resilient infrastructure – The need for infrastructure that is reliable, durable, and can continue to operate despite catastrophic failure of pieces of infrastructure is crucial. The infrastructure must be capable of supporting geographically dispersed users at all classification levels, including in closed-loop networks.
  • Centralized management and distributed control – Apply security policies; monitor security compliance and service usage across the network; and accredit standardized service configurations.
  • Fortified security that enables enhanced cyber defenses from the root level – These cyber defenses extend from the root level, through the application layer, and down to the data layer, with improved capabilities including continuous monitoring, auditing, and automated threat identification.
  • Edge computing and storage capabilities – These capabilities must be able to function totally disconnected, including provisioning IaaS and PaaS services and running containerized applications, data analytics, and processing data locally. These capabilities must also provide for automated bidirectional synchronization of data storage with the cloud environment when a connection is re-established.
  • Advanced data analytics – An environment that securely enables timely, data-driven decision making and supports advanced data analytics capabilities such as machine learning and artificial intelligence.

Key Considerations: Agility and Faster Time to Market

From its inception, starting with the September 2017 memo announcing the formation of the Cloud Executive Steering Group and culminating in the release of the RFP in July 2018, DoD has been clear: it wanted a single cloud contract. It deemed a multi-cloud approach to be too slow and costly. The Pentagon’s Chief Management Officer defended the single-cloud approach by suggesting that a multi-cloud contract “could prevent DoD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable”, resulting in “additional costs and technical complexity on the Department in adopting enterprise-scale cloud technologies under a multiple-award contract. Requiring multiple vendors to provide cloud capabilities to the global tactical edge would require investment from each vendor to scale up their capabilities, adding expense without commensurate increase in capabilities.”

A Single, Unified Cloud Platform Was Required

The JEDI solicitation expected a unified cloud platform that supports a broad set of workloads, with detailed requirements for scale and long-term price projections. Representative workloads included:

  1. Unclassified webserver with a peak load of 400,000 requests per minute
  2. High volume ERP system – ~30,000 active users
  3. IoT + Tactical Edge – A set of sensors that captures 12 GB of High Definition Audio and Video data per hour
  4. Large data set analysis – 200 GB of storage per day, 4.5 TB of online result data, 4.5 TB of nearline result data, and 72 TB of offline result data
  5. Small form-factor data center – 100 PB of storage with 2000 cores that is deliverable within 30 days of request and able to fit inside a U.S. military cargo aircraft

Massive Validation for the Azure Platform

The fact that the Azure platform is the “last cloud standing” at the end of the long and arduous selection process is massive validation from our perspective.

As other bidders have discovered, much to their chagrin, the capabilities described above are not developed overnight. It’s a testament to Microsoft’s sustained commitment to meeting the wide-ranging requirements of the JEDI solicitation.

Lately, almost every major cloud provider has invested in bringing the latest innovations in compute (GPUs, FPGAs, ASICs), storage (very high IOPS, HPC), and network (VMs with 25 Gbps bandwidth) to their respective platforms. In the end, what I believe differentiates Azure is a long-standing focus on understanding and investing in enterprise IT needs. Here are a few examples:

  • Investments in Azure Stack started in 2010 with the announcement of the Azure Appliance. It took over seven years of learning to finally run Azure completely in an isolated mode. Since then, investments in Data Box Edge and Azure Sphere, along with a sustained commitment to hybrid solutions, have been key differentiators for Azure.
  • With 54 Azure regions worldwide (available in 140 countries), including dedicated Azure Government regions – US DoD Central, US DoD East, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia, US Sec East, US Sec West – the Azure team has placed the highest priority on establishing a global footprint. Additionally, having a common team that builds, manages, and secures Azure’s cloud infrastructure has meant that even public Azure services carry DoD CC SRG IL 2, FedRAMP Moderate, and FedRAMP High designations.
  • Whether it is embracing Linux and Docker, providing the highest number of contributions to GitHub projects, or open-sourcing the majority of Azure SDKs and services, Microsoft has demonstrated a leading commitment to open source solutions.
  • Decades of investment in Microsoft Research, including the core Microsoft Research Labs and Microsoft Research AI, have given Microsoft the most well-rounded story for advanced data analytics and AI.
  • Documentation and ease of use have been accorded the highest engineering priorities. Case in point: rebuilding Azure docs entirely on GitHub, which has enabled an open feedback mechanism powered by GitHub Issues.