Microsoft’s RPA (Robotic Process Automation) solution will transform organizations, helping teams realize cost savings and increased productivity. And the beauty of Microsoft’s RPA capabilities is that they build atop an organization’s existing Azure infrastructure, tapping into native Microsoft cloud services – like Azure Active Directory, on-demand compute, native network security tools, and role-based access control (RBAC) – rather than rebuilding these capabilities from scratch on another solution.

While automation has been around for decades, Microsoft’s unified cloud platform provides seamless, integrated process automation services. Microsoft automation capabilities can extend across all departments of a large enterprise, optimizing processes and significantly trimming costs.

Many years of Azure experience and award-winning Power Platform capabilities provided the AIS team the opportunity to collaborate with Microsoft on a whitepaper for enterprise deployment of RPA. We’re grateful for the opportunity to help create opinionated guidance and resources for organizations looking to enable and govern the rollout of Power Automate for RPA and other hyperautomation scenarios. Many talented SMEs across Microsoft and AIS partnered to deliver valuable guidance for you to accelerate your path to automation. In this blog, our team shares six key learnings and takeaways from the project.

What We Learned Helping Write the RPA Whitepaper

It’s hard to beat the opportunity to work with a product team. We’ve been a dedicated Microsoft partner for nearly as long as we’ve been a company. Throughout our partnership, we’ve been lucky enough to work closely with product engineering across various teams. It’s always enlightening and inspiring and gives both of our teams an opportunity to apply product vision to customer priorities.

Opinionated first, genericized later. New technology is released at a rapid clip. We know there’s a lot of noise. You need to get things done efficiently and the last thing you need is a case of analysis paralysis. Opinionated guidance will help you do that, giving you a leg up and a head start on the best approach. We’ve enabled cloud capabilities at some of the largest and most complex enterprises for over 13 years. We’ve infused those learnings and hardened processes into opinionated guidance by identifying what’s most effective. From identity and network operations to migration and modernization approaches, we arm delivery teams with constantly improving best practices and resources. Many organizations approaching RPA won’t be starting from ground zero in the cloud; the Microsoft RPA solution, with some help from other cloud resources (like our CDF), will allow you to leverage and build on your cloud investments.

Successful enterprise programs require insights from real-world, practical experience. This whitepaper is an example of the powerful impact of talented product teams and veteran systems integrators coming together. We were able to bring learnings from building out enterprise-level Azure environments, and together with Microsoft, we shaped the vision for RPA technology into practical, hands-on resources for building and supporting hyperautomation. Insight from previous engagements, experience with enterprise technologies outside the Microsoft suite, and an understanding of customers’ business outcomes allowed us to inform the product and develop the resources to enable it. The combination of product team vision and hands-on, experienced practitioners is a winning formula for developing valuable customer guidance.

Checklists are always helpful. Checklists drove the format of the whitepaper to help us make the content as actionable as possible. The goal was to set forth a set of optimal milestones and share the thinking behind them to help teams make progress faster. This approach was driven by Apostolis Papaioannou, Principal Program Manager at Microsoft. His vision was to create consumable content for such a vast topic, building on the foundation of the Holistic Enterprise Automation Techniques (HEAT). What you will find in this whitepaper is a thorough overview and actionable steps to get a workable environment up and running quickly. There’s a wealth of additional material and documentation available today, with more coming soon.

Cross-cutting teams are the means to succeed. In addition to the Microsoft contributors, we brought in AIS Power Platform, Azure, M365, and other architects with varying skill sets to support the whitepaper, and that range of experience and perspectives made a big impact on the outcome and guidance. This experience was yet another example of the success of cross-functional, cross-cloud collaboration, a model the AIS team has adopted with much success.

Write a lot. Trim a lot more. Repeat. This was the flow of operations. We would quickly get up to 100 pages, then trim back a lot. This helped us get all the ideas and perspectives out there before evaluating and focusing on the right message and resources. In some cases, guidance can boil the ocean. Our goal was to vet the options and provide an opinionated best path forward to help organizations focus on the right things.

Get Started: Administering a Low-Code Intelligent Automation Platform for Your Organization

Are you considering RPA solutions for your organization? Check out the whitepaper, Enterprise Deployment for RPA and more in Power Automate, as well as other resources below. Please share a link with your partners and colleagues and let us know what you think. If you have any questions, please reach out to the AIS team.

Access the whitepaper: https://aka.ms/autocoeadminwhitepaper

Helpful Microsoft RPA Resources and Guidance Links

• Automation CoE Blueprint https://aka.ms/autocoeblueprint
• Automation CoE Strategy https://aka.ms/autocoestrategy
• HEAT https://aka.ms/rpapnp
• HEAT video series https://aka.ms/rpapnpvideo
• Whitepaper overview blog: https://powerautomate.microsoft.com/en-us/blog/administer-and-govern-a-low-code-intelligent-automation-platform-whitepaper-enterprise-deployment-for-rpa-and-more-in-power-automate/
• Automation Admin & Governance Whitepaper https://aka.ms/autocoeadminwhitepaper
• Manage Power Automate for Desktop on Windows https://aka.ms/padonwindowspnp
• Hyperautomation SAP Playbook (https://aka.ms/MicrosoftRPAPlaybookForSAPGUI) & video series (https://aka.ms/AutomateItSAPSeries)
• Automate It video series: https://aka.ms/AutomateIt
• RPA in a Day training: https://aka.ms/RPAinaDayPackage

Acknowledgments

Thank you to Apostolis Papaioannou, Kent Weare, Pranav Rastogi, Anitha Natarajan, Jonathan Eckman, Lav Gupta, Brent Wodicka, Vishwas Lele, Gautier Chastan, Kathy Osborne, Rakesh Krishnan, Amit Bhambri, Ashvini Sharma, and Jonathan Kendall for the partnership on this whitepaper.

I was fortunate enough to attend the Microsoft BUILD 2019 Conference in Seattle this year – the company’s annual developer conference. There was a lot of excitement and a TON of great information to consume, from both the scheduled sessions and one-on-one conversations with product team representatives. So I’m wrapping up BUILD 2019 with some of my highlights below.

(Admittedly, these highlights skew towards the technologies I’m currently using most frequently – I’ve grouped some of them into related categories. I’m also sure I’ve left out some high points, so I’ll plan to update this post as needed.)

AIS at BUILD 2019

However, before describing the announcements and technology updates I noted, my number one high point of the week was the session that Vishwas Lele (AIS CTO and Microsoft Azure MVP) gave on Tuesday: “Architecting Cloud-Native Apps with AKS and Cosmos DB.” This was the first year Microsoft allowed a few select partners to lead sessions at BUILD, so I consider his inclusion recognition of the great work he is doing to advance cloud-native technologies on Azure. His session was packed, and attendees got their money’s worth of content related to AKS, Cosmos DB, and strategies for using cloud-native conventions and PaaS services to build resilient, globally scalable applications.

AIS Team at Microsoft Build 2019

Kubernetes and AKS

Most of the discussion about compute on Azure included at least one point related to AKS (Azure Kubernetes Service). AKS was everywhere, and one consistent theme was AKS as a significant portion of Azure’s “compute” offering going forward. There were many exciting Kubernetes-related announcements and demonstrations I had not heard before; a few that stood out to me:

Azure AI

The company’s vision for its Artificial Intelligence (AI) and Machine Learning offerings is stronger than it’s ever been. This story’s been developing for the past few years, and the vision hasn’t always been crystal clear. Over the past two years, I’ve often asked the question “If I were going to start a new custom machine learning project in Azure, what services would I start with?” Usually, that answer has been “Azure Databricks” by default, but I’m now coming around to the idea that there is a viable alternative – or at least additional tools to consider.

The BUILD 2019 conference included great sessions and content focused on Azure AI, segmented into three high-level areas:

  • Knowledge Mining: This is concerned with using Azure services to help discover hidden insights from your content – including docs, images, and other media. Sessions and announcements in this area focused on enhancements to two key services: Azure Search and a new “Form Recognizer” service.
  • Azure (Cognitive) Search is now generally available: This service uses built-in AI capabilities to discover patterns and relationships, understand sentiment, extract key phrases, and more, without the need for specific data science expertise. Additionally, Azure allows consumers to customize results by applying custom-tuned ranking models.
  • Form Recognizer: A new service announced in public preview. This service exposes a REST API that accepts document content (PDFs, images, etc.) and extracts text, key/value pairs, and tables. The idea is that usable data can be gleaned from content that has been hard to unlock in the past. A minimal call is sketched below.
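To make the Form Recognizer flow a bit more concrete, here’s a rough sketch of calling the service over REST from Python. The resource name, model ID, and API route are placeholders based on my reading of the preview documentation, so verify them against the current API reference before relying on this.

```python
# Minimal sketch: submitting a PDF to the (preview) Form Recognizer REST API.
# Endpoint, route, model ID, and key below are placeholders/assumptions.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource
ANALYZE_URL = f"{ENDPOINT}/formrecognizer/v1.0-preview/custom/models/<model-id>/analyze"  # illustrative route

with open("invoice.pdf", "rb") as f:
    response = requests.post(
        ANALYZE_URL,
        headers={
            "Ocp-Apim-Subscription-Key": "<your-key>",
            "Content-Type": "application/pdf",
        },
        data=f.read(),
    )

response.raise_for_status()
result = response.json()
# The payload contains the extracted text, key/value pairs, and tables.
print(result)
```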

Machine Learning: A set of services that enable building and deploying custom machine learning models. This area represents many capabilities on the Azure platform; I found that at this year’s conference some great new additions and enhancements were highlighted that help to answer that first “where do I start?” question. Some highlights:

  • AutoML is in public preview: This service helps a consumer find the “best” machine learning model for a provided data set and desired outcome. It does this by accepting the data set from the user (in preview, it exclusively accepts files stored in blob storage), automatically training several different models on that data, comparing their performance, and reporting the results back to the user (see the sketch after this list).
  • Visual Interface for Azure Machine Learning Service is in public preview: This service enables consumers to build ML models using a drag and drop interface, with the ability to drop down into Python code when needed for specific activities. In many ways, this is a reincarnation of the “Azure ML Studio” service of the past, without some of the limitations that held this service back (data size restrictions, etc.).
  • Choose your underlying compute: Choose where your models are trained and run, including the Machine Learning Services managed compute environment, AKS, Virtual Machines, Azure Databricks, HDInsight clusters, or in Azure Data Lake Analytics.
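As a sketch of what the AutoML flow looks like in code, here’s a rough outline using the Python azureml-sdk. The workspace, data path, and column names are hypothetical, and the AutoMLConfig parameters have shifted a bit across SDK versions, so treat this as an outline rather than a recipe.

```python
from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()  # assumes a config.json for an existing workspace

# Tabular dataset backed by blob storage (the preview requirement noted above).
datastore = ws.get_default_datastore()
training_data = Dataset.Tabular.from_delimited_files(path=(datastore, "training/churn.csv"))

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    training_data=training_data,
    label_column_name="churned",   # hypothetical label column
    iterations=20,                 # how many candidate models to try
)

run = Experiment(ws, "automl-demo").submit(automl_config)
run.wait_for_completion(show_output=True)
best_run, fitted_model = run.get_output()  # the best model AutoML found
```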

AI apps and agents: This area includes Azure Cognitive Services and Azure Bot Service. Azure Cognitive Services is a set of APIs that allow developers to call pre-built AI models to enhance their applications in the areas of computer vision, speech-to-text, and language. A few data points that stuck out to me:

  • A new Cognitive Services category – “Decision”: This category will initially include three services: 1) Content Moderator, 2) Anomaly Detector (currently in preview), and 3) Personalizer (also currently in preview). Personalizer is a service to help promote relevant content and experiences for users. A minimal Anomaly Detector call is sketched after this list.
  • “Conversation Transcription”: An advanced speech-to-text capability.
  • Container Support Expansion: The portfolio of Cognitive Services that can be run locally in a Docker container now includes Anomaly Detector, Speech-to-Text, and Text-to-Speech, in addition to the existing text analytics and vision containers.
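Here is a rough sketch of calling the Anomaly Detector preview from Python over REST. The resource endpoint and key are placeholders, and the route reflects the v1.0 preview documentation as I understood it, so verify it against the current API reference.

```python
import requests
from datetime import datetime, timedelta

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"           # placeholder resource
DETECT_URL = f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect"   # preview route (verify)

# Build a small daily series with one obvious spike; the service expects at
# least a dozen points, so we generate 30.
series = [
    {
        "timestamp": (datetime(2019, 5, 1) + timedelta(days=i)).strftime("%Y-%m-%dT00:00:00Z"),
        "value": 90.0 if i == 20 else 30.0 + i * 0.1,
    }
    for i in range(30)
]

resp = requests.post(
    DETECT_URL,
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={"granularity": "daily", "series": series},
)
resp.raise_for_status()
print(resp.json()["isAnomaly"])  # one boolean per point; index 20 should be flagged
```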

.NET Platform

It’s amazing for me to consider that .NET is now 17 years old – the official release of .NET 1.0 was in February 2002! And although .NET is now on the “mature” end of the spectrum compared to many other active programming frameworks, there are still many new developers adding C#, VB.NET, F#, or other CLR-based languages to their repertoire. In fact, at BUILD 2019 Microsoft noted that “a million new active .NET developers” were added in the last year alone.

One of the reasons for this is that the .NET team continues to innovate with offerings like .NET Core – announced in 2014 and first released in 2016. .NET Core is the cross-platform development stack that runs across operating systems and has been the “future” of .NET for some time.

One of the major announcements that will affect .NET developers in the future is that the next “release” of .NET Core will be “.NET 5”. Yes, this means there will be one unified platform that includes legacy .NET Framework components, .NET Core, and Mono. After the .NET 5 release in 2020, there will be one annual release of .NET.

.NET Schedule

A few other .NET related data points that stuck out to me as items to investigate in more detail:

  • “Blazor” got a lot of session time and seems to be a real project now. For some people, the idea of running C# in the browser can devolve into a philosophical debate. However, it’s clear that Microsoft sees enough upside that it has moved the technology beyond an “experimental” phase into a fully-supported preview.
  • .NET for Apache Spark was released as open source, aiming to give .NET developers access to Apache Spark.
  • Frequent mentions of gRPC support in .NET Core. gRPC is the language-agnostic remote procedure call framework published by Google.
  • ML.NET 1.0: A cross-platform (.NET Core) framework for creating custom ML models using C# or F# – without having to leave the .NET ecosystem.

Cosmos DB

BUILD 2019 also had a few great sessions and announcements related to Cosmos DB, Microsoft’s fully managed, globally distributed, multi-model database service. My highlights:

  • Best practices for Azure Cosmos DB: Data modeling, Partitioning, and RUs: A great session given by Deborah Chen and Thomas Weiss (program managers on the Cosmos DB team), with practical, actionable examples of how to partition data, how to minimize request units (RUs) for common database calls, and more (see the sketch after this list).
  • Etcd API: In Kubernetes, etcd is used to store the state and configuration of clusters. Ensuring the availability, reliability, and performance of etcd is crucial to the overall health, scalability, elasticity, availability, and performance of a Kubernetes cluster. The etcd API in Azure Cosmos DB allows you to use Azure Cosmos DB as the backing store for Azure Kubernetes.
  • Spark API: New (preview) native support for Spark through the Cosmos DB Spark API. This one is interesting to me because it has the potential to enable a “serverless experience for Apache Spark” – where the “cluster” is Cosmos DB.  I would pay close attention to the consumed RUs though!
  • Cosmos DB will support multi-model access in the future: Cosmos DB is a multi-model database, meaning you can access the data using many different APIs. However, until now this has been a choice made up front, when the database is created. In his “Inside Datacenter Architecture” session, Mark Russinovich announced that in the future, Cosmos DB will support multi-model access to the same data.
  • Jupyter notebooks running inside Azure Cosmos DB: announced in preview. A native notebook experience that supports all the Cosmos DB APIs and is accessed directly in the Azure Portal.
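To illustrate the partitioning and RU points from that best-practices session, here’s a small sketch using the azure-cosmos Python SDK (v4). The account, database, container, and partition-key values are hypothetical, and the request-charge header access shown is somewhat version-dependent, so treat this as an outline.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("retail").get_container_client("orders")

# Point read (id + partition key): the cheapest way to fetch a single item.
order = container.read_item(item="order-123", partition_key="customer-42")

# Scoping a query to one partition key avoids an expensive cross-partition fan-out.
items = list(container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "customer-42"}],
    partition_key="customer-42",
))

# The RU charge for the last operation is returned in a response header.
charge = container.client_connection.last_response_headers.get("x-ms-request-charge")
print(f"Last request consumed {charge} RUs")
```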

Other Announcements

Below are some other BUILD 2019 announcements, highlights, and data points I’m investigating in the coming weeks:

If you have any questions, feel free to reach out to me on Twitter at @Bwodicka or contact the AIS team online.

If you’ve worked in software development or IT for any amount of time, chances are you’ve at least heard about containers…and maybe even Kubernetes.

Maybe you’ve heard that Google manages to spin up two billion containers a week to support their various services, or that Netflix runs its streaming, recommendation, and content systems on a container orchestration platform called Titus.

This is all very exciting stuff, but I’m more excited than ever to write and talk about these things, for one simple reason: we are finally at a point where these technologies can make our lives as developers and IT professionals easier!

And even better…you no longer have to be a Google (or one of the other giants) employee to have a practical opportunity to use them.

Containers

Before getting into orchestrators and what they actually offer, let’s briefly discuss the fundamental piece of technology that all of this depends on – the container itself.

A container is a digital package of sorts, and it includes everything needed to run a piece of software.  By “everything,” I mean the application code, any required configuration settings, and the system tools that are normally brought to the table by a computer’s operating system. With those three pieces, you have a digital package that can run a software application in isolation across different computing platforms because the dependencies are included in that package.

And there is one more feature that makes containers really useful – the ability to snapshot the state of a container at any point. This snapshot is called a container “image.” Think of it the same way you would think of a virtual machine image, except that many of the complexities of capturing the current state of a full-blown machine image (state of the OS, consistency of attached disks at the time of the snapshot, etc.) are not present. Only the components needed to run the software are included, so one or a million instances can be spun up directly from that image, and they should not interfere with each other. These “instances” are the actual running containers.

So why is that important? Well, we’ve just alluded to one reason: Containers can run software across different operating systems (various Linux distributions, Windows, Mac OS, etc.).  You can build a package once and run it in many different places. It should seem pretty obvious at this point, but in this way, containers are a great mechanism for application packaging and deployment.

To build on this point, containers are also a great way to distribute your packages as a developer.  I can build my application on my development machine, create a container image that includes the application and everything it needs to run, and push that image to a remote location (typically called a container registry) where it can be downloaded and turned into one or more running instances.
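As a sketch of that build-and-push workflow, here’s what it might look like with the Docker SDK for Python, assuming a local Docker daemon, a Dockerfile in the current directory, and placeholder registry and image names:

```python
# Sketch: build an image locally and push it to a registry using docker-py.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="myregistry.example.com/myapp:1.0")

# Authenticate and push the image to the remote registry.
client.login(registry="myregistry.example.com", username="<user>", password="<password>")
for line in client.images.push("myregistry.example.com/myapp", tag="1.0", stream=True, decode=True):
    print(line)

# Anywhere that can reach the registry can now pull the image and run it.
container = client.containers.run("myregistry.example.com/myapp:1.0", detach=True)
print(container.status)
```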

I said that you can package everything your container needs to run successfully, but the last point to make is that the nature of the container package gives you a way to enforce a clear boundary for your application and a way to enforce runtime isolation. This feature is important when you’re running a mix of various applications and tools…and you want to make sure a rogue process built or run by someone else doesn’t interfere with the operation of your application.

Container Orchestrators

So containers came along and provided a bunch of great benefits for me as a developer.  However, what if I start building an application and then realize I need a way to organize and run multiple instances of my container at runtime to meet the expected demand?  Or better yet, what if I’m building a system composed of multiple microservices that each need their own running container instances?  Do I have to figure out a way to maintain the desired state of this system, which is really a dynamic collection of container instances?

This is where container orchestration comes in.  A container orchestrator is a tool that helps manage how your container instances are created, scaled, managed at runtime, placed on underlying infrastructure, and networked to communicate with each other.  The “underlying infrastructure” is a fleet of one or more servers that the orchestrator manages – the cluster.  Ultimately, the orchestrator helps manage the complexity of taking your container-based, in-development applications to a more robust platform.

Typically, interaction with an orchestrator occurs through a well-defined API, and the orchestrator takes up the tasks of creating, deploying, and networking your container instances – exactly as you’ve specified in your API calls across any container host (servers included in the cluster).

Using these fundamental components, orchestrators provide a unified compute layer on top of a fleet of machines that allows you to decouple your application from these machines. And the best orchestrators go one step further and allow you to specify how your application should be composed, thus taking the responsibility of running the application and maintaining the correct runtime configuration…even when unexpected events occur.
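For a concrete (if simplified) picture of “declaring desired state” to an orchestrator, here’s a sketch using the official Kubernetes Python client to ask for three replicas of a placeholder image. The names and image are hypothetical; the orchestrator’s job is then to keep three instances running wherever it sees fit.

```python
# Sketch: declare desired state through the orchestrator's API.
from kubernetes import client, config

config.load_kube_config()  # use your local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # "keep three instances running" -- the orchestrator maintains this
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="myapp", image="myregistry.example.com/myapp:1.0"),
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```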


Kubernetes

Kubernetes is a container orchestrator that delivers the capabilities mentioned above. (The name “Kubernetes” comes from the Greek term for “pilot” or “helmsman of a ship.”) Currently, it is the most popular container orchestrator in the industry.

Kubernetes was originally developed by Google, based in part on the lessons learned from developing Borg, its internal cluster management and scheduling system.  Google open-sourced Kubernetes in 2014 and donated it to the newly formed Cloud Native Computing Foundation (CNCF) in 2015 to encourage community involvement in its development. The CNCF is part of the Linux Foundation and operates as a “vendor-neutral” governance group. Kubernetes is now consistently in the top ten open source projects based on total contributors.

Many in the industry say that Kubernetes has “won” the mindshare battle for container orchestrators, but what gives Kubernetes such a compelling value proposition?  Well, beyond meeting the capabilities mentioned above regarding what an orchestrator “should” do, the following points also illustrate what makes Kubernetes stand out:

  • The largest ecosystem of self-driven contributors and users of any orchestrator technology facilitated by CNCF, GitHub, etc.
  • Extensive client application platform support, including Go, Python, Java, .NET, Ruby, and many others.
  • The ability to deploy clusters on-premises or in the cloud, including native, managed offerings from the major public cloud providers (AWS, GCP, Azure). In fact, you can use the SAME API with any deployment of Kubernetes!
  • Diverse workload support with extensive community examples – stateless and stateful, batch, analytics, etc.
  • Resiliency – Kubernetes is a loosely-coupled collection of components centered around deploying, maintaining and scaling workloads.
  • Self-healing – Kubernetes works as an engine for resolving state by converging the actual and the desired state of the system.

Kubernetes Architecture

A Kubernetes cluster will always include a “master” and one or more “workers”.  The master is a collection of processes that manage the cluster, and these processes are deployed on a master node or multiple master nodes for High Availability (HA).  Included in these processes are:

  • The API server (kube-apiserver) and a distributed key-value store for the persistence of cluster management data (etcd)
  • The core control loops for monitoring existing state and managing desired state (kube-controller-manager)
  • The core control loops that allow specific cloud platform integration (cloud-controller-manager)
  • A scheduler component for the placement of Kubernetes container groups, known as pods (kube-scheduler)

Worker nodes are responsible for actually running the container instances within the cluster.  They are comparatively simple: they receive instructions from the master and set about running the containers they’re told to run.  On the worker node itself, three main components make it a worker node in a Kubernetes cluster: an agent called the kubelet that registers the node and communicates with the master, a network proxy for interfacing with the cluster’s network stack (kube-proxy), and a plug-in interface, called the container runtime interface, that allows the kubelet to use a variety of container runtimes.
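To make the master/worker split a bit more tangible, here’s a small sketch that uses the Kubernetes Python client (and your local kubeconfig) to ask the API server about each node, its role labels, and the kubelet version it reports:

```python
from kubernetes import client, config

config.load_kube_config()  # authenticate against the cluster's API server
core = client.CoreV1Api()

for node in core.list_node().items:
    roles = [label for label in node.metadata.labels if label.startswith("node-role.kubernetes.io/")]
    print(node.metadata.name, roles or ["worker"], node.status.node_info.kubelet_version)
```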

diagram of Kubernetes architecture


Managed Kubernetes and Azure Kubernetes Service

“Managed Kubernetes” is a deceptively broad term that describes a scenario where a public cloud provider (Microsoft, Amazon, Google, etc.) goes a step beyond simply hosting your Kubernetes clusters in virtual machines to take responsibility for deploying and managing your cluster for you.  Or more accurately, they will manage portions of your cluster for you.  I say “deceptively” broad for this reason – the portions that are “managed” vary by vendor.

The idea is that the cloud provider is:

  1. Experienced at managing infrastructure at scale and can leverage tools and processes most individuals or companies can’t.
  2. Experienced at managing Kubernetes specifically, and can leverage dedicated engineering and support teams.
  3. Able to add additional value by providing supporting services on the cloud platform.

In this model, the provider does things like abstracting the need to operate the underlying virtual machines in a cluster, providing automation for actions like scaling a cluster, upgrading to new versions of Kubernetes, etc.

So the advantage for you, as a developer, is that you can focus more of your attention on building the software that will run on top of the cluster, instead of on managing your Kubernetes cluster, patching it, providing HA, etc. Additionally, the provider will often offer complementary services you can leverage like a private container registry service, tools for monitoring your containers in the cluster, etc.

Microsoft Azure offers the Azure Kubernetes Service (AKS), Azure’s managed Kubernetes offering. AKS allows full production-grade Kubernetes clusters to be provisioned through the Azure portal or automation scripts (ARM templates, PowerShell, the CLI, or a combination); a minimal SDK-based sketch follows the list below.  Key components of the cluster provisioned through the service include:

  • A fully managed, highly available master. There’s no need to run separate virtual machines for the master components; the service provides them for you.
  • Automated provisioning of worker nodes – deployed as Virtual Machines in a dedicated Azure resource group.
  • Automated cluster node upgrades (Kubernetes version).
  • Cluster scaling through auto-scale or automation scripts.
  • CNCF certification as a conformant managed Kubernetes service. This means its Kubernetes implementation passes the CNCF’s conformance tests and behaves consistently with other certified distributions.
  • Integration with supporting Azure services including Azure Virtual Networks, Azure Storage, Azure Role-Based Access Control (RBAC), and Azure Container Registry.
  • Integrated logging for apps, nodes, and controllers.
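For completeness, here’s a rough sketch of provisioning an AKS cluster from Python with the azure-mgmt-containerservice SDK. This assumes a recent (track 2) SDK plus azure-identity; the subscription, resource group, names, and sizes are all placeholders, and older SDK versions use create_or_update without the begin_ prefix, so treat this as an outline rather than a recipe.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ManagedClusterIdentity,
)

aks_client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

cluster_definition = ManagedCluster(
    location="eastus",
    dns_prefix="myaks",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="nodepool1",
            count=3,                    # three worker nodes
            vm_size="Standard_DS2_v2",
            mode="System",
        )
    ],
)

poller = aks_client.managed_clusters.begin_create_or_update(
    "my-resource-group", "my-aks-cluster", cluster_definition
)
cluster = poller.result()
print(cluster.provisioning_state)
```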

Conclusion

The world of containers continues to evolve, and orchestration is an important consideration when deploying your container-based applications to environments beyond “development.”  While not simple, Kubernetes is a very popular choice for container orchestration and has extremely strong community support.  The evolution of managed Kubernetes makes using this platform more realistic than ever for developers (or businesses) interested in focusing on “shipping” software.

Containers are, for good reason, getting a lot of attention.  For the cost of managing some added complexity, they provide a unique level of flexibility, the ability to scale, and the ability to run software across cloud and on-premises environments…the list of benefits goes on and on.  And usually when you hear about containers in the technical press, they’re part of an overarching story about an organization that moved to some highly scalable, microservices-based architecture to meet their ridiculous capacity demands (Netflix, Google, etc.).

At the most basic level, however, containers are about being able to streamline the process of installing and running software. In fact, the fundamental concepts behind containers map almost one-to-one with what’s been traditionally required to install a piece of software on your laptop: Read More…

In this video blog, I’ll walk you through building a continuous integration and continuous delivery (CI/CD) pipeline using the latest tools from Microsoft, including Visual Studio Team Services (VSTS) and Azure. The pipeline is built to support a .NET core application, and the walkthrough includes the following steps:

  1. Configuring Continuous Integration (CI) with VSTS Build services
  2. Adding unit testing and validation to the CI process
  3. Adding Continuous Deployment (CD) with VSTS Release Management & Azure PaaS
  4. Adding automated performance testing to the pipeline
  5. Promotion of the deployment to production once validated
  6. Sending feedback on completion of the process to Slack

Make no mistake, most organizations and government agencies are—at least in part—software companies. The backbone of the services and products they sell, the internal business processes they use, and the customer feedback mechanisms they rely on are all built on software. Even in the age of software as a service (SaaS) – a modern organization’s portfolio of applications and the specifics of how these apps are used influence its most important decisions.

So while it’s easy to understand that software is a foundational component to modern business, often the decision to invest in building or offering software to users must also be accompanied by a more specific, anticipated return on that investment. That process can go like this:
Read More…

An Azure Machine Learning News Aggregator Solution

Here is an overview of a solution built by AIS that delivers personalized, machine-curated news and related significant public-domain data, powered by Azure Machine Learning and Azure Cognitive Services.

A Little Background

Today’s 24-hour news cycle produces a vast amount of publicly available information about events occurring all over the globe. Imagine the data produced in a single hour, let alone a single day. Now imagine having the responsibility of distilling that data into actionable information for a single team or individual. It would quickly become overwhelming! This was the reality for our client. They have offices and teams all over the world, and each team wanted a personalized view of the data relevant to its region, mission, and preferences. By implementing machine learning technology, AIS developed a solution to help break through the noise. Read More…

The Cortana Intelligence Suite is a fully managed data analytics platform that offers a range of services aimed at helping businesses find meaning in all of the data they are collecting. This post will dive into the services that are offered by the platform. Prior to discussing these services in detail, let’s take a quick lap around data analytics specific to goals, what to expect when building and maturing a competency for your business, and challenges businesses experience while on this journey. After laying this groundwork, we’ll discuss the Cortana Intelligence Suite in a bit more detail and provide a list of the services offered by the platform. Additionally, we’ll cover how the Cortana Intelligence Suite can offer the tools needed to implement a robust strategy to address the key challenge areas we’ve described. Read More…
If you’ve ever needed to add document management capabilities for your entities in CRM, you already know that CRM 2013 and CRM Online rely on SharePoint for this functionality. This out-of-the-box integration point is well documented and available for configuration in the CRM administrative interface. When set up, users can create, upload, view, and delete documents in SharePoint locations that correspond to entity instances in CRM.

This post will discuss a different integration point – using search in SharePoint 2013 to expose CRM entity data. When set up properly, SharePoint 2013 can provide a robust, enterprise-level search capability that can be tailored to your needs. It also seems to fill a current functionality gap in CRM that often requires a third-party tool. Granted, you will need SharePoint 2013 Enterprise to realize this setup, but if it is available to you, there should be no need to look anywhere else for search. Read More…

What is ECMAScript? ECMAScript is the standards name for the language many of us know as JavaScript. To envision the relationship between the two, think of ECMAScript as the language, and JavaScript as the dialect. There are other dialects – including JScript, ActionScript, etc. – that are still around. These days, developers commonly use the terms “ECMAScript” and “JavaScript” to mean the same thing – and more and more I see developers referring to implementing ECMA, ES, or ECMAScript.

Version six of the ECMAScript standard – code-named “Harmony” – will include some very interesting features that bring the experience of implementing complex apps on the client side closer to the experience server side developers know and love. Although some of these features have been previously available using a combination of patterns and third party plugins – ECMAScript 6 aims to make many commonly used features available natively.

I’ll walk through a few of my highlights below, but keep in mind that version 6 is a large release that has been a long time coming (five years since the last version’s publication) and has a ton of functionality and improvements that will be well worth exploring. So the features cited below should not be seen as any more or less important than other ES6 features. Also, it’s worth noting that not all browsers will support every feature right away, but it appears the development teams behind the major browsers are very motivated to support as much of the standard as they can – as quickly as possible. Read More…