You are about to start a new C# project and have opened Visual Studio. Let’s discuss some best practices you should employ or consider before writing that first line of code. These practices are just a start. I have scoped this blog post to cover only those best practices that can be enabled before you write a single line of code and that utilize the built-in features of Visual Studio and VS Code. Of course, there are other best practices you should follow, such as unit testing, code coverage, and continuous integration, and other tools you may consider acquiring, but those are out of scope for this blog post.

These best practices are gleaned from years of experience developing C# solutions. First, let me present my comprehensive list of practices, then we can dive into each one individually. I will be assuming you are using .NET Core 3 and Visual Studio 2019 or later for each of the below practices. Most of the below practices are still recommended for earlier versions of .NET and Visual Studio but may be configured differently.

  1. Turn on Warnings as Errors
  2. Turn on Code Analysis
  3. Establish Code Conventions
  4. Consider Enabling Nullable Reference Checking

1. Turn on Warnings as Errors

Why?: You might think that having warnings is enough. However, even the most diligent programmers on a project might have a bad day and miss a warning message from the compiler or forget to fix it before committing code. Instead of relying on human discipline, why not automate that discipline by asking for the compiler’s assistance? By enabling warnings as errors, every compiler warning will now manifest as an error that prevents compilation until it is addressed. As a developer, you can still disable a specific warning at the project level, or you can use #pragma directives to disable warnings for given cases in the codebase. In either case, you now have an affirmative decision in the codebase that can be traced back with version history. You should consider asking developers to include comments explaining why the warning was disabled. This is the single biggest thing I do when starting any C# project. It keeps unfixed warnings from ever entering my codebase.

How?: You can right-click on the project in Visual Studio and select "Properties…". On the properties screen, navigate to the Build tab. On the Build tab, be sure to use the Configuration drop-down to select "All Configurations." Then, under Treat warnings as errors, select the "All" radio button and save the project. You can reference the screenshot below.

Set configurations for new project

For those of you on Linux/MacOS X or using VSCode, you can instead open the .csproj file directly in a text editor. You can then add the following line inside the existing top-level <PropertyGroup> element:

<TreatWarningsAsErrors>true</TreatWarningsAsErrors>

See the screenshot below:

Warning as Errors Screenshot
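
For reference, here is a sketch of how that property sits in the project file, assuming a .NET Core 3.0 console project (your other property values may differ):

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>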

2. Turn on Code Analysis

Why?: Let’s talk about compiler warnings and errors first. Compiler errors typically occur because you made a syntax error and the compiler can’t generate any code. Compiler warnings are typically produced because the compiler can make a decision and still generate code, but it isn’t sure of your intention, hence a warning that the compiler team hopes you will read and then fix. Most compiler teams are wary of adding new errors because doing so will discourage upgrades to a new compiler version. So, most of the time, compiler teams will only add new warnings to a compiler, and by default, those warnings can be disregarded. But in general, I like to think of new compiler warnings as errors the compiler team wished they could have written. Hence, the rationale in the previous section for always enabling warnings as errors.

Now, let’s continue with the viewpoint that compiler warnings are errors, just left as warnings because the compiler team was worried about upgrades to new compiler versions and backward compatibility. With that viewpoint, warnings are still essentially only pointing out significant faults in your codebase; they aren’t pointing out suggestions or even best practices. So, how can we get recommendations or best practices? The answer to that question is to enable Code Analysis and then configure your desired ruleset.

How?: You can right-click on the project in Visual Studio and select "Properties…". On the properties screen, navigate to the Code Analysis tab. On the Code Analysis tab, click the Install button. The Install button will download a NuGet package that contains the code analysis rules. In general, leave both checkboxes enabled. This ensures the rules run during a full build of the project and also run live in the editor as you type. You can reference the screenshot below.

Download nuget package that contains code analysis

For those of you on Linux/MacOS X or using VSCode, you can instead open the .csproj file directly in a text editor. You can then add the following package reference inside an <ItemGroup> element:

<PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers" Version="2.9.6" />

See the screenshot below.

Turned on Code Analysis
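
If your project file doesn’t already have an <ItemGroup> for package references, the addition might look like this (a sketch using the version shown above):

<ItemGroup>
  <PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers" Version="2.9.6" />
</ItemGroup>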

If you have enabled Warnings as Errors as recommended in the previous section, the Code Analysis rules will be considered compiler errors.

What to do once you’ve turned on Code Analysis

Let’s imagine you have created a new console application using the provided templates. Now imagine you followed my recommendations and enabled warnings as errors and turned on code analysis. Now, if you compile that application for the first time, you immediately see 2 Code Analysis errors, and your project will no longer compile. See the screenshot below.

Compile Application and see 2 Errors

Keep in mind all you’ve done is use a provided template project and turn on some Code Analysis rules. You haven’t added any code to the project. Why, then, do we see two errors in the code? Let’s talk about each error individually. As you see, the codes for the rules are CA1801 and CA1303.

Fixing Code Analysis Error #1 – CA1801 Unused Argument

How can we fix this code analysis suggestion/error, and what does it mean? The first suggestion is CA1801, which tells us that our Main method accepts a string array of arguments but that we aren’t using them in the method body. This is precisely the kind of assistance we are looking for. In this case, we could remove the parameter from the Main method, as sketched below. This makes our intention clearer: our console application doesn’t accept any arguments.
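
A sketch of that fix, based on the default console template:

static void Main()
{
    // No parameter: the template's unused string[] args has been removed.
    Console.WriteLine("Hello World!");
}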

However, if we were implementing an interface, we might be required to keep that parameter even though our implementation isn’t using it. In this case, we can disable the rule on a case-by-case basis using a #pragma directive. We can disable it by adding the following code around the statements where we want the rule disabled.

#pragma warning disable CA1801 // Remove unused parameter
//other code goes here
#pragma warning restore CA1801 // Remove unused parameter

You can also reference the below screenshot.

Add code to disable rule

Any code between the #pragma disable and the restore will not be checked for that given code analysis rule. The developer could also provide comments as to why this choice was made. But it’s now explicit and has an intention behind it. This same approach can be used for general compiler warnings that were turned on as errors in the previous section.

Fixing Code Analysis Error #2 – CA1303 Globalization

Now, let’s talk about the second code analysis error/suggestion, which is CA1303. This rule exists because Microsoft is required to run it internally. Microsoft is a multinational company, and it delivers software to many countries in many languages. The CA1303 code analysis warning indicates that we have a hard-coded string and that we should move it to a resource (.resx) file, where we could map different versions of the string depending upon the requested language and culture. Now, for our given project, it’s good that Code Analysis pointed this out so early. Because we encountered this error so early in our project, we can ask our requirements team if we need to support globalization. However, let’s assume our requirements team decides we don’t need to perform globalization. How can we turn this off once and not have to litter our code with hundreds of #pragma’s? The answer to that question is to use .editorconfig files. For those who have worked in .NET for many years or since before .NET Core, the use of .editorconfig is relatively new, but it is far easier than the older alternatives. The easiest way to get started is to let Visual Studio help you out. If you click the lightbulb icon for the given line, you can choose Suppress or Configure Issues. Within that menu, you can configure the severity of CA1303. When you select that option, regardless of the value, it will offer to create a new .editorconfig and add it to your project. See the screenshot below for reference.

create a new .editorconfig and add it to your project

For those of you on Linux/MacOS X or using VSCode, create a new file named .editorconfig with the following contents:
[*.cs]
# CA1303: Do not pass literals as localized parameters
dotnet_diagnostic.CA1303.severity = none

Once you have an .editorconfig, if you are using Visual Studio, you can view all of the analysis rules and configure them individually. In the Solution Explorer, expand your project. Then expand the Dependencies node and, under that, expand the Analyzers node. You can then find a given rule and right-click it to configure its severity. This will add a new line to your .editorconfig. See the below screenshot.

New line added to .editorconfig

Code Analysis Summary

To summarize, enable Code Analysis. You should then fix any code analysis errors or decide you have a good reason for not fixing the suggestion. If you have a good reason, you can use #pragma directives to disable a rule on a case-by-case basis, or you can use .editorconfig to disable entire rules.
The .editorconfig works on a directory basis. The compiler will walk up the directory tree from the source file until it finds a .editorconfig file. So, you can stack rules for your entire solution while having different rules for unit test projects, for example.
You can also write custom code analysis rules and find other open-source packages with custom code analysis rules. You should research whether any of your third-party dependencies also provide code analysis packages.

3. Establish Code Conventions

Why?: As you start work on a project, you are implicitly making some code style decisions. You are deciding how indentation is formatted, whether the project will use the "var" keyword, the preferred syntax for initializing collections, etc. It would be nice to automate the enforcement of these decisions and have the compiler or a formatting tool help enforce them. In previous years, your only option was to find a third-party tool, either free or commercial. With more recent versions of Visual Studio and VS Code, there are built-in tools that provide some of this capability for free. So, let’s take advantage of those provided features. The benefit of code conventions is enabling consistency across projects and making it easier to start working on an existing project. To that end, I would recommend you begin with well-established code conventions like those provided by Microsoft and only diverge where necessary for your project or organization.

How?: If you agree with the Why section above, let’s talk about How. All you need to do is use Visual Studio’s built-in support for .editorconfig. If you’ve been following my guidance from earlier sections, you should already have an .editorconfig file. That’s it; you are done. As your team makes decisions, apply the corresponding settings to your .editorconfig file, as sketched below. Visual Studio then provides several tools to help you update code to match your agreed-upon conventions.
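
For example, here are a couple of convention entries a team might add to that same .editorconfig (illustrative settings and severities; use whatever your team agrees on):

[*.cs]
# Use four-space indentation
indent_style = space
indent_size = 4
# Suggest "var" when the type is apparent from the right-hand side
csharp_style_var_when_type_is_apparent = true:suggestion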

References:
https://docs.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options?view=vs-2019
https://docs.microsoft.com/en-us/visualstudio/ide/editorconfig-language-conventions?view=vs-2019

4. Consider Enabling Nullable Reference Checking

Why?: One of the largest sources of errors in running applications is unchecked null values, which throw a NullReferenceException. I’m not suggesting you add if (x != null) checks all over your codebase; that can have unintended consequences and can make your code harder to read. Instead, I suggest you utilize a new C# 8 feature: nullable reference types, checked by the compiler using advanced flow analysis at compilation time. This feature can be retrofitted onto an existing project, but the work is considerable and can be very confusing. Since this best practice guidance is for new projects, I would suggest you strongly consider enabling this on new projects from the beginning.

How?: To enable nullable reference checking, simply right-click on the project in Visual Studio and select "Properties…". Then navigate to the Build tab and select "Enable" from the drop-down for the Nullable field. See the screenshot below for reference.

Enable nullable reference checking

For those of you on Linux/MacOS X or using VSCode, you can open your .csproj in a text editor and add the following to the existing <PropertyGroup> element:

<Nullable>enable</Nullable>

See the below screenshot.

Insert the Nullable group tag

Once this setting is enabled, any reference type (e.g., string name, Dictionary<string, string> properties) is now considered non-null by the compiler, and the compiler will enforce this. If you wish to permit a null value, you must annotate the type, just as you have long been able to do with nullable value types. You would need to use (string? name, Dictionary<string, string>? properties). At this point, the compiler will warn you if you attempt to use any member of the object without first checking for null. It performs advanced flow analysis to determine this and supports many of the well-known patterns you have already been using for years to check for null.
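
A small sketch (with a hypothetical method) of what the compiler now does for you:

public static int GetNameLength(string? name)
{
    // name is annotated as nullable, so the compiler demands a null check before dereferencing.
    if (name is null)
    {
        return 0;
    }

    // After the check, flow analysis knows name is non-null, so .Length compiles without warnings.
    return name.Length;
}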

Summary

I hope you found the above guidance around best practices useful. Remember, the above best practices are simply a starting point; they are intended to be steps you take before writing a single line of code, and they are enabled using only built-in features of Visual Studio or VS Code. You should also consider techniques such as unit testing, code coverage, and continuous integration, and potentially using other free or commercial tools such as ReSharper or CodeIt.Right.

The goal of all the above practices is to ensure you start a new project out on the best foot and ensure high code quality from the beginning of the project. By starting early, maintaining high code quality should take little effort. It should also result in code with fewer bugs, which always makes users happy, and in code that is easier to maintain because the intentions in the code are clear.

Cosmos DB is a cloud-native, geo-distributed, highly scalable, schema-less database provided by Microsoft Azure. Cosmos DB achieves cloud-scale horizontal scalability by utilizing a partitioning scheme. Choosing how to partition data stored in Cosmos DB can be a tough decision, and it directly impacts how well Cosmos DB will work as a solution. To understand this better, let’s discuss two ideal usage patterns for Cosmos DB: the first is storing user information, and the second is storing device telemetry in an internet-of-things scenario.

User Information Use Case

How to store user information in Cosmos DB

Cosmos DB is an excellent solution for storing user information. A few examples would be saving a user’s shopping cart or storing a user’s wishlist. For either of these scenarios, we can create a single schema-less document in Cosmos DB. That single document would contain the contents of their shopping cart or all the items on their wishlist. When determining the partition key for the Cosmos DB collection, we would choose the identity of the user.

Now, when integrating Cosmos DB with the website, the user will have first logged in. Once logged in, we have the user identity. Now, anytime we need to display the shopping cart or wishlist, it is a single document read. This document read goes to a well-known partition in Cosmos DB, the fastest and cheapest operation in Cosmos DB. Similarly, updates are straightforward, with a single document write to a well-known partition in Cosmos DB. If users wished to share their wishlist, they would typically do that with a link, which could embed the user identity.
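
Here is a sketch of that point read using the .NET SDK v3 (the container, userId, and Wishlist type are hypothetical), assuming the collection is partitioned on the user identity and the document id is also the user identity:

using Microsoft.Azure.Cosmos;

// Point read: a single document from a well-known partition.
ItemResponse<Wishlist> response = await container.ReadItemAsync<Wishlist>(
    id: userId,
    partitionKey: new PartitionKey(userId));
Wishlist wishlist = response.Resource;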

Advantages of storing user information in Cosmos DB

Now, let’s discuss some of the advantages of using Cosmos DB for this scenario. In the above scenario, we are always performing single document reads or single document writes. Both of those operations have service level agreements (SLAs) provided by the Cosmos DB service. Document reads and writes are guaranteed to finish in 10 milliseconds or less. So, by using Cosmos DB, we have guaranteed performance.

Also, when building a website storing user information, we need to design for high-load events. Cosmos DB achieves its scalability by creating additional physical nodes as requested and then mapping logical partitions onto those physical nodes. In our case, a logical partition is simply the identity of a single user. So, if we want to max out our scaling potential, Cosmos DB could create a single physical node for every individual user of our system. A single RU (request unit) is defined as the read of a 2 KB document. Writing a document typically consumes 2-3 times the RUs (request units) that a document read uses (this depends on the consistency and indexing settings). A single Cosmos DB physical node can support up to 10,000 RUs (request units) per second. Based upon this knowledge, here are some example workloads that Cosmos DB could support should you max out the scaling so that each user ended up with their own physical node.
Maximum Scalability for each user

Hopefully, as you can see, Cosmos DB is highly scalable when you choose the right partition strategy. With Cosmos DB, we also get the ability to easily geo-replicate our entire database. You simply enable the desired Azure regions, and Cosmos DB will automatically deal with replicating the data and keeping it consistent. For our scenario, this provides additional benefits because we can deploy our web service to each continent we serve. Each instance of the web service could then automatically locate the closest Cosmos DB region of our database to ensure high performance. So, we can achieve high performance while also guaranteeing high availability, yet still only manage a single Cosmos DB database.

Finally, when provisioning Cosmos DB, you pay for a given number of RUs (request units per second). The provisioned RUs can be adjusted as needed to either increase scalability or lower cost. So, we can scale Cosmos DB as necessary for our user information scenario to ensure we only pay for the resources that we need at any given time.

Disadvantages of storing user information in Cosmos DB

Cosmos DB provides the ability to query documents using a SQL-like syntax. This capability can either query a single partition or query all the partitions. In general, executing queries does NOT provide a response-time SLA (service level agreement), and queries typically cost many more RUs (request units). Now, the advantage of storing user-level information, especially for a web platform or e-commerce platform, is that the application never needs to perform these types of queries. You would always be showing users their own information using their own user identity. Hopefully, I’ve explained how this use case avoids one of the problem areas with Cosmos DB.

Device Telemetry Use Case

How to store device telemetry in Cosmos DB

Cosmos DB is an excellent solution for storing device telemetry. There might be a manufacturing center with several hardware devices that generate telemetry as they are used. In this case, each piece of generated telemetry could be a separate document in Cosmos DB. Because Cosmos DB is schema-less, each piece of telemetry could have an entirely different schema from any other piece of telemetry, yet they could all be stored in the same document collection in Cosmos DB. For this solution, the ideal partition strategy would be to use the unique identifier of each device. With this storage solution, each device can stream telemetry to Cosmos DB, and then a dashboard could show the telemetry received from each device.

Advantages of storing device telemetry in Cosmos DB

The advantages of using Cosmos DB for this scenario are very similar to those for the user information scenario. Specifically, the guaranteed latency SLA (service level agreement) of 10 milliseconds or less is very useful here to ensure that device telemetry is ingested promptly. Also, provisioning the required request units ahead of time and scaling them elastically up and down is very useful for this scenario. For device telemetry, it is likely that not all telemetry needs to be stored forever. Cosmos DB nicely supports this scenario by allowing each document to have a time to live (TTL) property set on the document. This property indicates how many seconds the document should remain in the system after the last update to the document. This feature works nicely to ensure that the system only retains the last week or the last 30 days of telemetry, as sketched below. Finally, the advantage of schema-less storage is incredibly useful for this scenario as well.
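
A sketch of a telemetry document that expires after 30 days (the type is hypothetical and uses the .NET SDK’s default Newtonsoft.Json serialization); note that the container itself must also have a default time to live enabled for the per-document value to take effect:

using Newtonsoft.Json;

public class TelemetryReading
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("deviceId")]
    public string DeviceId { get; set; }

    // Seconds this document should live after its last write: 30 days here.
    [JsonProperty("ttl")]
    public int? TimeToLive { get; set; } = 30 * 24 * 60 * 60;
}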

Disadvantages of storing device telemetry in Cosmos DB

Like the user information scenario, the primary disadvantage of Cosmos DB for this solution would be querying documents, particularly querying across devices. Instead of querying telemetry directly from Cosmos DB, one could either use the analytical storage feature of Cosmos DB or the change feed. The analytical storage feature replicates the data from Cosmos DB into a separate column-oriented store using Parquet files. Once the data is replicated, it can be directly connected to Azure Synapse and queried using either Apache Spark or a massively parallel SQL engine. Both query compute engines are designed for big-data querying scenarios and have no effect on the RUs (request units) required to manipulate documents in the Cosmos DB transactional store. I’ll discuss the Cosmos change feed in the next section.

Event sourcing and Cosmos change feed

Cosmos DB provides a built-in change history feature known as the change feed. The Cosmos DB change feed tracks changes to the documents in a collection over time. Any client can query the change feed at any moment and from any given historical point in the collection. Each client can then track their position in the change feed to know if new document changes have arrived in the system.

The change feed is incredibly useful for replicating data from Cosmos DB into another storage system in near real-time. Specifically, as mentioned, complex querying across documents is not ideal in Cosmos DB, but with the change feed, you can easily replicate the data to any other third-party data platform.

However, the most compelling use case for the change feed is to pair it with event sourcing. For both of our use cases, instead of storing the current state of the data (e.g., the current contents of the user’s shopping cart or wishlist), save the action that the user performed as a document (e.g., added an item to the wishlist, removed an item from the wishlist). The same would apply to the device telemetry scenario. One could then use the change feed to process the system’s events in near real-time, calculate the desired aggregate (e.g., the user’s current wishlist), and store that in Cosmos DB as an aggregate document or in another data storage system. By storing events and processing them in order using the change feed, you can then enable and trigger other scenarios, such as answering whether an ad campaign resulted in users adding the item to their shopping cart or wishlist, and how quickly that occurred from when they interacted with the ad campaign. Cosmos DB specifically enables this scenario because of its schema-less support: each event can be stored as a document with a different schema, and still all the events can be stored in a single collection. A sketch of this processing loop follows.
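
Here is a minimal sketch (hypothetical containers, event type, and helper method) of consuming those events with the .NET SDK v3 change feed processor:

using System.Collections.Generic;
using System.Threading;
using Microsoft.Azure.Cosmos;

ChangeFeedProcessor processor = eventsContainer
    .GetChangeFeedProcessorBuilder<WishlistEvent>(
        processorName: "wishlistAggregator",
        onChangesDelegate: async (IReadOnlyCollection<WishlistEvent> changes, CancellationToken token) =>
        {
            foreach (WishlistEvent change in changes)
            {
                // Fold each event into the user's aggregate wishlist document (hypothetical helper).
                await ApplyEventToAggregateAsync(change);
            }
        })
    .WithInstanceName("worker-1")
    .WithLeaseContainer(leaseContainer)
    .Build();

await processor.StartAsync();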

Summary

Finally, a summary of the advantages of storing user information and device telemetry in Cosmos DB.

  • Partition Strategy is very clear (user identity or device id)
  • Only uses document reads and document writes (most efficient operations in Cosmos DB)
  • Guaranteed SLAs on latency
  • Easy to predict RU (request unit) usage
  • Support for complex schema-less documents
  • Elastic scalability (provisioned request units per second) can be adjusted as needed. The partitioning strategy will result in maximum scalability of a single physical node for a given user or device
  • Ability to scale Cosmos DB down to save costs or up to deal with high load
  • Automatic geo-replication and use of the nearest region to reduce round-trip latency
  • Ability to auto-trim device telemetry by utilizing the time-to-live (TTL) feature on a document
  • Ability to perform event sourcing and near real-time reactions to data changes using the Change Feed feature of Cosmos DB

Hopefully, this article highlighted some of the ideal usage patterns for Cosmos DB and helped you better understand how to utilize this cloud-scale, elastic, geo-replicated schema-less database.

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for how to anonymize or customize the data that is being sent to Application Insights. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series.

Two things to consider with telemetry:

  1. Adding additional telemetry that is not recorded by Application Insights.
  2. Removing or anonymizing data that is already being sent to Application Insights.

There will be a later post in this series discussing how to add new telemetry; this post focuses on anonymizing or removing data.

Personally Identifiable Information (PII) Already in Application Insights Account

We’ll start with a likely scenario: during an audit or during testing, you discovered that you are logging PII to your Application Insights account. How can you go about fixing that? The short answer is to delete the entire Application Insights resource. That means you’ll lose access to all historical telemetry that was in the account, and your production system will no longer be logging telemetry anywhere unless you create a new account and update your production system with the new telemetry key. However, this does solve your immediate problem of retaining PII. See the Microsoft documentation for details on what is captured by Application Insights and how it’s transmitted and stored.

Application Insights does provide a PURGE endpoint, but purge requests are not timely, the endpoint isn’t well documented, and it will not properly update metrics to account for the data that was purged. In general, if you have a compliance concern, delete the Application Insights account. Remember, Application Insights is designed to be a highly available, high-performance telemetry platform, which means it is designed around being an append-only system. The best solution is simply not to send data to the platform that you shouldn’t.

API Use Case

Think of an API your business may have built. This API allows us to search for customers by e-mail to find their customer id. Once we have the customer id, we can make updates to their record, such as their first name, birthday, etc. By default, Application Insights records the request URL and the response code. By default, it does NOT record any HTTP headers, the request body, or the response body. First, let’s think of how we might design the search endpoint; we have two approaches:

  1. GET /api/customer/search?emailAddress=test@appliedis.com
  2. POST /api/customer/search
    a. In the request body, send JSON:
    { "emailAddress": "test@appliedis.com" }

If we design the API using approach #1, we will be sending PII to Application Insights by default. However, if we design the API using approach #2, by default, no PII would be sent to Application Insights. Always try to keep your URLs free of any PII.

That may not always be possible, so let’s look at another use case. Let’s say we have the primary key of a customer record and we want to view and make edits to that record. What would the endpoints look like?

  1. GET /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6
  2. PUT /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6

Now, depending on your regulatory environment, logging these URLs to Application Insights might present a problem. Notice we are not logging e-mail addresses, phone numbers, or names, but we are still logging behavior about an individual: when the site was accessed and when their profile was updated. To avoid this, we would like to anonymize the URL data that is being sent to Application Insights.

Anonymize Data Sent to Application Insights

This section assumes you are using ASP.NET Core and have already configured Application Insights; see my previous blog post for details. Also, if you need to troubleshoot your configuration or need to verify it’s working as expected, please see my other blog post for details.

The Application Insights NuGet package provides an interface for exactly this purpose, called ITelemetryProcessor. You simply need to implement it, including its Process method. A telemetry processor implementation acts much like ASP.NET Core middleware, in that there is a chain of telemetry processors. You must provide a constructor in your implementation that accepts an ITelemetryProcessor, which is the next processor in the chain. In your Process method, you are then responsible for calling the next processor in the chain. The last processor in the chain is one provided by the NuGet package; it implements the same interface and sends the telemetry over the wire to the actual Application Insights service in Azure. The Process method, which you are required to implement, receives a single argument of type ITelemetry. You can cast that to one of its subclasses, e.g., DependencyTelemetry, RequestTelemetry, etc. In the Process method, you can then mutate the telemetry in whatever way you need to, e.g., to anonymize data. You are then responsible for calling the Process method on the next telemetry processor in the chain, i.e., the one that was provided to the constructor of your class. If you want a given telemetry item to never be sent to Application Insights, simply omit the call to the Process method of the next telemetry processor in the chain.

Now we will look at the source code for a processor that does what we are proposing. It finds anything that resembles a customer id in the URL of a RequestTelemetry item and replaces it with the word "customerid".
RequestTelemetry
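
Since the original code appears only as a screenshot, here is a sketch of what such a processor might look like (the class name and regular expression are illustrative):

using System;
using System.Text.RegularExpressions;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class CustomerIdAnonymizingTelemetryProcessor : ITelemetryProcessor
{
    // Matches GUID-style customer ids appearing in the request path.
    private static readonly Regex CustomerIdPattern = new Regex(
        "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    private readonly ITelemetryProcessor _next;

    // The next processor in the chain is supplied by the Application Insights pipeline.
    public CustomerIdAnonymizingTelemetryProcessor(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request)
        {
            // Replace anything that looks like a customer id with a fixed token.
            request.Name = CustomerIdPattern.Replace(request.Name ?? string.Empty, "customerid");
            if (request.Url != null)
            {
                request.Url = new Uri(CustomerIdPattern.Replace(request.Url.ToString(), "customerid"));
            }
        }

        // Always call the next processor so the (modified) telemetry is still sent.
        _next.Process(item);
    }
}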

As seen above, in the constructor we receive the next telemetry processor in the chain. In the Process method, we check to see if we have a RequestTelemetry, ignoring all other telemetry types like TraceTelemetry or DependencyTelemetry. The RequestTelemetry has .Name and .Url properties, both of which might contain the URL details that include our PII. We use a regular expression to see if either contains a customer id; if so, we replace it with the word "customerid". And then we always ensure that we call the next telemetry processor in the chain so that the modified telemetry is sent to Application Insights.
Remember, just having this ITelemetryProcessor coded won’t do anything. We still need to register it with the Application Insights library. In Startup.cs, you will need to add a line to the ConfigureServices method; see below.

Add a Line to the ConfigureServices Method
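
A sketch of that registration, assuming the processor class from the sketch above:

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Places our processor into the telemetry processor chain.
    services.AddApplicationInsightsTelemetryProcessor<CustomerIdAnonymizingTelemetryProcessor>();
}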

Now that you’ve registered the processor, it’s just a matter of deploying the code. If you wanted to debug this locally while still sending telemetry live to Application Insights in Azure, please see my blog post with details on how to do that.

Summary

You learned about the ITelemetryProcessor extension point in the Application Insights NuGet package and how to use that extension point to prevent PII from being logged to your Application Insights account. We’ve also discussed how to design your API endpoints so that, by default, you hopefully don’t need to customize your telemetry configuration at all. Lastly, I have shown you how to handle PII that may have been logged accidentally to your production Application Insights account (by deleting the resource). You may also want to take advantage of a relatively new feature of Application Insights that lets you set the retention period of logs. Previously it was fixed at 90 days, but now you can configure it to be as low as 30 days. This may be useful for handling right-to-be-forgotten requests if your logging system is storing PII and you need to ensure it is removed within 30 days of being requested. Next in this blog series, we will discuss logging additional telemetry to create an overall better picture of your system’s health and performance.

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for a quick way to troubleshoot telemetry configuration. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series. How do you go about testing your telemetry configuration? Typically, developers would adjust the application and then deploy it to Azure, ideally to a development environment in Azure. However, making a change to an application, building it, publishing it to Azure, testing the given endpoints, and waiting for telemetry to appear in Application Insights can require upwards of 30 minutes per change, assuming that you know what you are doing and make no mistakes. Is there a way to work locally?

Troubleshooting Locally

Let me start by discussing the end goal. We want to simulate our production environment while running locally on our dev machine. Azure typically provides local emulators for services like Storage and Cosmos DB, but it does not provide an Application Insights emulator. When running locally, we want to simulate production, but we will need to send data to an actual Azure Application Insights account. In order to simulate running in production, we should publish our .NET application in Release mode and run it outside of Visual Studio. One reason for starting the application outside Visual Studio is that our production environment will not have Visual Studio installed. Another reason is that Visual Studio includes a Diagnostics panel that captures the Application Insights telemetry and prevents it from being sent to the Azure Application Insights account. I’d like to emphasize that the Diagnostics panel built into Visual Studio is not an emulator and shouldn’t be used for that purpose.

First, we must publish the application in Release mode. You can do that using the dotnet command-line as shown below.

Publish in Release Mode
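
The command in the screenshot is essentially the standard Release publish, run from the project directory:

dotnet publish -c Release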

This will publish to a directory similar to the one below.

Publish to Directory

Once in the directory containing the build artifacts, we should find both appsettings.json and the .dll for our main application, CustomerApi.dll in my case. From the command-line, we can then run Kestrel directly using the following command.

Run Kestrel Directly
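
For the CustomerApi example, that command is simply:

dotnet CustomerApi.dll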

If using the defaults, your application will now be running and available in a browser at either http://localhost:5000/ or https://localhost:5001/. We are likely still missing one step, which is configuring the telemetry key for Application Insights. In the bin\Release\netcoreapp3.0\ folder, locate the appsettings.json. Open the file and put the telemetry key in the file.
configuring the telemetry key

If you go back to the command-line you can press Ctrl+C to exit the running web application and then re-run the dotnet CustomerApi.dll command to restart the application. We now have an application running locally that is sending telemetry to Application Insights in Azure.

View Live Telemetry from Local Machine

In the Azure portal, open the Application Insights resource and then locate the “Live Metrics Stream” blade.
Live Metrics Stream

The Live Metrics panel should open and connect as long as the application is running locally using “dotnet CustomerApi.dll”. Once open, scroll to the bottom of the pane.

Bottom of Pane

At the bottom, you will see a list of connected servers. In my example below, you see two servers. The one highlighted in red is my local developer machine. The other server is the Azure Application Service that I have running in my development environment in Azure.

Developer and Azure Application Server

To quickly recap: we have our application running locally, outside Visual Studio, from the command-line, and in Azure Application Insights we can see that our local machine is connected to Live Metrics. In order to actually see telemetry flow into this panel, you will likely want to make one other change. In the upper-right, click on the filter icon to adjust the live metrics filters.

Telemetry Sample

You will then be prompted with the following dialog. If you trust the servers, you can safely authorize them.

Authorize Connected Servers

You will then see a dialog with the current filters. Notice that the default configuration will only show failed requests and dependency calls. Since we are troubleshooting, it’s likely you will want to see all requests. Feel free to click the "x" to remove both filters and then click the "Save" button.

Current Filter Dialogue

Once you have completed this step, you can go back to your web browser on your local machine (either http://localhost:5000/ or https://localhost:5001/) and make a request to your API. I tried a URL that I know returns a 404 response. You can see the live telemetry that showed up for me:

Sample Telemetry

Then, click on that row for more details about the given telemetry item. This telemetry is also being logged to Application Insights, and you will be able to see it on all the usual dashboards and search for it using a Log Analytics query; just be aware there is still the typical 2 to 5-minute delay between when it is sent and when it will appear in queries and dashboards.

Summary

You have now learned how to troubleshoot Azure Application Insights quickly and without needing to deploy your application to Azure. To summarize, you run “dotnet publish” in “Release” mode locally and then run the application from the command-line outside Visual Studio. This is done for a few reasons:

  • When publishing in release mode, you do not need to worry about appsettings.development.json
  • By running outside Visual Studio, you do not need to worry about launchSettings.json setting any special environment variables that don’t match your production environment (e.g. ASPNETCORE_ENVIRONMENT)
  • When running outside Visual Studio, you do not need to worry about the diagnostics panel deciding to capture your telemetry and preventing it from being sent to Azure.

Once your application is running locally and has the Application Insights telemetry key configured properly, you will find the telemetry in the "Live Metrics" view, so you can avoid the typical 2 to 5-minute delay between sending telemetry and seeing it elsewhere in Application Insights.

If you are concerned that this setup will not allow for the use of the Visual Studio Editor, think again! Once you have the application running outside Visual Studio, simply use the “Attach To Process…” menu item in Visual Studio. This gives you the best of both worlds:

Visual Studio Debugger

Hopefully, this post helped you understand how to more quickly troubleshoot your Application Insights telemetry configuration. That will come in handy in the next post in this series, where we talk about customizing telemetry to keep PII (Personally Identifiable Information) out of your Application Insights logs.

When building a web API or web application, it is critically important to know that the application is functioning as intended, whether that be from a performance perspective or simply knowing that external clients are using the application correctly. Historically, for an on-premises solution, that involved installing agent monitoring software and configuring a logging solution with associated storage management. With Azure, that now becomes a turn-key solution using Application Insights. Application Insights can be used whether your actual application is deployed on-premises or in the cloud. In this post, I’d like to talk about configuring Application Insights for an ASP.NET Core application, and I’d also like to talk about structured logging.

Enable Application Insights for ASP.NET Core

The way to enable Application Insights for your ASP.NET Core application is to install the NuGet package into your .csproj file, as shown below.

Enable Application Insights for ASP.NET Core
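
The package reference in question looks something like the following (the package is Microsoft.ApplicationInsights.AspNetCore; 2.7.1 is the minimum version this article assumes):

<ItemGroup>
  <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.7.1" />
</ItemGroup>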

The rest of this article assumes you are using version 2.7.1 or later of the NuGet package. There have been several changes to the library in the last six months.
Please add the following code to your Startup.cs:

Add Code to Your Startup.cs
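
The key line is a single registration in ConfigureServices (a sketch; your method will likely contain other registrations as well):

public void ConfigureServices(IServiceCollection services)
{
    // Registers request tracking, dependency tracking, Live Metrics, and the rest of the telemetry pipeline.
    services.AddApplicationInsightsTelemetry();
}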

Allocate your Application Insights resource in Azure, whichever way you prefer. This could be Azure Portal, Azure CLI, etc. See Azure Docs for more details.

In your appsettings.json, add the following:

appsettings.json
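
That configuration is along these lines (replace the placeholder with the instrumentation key from your Application Insights resource):

{
  "ApplicationInsights": {
    "InstrumentationKey": "<your-instrumentation-key>"
  }
}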

By now, you’ve enabled Application Insights for your ASP.NET Core application. You’ll now get the following features:

  • Request logging
  • Automatic dependency logging for SQL requests and HTTP requests
  • A 90-day retention period
  • Live Metrics, which lets you view and filter the above telemetry while also viewing live CPU and memory usage statistics. For example, see the below screenshots.

Live Metrics Stream

Telemetry Types and Waterfall View

One of the interesting features that Application Insights provides compared to other logging systems is that it has different kinds of telemetry. This includes RequestTelemetry, DependencyTelemetry, ExceptionTelemetry, and TraceTelemetry. Application Insights also provides the ability to have a parent operation that other telemetry operations belong to, and you can view a waterfall view of a given request. For an example, see the screenshot below:

End to end transaction

Structured Logging

Any of the telemetry types provide the ability to add arbitrary key-value pairs. Those values will then be logged as key-value pairs to Application Insights. Using the Log Analytics feature of Application Insights, one can then query on those custom key-value pairs. Effectively, you are getting a schema-less ability to attach custom properties to any telemetry in real-time. This is commonly referred to as structured logging in other frameworks. The idea is that you are not creating one long message string and then trying to parse values back out of it; instead, you get custom key-value pairs and can simply query for a given key having a given value. The screenshot below provides an example of a Log Analytics query on a custom property:

Customer API App Insights

Log Your Own Custom Messages and Telemetry

How do you go about logging custom telemetry to Application Insights from within your ASP.NET Core application? The Application Insights NuGet package automatically registers the TelemetryClient class provided by the library into the dependency injection container. You could add that as a constructor argument to your controller, for instance, and then directly call methods on the TelemetryClient. However, at that point, you are coupling more parts of your application to Application Insights, and it’s not necessary that you do so. The latest versions of the Application Insights NuGet package for ASP.NET Core register an ILogger provider with ASP.NET Core. So, you could update your controller as follows:

Log Custom Messages
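
A sketch of what that controller might look like (the controller, route, and message text are illustrative):

[ApiController]
[Route("api/[controller]")]
public class CustomerController : ControllerBase
{
    private readonly ILogger<CustomerController> _logger;

    public CustomerController(ILogger<CustomerController> logger)
    {
        _logger = logger;
    }

    [HttpGet("{id}")]
    public ActionResult<string> Get(string id)
    {
        // Structured logging: "id" is captured as a custom key-value pair on the telemetry.
        _logger.LogInformation("Loading data for customer {id}", id);
        return Ok(id);
    }
}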

In the above example, we have logged a message and a custom key-value pair. The key will be "id" and the value will be the value of the argument passed into the Get function. Notice that we have done this with a dependency only on ILogger, which is a generic abstraction provided by Microsoft. ILogger can log to multiple outputs (Console, Application Insights, etc.), and you can find many implementations of ILogger. ILogger natively supports structured logging and will pass the information down to the actual logging implementation, which in this example is Application Insights.
Currently, by default, Application Insights will only log warning messages from ILogger, so my example above would not work. To disable the built-in filter, you would need to add the following to ConfigureServices in Startup.cs.

ConfigureServices
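
A sketch of lowering that filter so Information-level messages reach Application Insights (the provider type’s namespace is spelled out for clarity):

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Capture Information-level (and above) ILogger messages for all categories.
    services.AddLogging(builder =>
        builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>(
            string.Empty, LogLevel.Information));
}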

Summary

With Application Insights, we can provision application monitoring within minutes in Azure. You’ll receive 5 GB of data ingestion free per month and free data retention for 90 days. It is trivial to instrument your application. Some of the benefits you’ll receive are:

  • Automatic logging of requests/responses
  • Ability to drill into recent failures/exceptions in Azure portal
  • Waterfall mapping of a request
  • Automatic dependency logging of out-bound SQL and HTTP requests
  • Arbitrarily query your data using Log Analytics
  • Ability to drill into recent performance metrics in Azure portal
  • Live metrics view as your application is running in production with filtering.
  • Custom graphs and charts with Notebooks
  • Ability to create an Azure Portal Dashboard
  • Application map that will show the topology of your application with any external resources it uses.

Application Insights is a very powerful tool to ensure your application is functioning as intended, and it is very easy to get started. You spend your time instrumenting your application and checking application health, not time provisioning log storage solutions and picking log query tools.

Look for future blog posts covering additional topics like keeping Personally Identifiable Information (PII) out of your logs and troubleshooting your Application Insights configuration.

Are you writing a lot of JavaScript as part of your web application? Spending a lot of time debugging that JavaScript? I want to discuss a debugging technique today using that old standby, console.log, and overcoming one of its deficiencies.

Using console.log can be useful for printing out application state without interrupting your workflow. Contrast that with using a JavaScript breakpoint where you are forced to break your workflow to check the application state and then resume the execution of your application.

So, now that I’ve refreshed your memory on the upside of console.log over a breakpoint, let’s talk about the downside compared to breakpoints. To add a console.log statement, you have to edit the source code of your application. And editing the source code of your application is an even bigger break in your debugging workflow. You have to go back to the editor, add the log statement, refresh the page, and then go back to where you were. And don’t forget to remove that log statement before you commit your source code.

Well what if you could avoid those limitations? Read More…

AIS developed a prototype web application that leverages open standards for real-time data sharing and geospatial processing. It’s highly suggested you read our first two blog posts on this application, part one and part two.

In this post, we are going to discuss three areas of improvement for the application. We wanted to improve collaboration, improve management of events by adding a search capability, and improve the edit capabilities. Read More…

Are you working on a REST API and using the new Web API to implement it? You’ve written an ApiController subclass or two? Let’s say you’ve created a new subclass of ApiController called OrderController. Web API provides your OrderController with out-of-the-box support for the following URLs:

HTTP Verb URL Description
GET /api/order Returns all orders
GET /api/order/3 Returns details for order #3
POST /api/order Creates a new order
PUT /api/order/3 Updates order #3
DELETE /api/order/3 Deletes order #3

The above is considered verb-based routing. The URLs above only contain the controller name and an optional id. So the Web API uses the HTTP verb of the request to determine the action method to execute in your ApiController subclass.
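
For illustration, here is a sketch of such a controller (the Order type and in-memory storage are hypothetical); in verb-based routing, the method names map directly to HTTP verbs:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public class OrderController : ApiController
{
    private static readonly List<Order> Orders = new List<Order>();

    // GET /api/order
    public IEnumerable<Order> Get()
    {
        return Orders;
    }

    // GET /api/order/3
    public Order Get(int id)
    {
        return Orders.Single(o => o.Id == id);
    }

    // POST /api/order
    public void Post(Order order)
    {
        Orders.Add(order);
    }

    // PUT /api/order/3
    public void Put(int id, Order order)
    {
        Orders.RemoveAll(o => o.Id == id);
        order.Id = id;
        Orders.Add(order);
    }

    // DELETE /api/order/3
    public void Delete(int id)
    {
        Orders.RemoveAll(o => o.Id == id);
    }
}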

Your Goal

Now what if you want to add some custom actions to your ApiController subclass? For example:

HTTP Verb URL Description
GET /api/order/3/vendors Returns all vendors involved with providing items to complete order #3
PUT /api/order/3/expedite Expedites order #3, but can only be executed by managers in customer service dept.
PUT /api/order/3/reject Rejects order #3, but can only be executed by managers in customer service dept.

It turns out that adding those custom actions is hard, very hard. But keep reading. There is an easy way. Read More…