Why JWT Tokens?

JSON Web Tokens (JWT) are an open, URL-safe, industry-standard method for representing claims securely between two parties. JWTs are used for authorization and for exchanging information.

JWT Token Structure

The most commonly used form of JWT consists of three parts separated by dots (.).

  1. Header
  2. Payload
  3. Signature

Example:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IlVzZXJuYW1lIiwiaWF0IjoxNTE2MjM5MDIyfQ.Co6UrECXBbveQJiF3NkuMhnO_R34qZXhfFvQbePy6y4

The first two parts of a JWT token (header & payload) are Base64-URL encoded JSON, and the last part (signature) is a cryptographic signature.

Header: Metadata about the token type and the signing algorithm used to secure the content.

Example:

{
  "alg": "HS256",
  "typ": "JWT"
}

Payload: A set of claims containing statements about an entity (typically the user) plus additional data.

Example:

{
  "sub": "1234567890",
  "name": "Username",
  "iat": 1516239022
}

Signature: The Base64-URL-encoded header and payload are joined with a dot and signed with a secret using the algorithm specified in the header. The signature is used to verify that the token has not been tampered with.

Example:

HMACSHA256(
  base64UrlEncode(header) + "." +
  base64UrlEncode(payload),
  secret
)
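
To make the pseudocode above concrete, the following is a minimal C# sketch of HMAC-SHA256 signing (the class and method names are illustrative only; in a real application you would rely on a JWT library rather than hand-rolling this):

Code:

using System;
using System.Security.Cryptography;
using System.Text;

public static class JwtSigner
{
    // Base64-URL encoding: standard Base64 with padding removed and URL-unsafe characters replaced.
    private static string Base64UrlEncode(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    // Builds "<header>.<payload>.<signature>" using HMAC-SHA256, mirroring the pseudocode above.
    public static string SignHs256(string headerJson, string payloadJson, string secret)
    {
        var signingInput =
            Base64UrlEncode(Encoding.UTF8.GetBytes(headerJson)) + "." +
            Base64UrlEncode(Encoding.UTF8.GetBytes(payloadJson));

        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
        var signature = Base64UrlEncode(hmac.ComputeHash(Encoding.UTF8.GetBytes(signingInput)));

        return signingInput + "." + signature;
    }
}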

Steps to Implement JWT Using .NET 6

  1. Create the UserDto class file (used for login)
  2. Create the User class file (used for storing data in the database)
  3. Create an Auth controller and its methods
  4. Run the application

Step 1: Create UserDto.cs File

This file contains the Data Transfer Object (DTO) used by the login and registration screens. Both the UserName and Password properties are of type string.

Code:

namespace JwtWebApiTutorial
{
    public class UserDto
    {
        public string UserName { get; set; } = string.Empty;
        public string Password { get; set; } = string.Empty;
    }
}


Step 2: Create User.cs File

This file contains the user model with Username, PasswordHash, and PasswordSalt properties. The Username property is of type string, while PasswordHash and PasswordSalt are both byte arrays (byte[]).

Code:

namespace JwtWebApiTutorial
{
    public class User
    {
        public string Username { get; set; } = string.Empty;
        public byte[] PasswordHash { get; set; } = Array.Empty<byte>();
        public byte[] PasswordSalt { get; set; } = Array.Empty<byte>();
    }
}


Step 3: Create Controller and its Methods

Create a controller named AuthController and follow the steps listed below:
Create Register() method

Code:

[HttpPost("register")]
public async Task<ActionResult<User>> Register(UserDto request)
{
       CreatePasswordHash(request.Password, out byte[] passwordHash, out byte[] passwordSalt);
       user.Username = request.UserName;
       user.PasswordHash = passwordHash;
       user.PasswordSalt = passwordSalt;

       return Ok(user);
}


* This is an HTTP POST method used to register a user.
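
Note: The Register method references a user object that the snippet does not declare. There is no database in this walkthrough yet, so a single in-memory user held in a field on AuthController is assumed, along these lines:

public static User user = new User();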

Create CreatePasswordHash() method
Code:

private void CreatePasswordHash(string password, out byte[] passwordHash, out byte[] passwordSalt)
{
    using (var hmac = new HMACSHA512())
    {
        passwordSalt = hmac.Key;
        passwordHash = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(password));
    }
}

* This method uses a cryptographic algorithm (HMACSHA512) to create the password salt and hash, which are returned through out parameters and stored on the user object.

Create Login() method
Code:

[HttpPost("login")]
        public async Task<ActionResult<string>> Login(UserDto request)
        {
            if (user.Username != request.UserName)
                return BadRequest("User not found");

            if (!VerifyPasswordHash(request.Password, user.PasswordHash, user.PasswordSalt))
                return BadRequest("Wrong password");

            string token = CreateToken(user);

            return Ok(token);
        }

* This method calls VerifyPasswordHash to compare the computed hash with the stored password hash, using the same password salt.

Create VerifyPasswordHash() method
Code:

private bool VerifyPasswordHash(string password, byte[] passwordHash, byte[] passwordSalt)
{
    using (var hmac = new HMACSHA512(passwordSalt))
    {
        var computedHash = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(password));
        return computedHash.SequenceEqual(passwordHash);
    }
}

* This method returns a boolean indicating whether the computed hash matches the stored password hash.

Create CreateToken() method
Code:

private string CreateToken(User user)
{
    List<Claim> claims = new List<Claim>
    {
        new Claim(ClaimTypes.Name, user.Username),
        new Claim(ClaimTypes.Role, "Admin")
    };

    var key = new SymmetricSecurityKey(System.Text.Encoding.UTF8.GetBytes(
        _configuration.GetSection("AppSettings:Token").Value));

    var cred = new SigningCredentials(key, SecurityAlgorithms.HmacSha512Signature);

    var token = new JwtSecurityToken(
        claims: claims,
        expires: DateTime.UtcNow.AddDays(1),
        signingCredentials: cred);

    var jwt = new JwtSecurityTokenHandler().WriteToken(token);

    return jwt;
}


* This method involves the creation of:

  • A list of claims
  • A symmetric security key, generated from the secret key stored in appsettings.json
  • Signing credentials, formed from the symmetric security key and the security algorithm (HmacSha512Signature)
  • A JWT security token that includes the claims, expiration time, and signing credentials
  • The JWT string, written using the JwtSecurityTokenHandler

Note:

  • Right-click on Claim and add the missing import for it
  • Right-click on SymmetricSecurityKey and install the latest Microsoft.IdentityModel.Tokens package
  • Right-click on JwtSecurityToken and install the latest System.IdentityModel.Tokens.Jwt package
  • Create a secret key in the appsettings.json file (the secret key must be at least 16 characters long)

Code:

"AppSettings": {
    "Token": "This is my secret key"
  },


* Create a constructor for AuthController and inject IConfiguration as a dependency to access the secret key inside the appsettings.json file.

Code:

private readonly IConfiguration _configuration;

public AuthController(IConfiguration configuration)
{
    _configuration = configuration;
}

Step 4: Run the Application

Run the application and open the Swagger URL. You will see two HTTP POST methods: register and login.


1. Expand the register method and register a user. You will get a successful JSON response body containing the username, passwordHash, and passwordSalt.


2. Expand the login method
Case 1: Login is successful

Enter a valid username and password. You will get a valid response body containing the JWT as a string.


Case 2: Login is unsuccessful
Enter an invalid username or password. You will get an error response body containing the error message.


Summary

This blog has covered a basic implementation of JSON Web Tokens; its purpose was to give you a jump start. You can take the code further by implementing more functionality, such as role-based authorization, and by improving the code with authentication services and dependency injection.

The cloud-oriented domain presents numerous styles of building automated solutions for various personal & business use cases. And there is no such thing as a correct or incorrect solution to these problems. It all depends on your requirement and any restraints in building the solution.

The introduction of serverless solutions proved to be a perfect fit for scenarios where you must perform event-driven tasks cost-effectively and flexibly.

Azure Functions

Azure Functions is a lightweight, serverless, compute-on-demand Platform-as-a-Service (PaaS) offering from Microsoft that lets you run and deploy your code directly, without worrying about the underlying infrastructure hosting your application. Functions scale automatically as load increases, and you pay only for the execution time of your functions.

Azure Functions execution mode: In-process vs. Out-of-process

Initially, .NET Azure Functions supported only a tightly coupled mode of execution known as in-process mode, where your function code runs in the same .NET process as the host (the Azure Functions runtime). In this mode, your code must run on the same framework version used by the runtime. The tight coupling brings performance benefits, such as faster cold start times, but has shortcomings, such as the inability to create and use your own Host instance to register your own middleware, logger, and so on.

Starting with .NET 5, Microsoft introduced a new isolated-process (aka out-of-process) mode for running your code in Azure Functions. The main benefit of isolated mode is that it decouples your function code from the Azure Functions runtime, letting you use any supported version of .NET, even if it differs from the runtime version.

Isolated mode removes the limitations of in-process execution, providing the following:

  • Full control over how you configure & run your code inside Azure Functions
  • The ability to utilize features such as custom middleware, logging, etc.
  • Fewer conflicts between your code's assemblies & the assemblies used by the host process

Isolated-process function apps certainly offer numerous advantages over in-process function apps. And per the roadmap Microsoft has shared for Azure Functions, isolated-process functions are indeed the future & will eventually be the only choice from .NET 7 onwards.


This means that if you choose to go with isolated mode, the upgrade process for your codebase will be easier than with in-process mode.

At the time of writing this blog, our codebase is running on the .NET 6 framework. So, for our solution, we decided to implement Azure Functions (version 4.x) based on the isolated-process model.

Isolated-Mode Timer Trigger Azure Functions

To set up an Azure Functions project to run in isolated-process mode, select the Azure Functions project template & choose the “.NET 6.0 Isolated (LTS)” type in the dropdown:

Creating a New Azure Functions Application

The requirement was to implement an automated solution scheduled at certain time intervals, so we decided to work with Timer trigger functions as they let you run your code on a specified schedule.

Timer trigger functions use a CRON expression which describes a schedule on which the function is supposed to execute. A CRON expression contains six space-separated values represented as:

{second} {minute} {hour} {day} {month} {day-of-week}

In the above expression, each field can hold a specific (valid) numeric value, a range of values (2-4), a set of values (2,4,6), an interval value (e.g., */5 means every fifth of that unit), or all values (represented by an asterisk *).

As an example, consider the following CRON expression:

10 */5 * 15-20 * *

This CRON expression translates to “Run the timer trigger function at the 10th second of every 5th minute of every hour, between day 15 and day 20 of every month”.
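
As a rough sketch (the class and function names here are hypothetical, and it assumes the Microsoft.Azure.Functions.Worker.Extensions.Timer package), an isolated-worker timer trigger function using that schedule might look like this:

using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ScheduledJob
{
    private readonly ILogger _logger;

    public ScheduledJob(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ScheduledJob>();
    }

    // Fires at the 10th second of every 5th minute, on days 15-20 of every month.
    [Function("ScheduledJob")]
    public void Run([TimerTrigger("10 */5 * 15-20 * *")] TimerPayload timer)
    {
        _logger.LogInformation("Timer trigger executed at {Now:u}", DateTime.UtcNow);
    }
}

// Simple POCO the timer payload binds to; recent versions of the worker
// timer extension also ship a TimerInfo type that can be used instead.
public class TimerPayload
{
    public bool IsPastDue { get; set; }
}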

After creating a .NET isolated function project, the Solution Explorer will contain the following files:

Create .NET isolated function project

The critical point to notice in an isolated functions project is that it contains a Program.cs file that provides direct access to the Host instance, giving you complete control over code configuration & dependency registration. No separate Startup class is available or required here.

Another difference is the local.settings.json file, where the FUNCTIONS_WORKER_RUNTIME application setting is set to “dotnet-isolated”. The value is set to “dotnet” for an in-process Azure Function.
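
For reference, the local.settings.json generated for an isolated-worker project looks roughly like this (the storage value shown is the local development emulator placeholder):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}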

Note: If you are using your appsettings.json files to store configuration values for various environments, you need to add the FUNCTIONS_WORKER_RUNTIME variable in the parent appsettings file as well, so that it gets inherited & applied to all your appsettings.json files.


Using Middleware with Azure Functions

For our use case, we implemented our own middleware to perform exception handling & log every exception that occurs in our function app. To do that, we created a class called ExceptionHandlingMiddleware. For this class to act as middleware, it needs to implement the IFunctionsWorkerMiddleware interface. At this point, your code should look like this:

public class ExceptionHandlingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        await next(context);
    }
}

Now, let’s add some code to our middleware class so that it can grab those exceptions and log them to the respective logger sources.

using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;
using Microsoft.Extensions.Logging;

public sealed class ExceptionHandlingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        try
        {
            await next(context);
        }
        catch (Exception e)
        {
            var log = context.GetLogger<ExceptionHandlingMiddleware>();
            log.Log(LogLevel.Error, eventId: 101, e,
                $"An exception occurred while calling the timer-triggered function. Exception message: {e.Message}. Original exception type: {e.GetType().Name}.");
        }
    }
}

Our custom middleware is all set to handle any exceptions occurring during the execution of our function app. Now, in order to actually use it, we need to register it in our Program.cs file.

using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main()
    {
        var host = new HostBuilder()
            .ConfigureFunctionsWorkerDefaults(builder =>
            {
                builder.UseMiddleware<ExceptionHandlingMiddleware>();
            })
            .Build();

        host.Run();
    }
}

If an exception occurs in your Function App, it will get caught and logged to your console window, and if you have Application Insights configured, it will be logged in the respective Azure App Insights resource.

Conclusion

The isolated-process mode of Azure Functions certainly provides users with full control over the code, from deciding how to start the app to controlling the configuration of the functions. It can also significantly reduce execution costs compared to other automation solutions such as Azure Runbooks (the initial solution in our use case before introducing Azure Functions), not to mention other benefits such as auto-scaling, the ability to perform code-based regression testing, multiple input/output options (compared to just JSON input for Runbooks) & various upgradeable tiers to host them.

Considering the above-mentioned benefits, I guess it is safe to say that Azure Functions will play a key role in helping people solve complex problems in the cloud automation domain in the future.

I recently enjoyed being involved in an internal Power Platform modernization hackathon. I want to share my experience to provide valuable insights and/or ideas to help others decide if they will participate in or host a hackathon themselves.

What is a Hackathon?

A hackathon is typically a single sprint event where a group of motivated technical folks collaborates intensely to implement and demonstrate the solution to a problem use case chosen by each team. It is a rapid development effort where the solutions are designed and implemented towards the overall goals of the hackathon event. Besides valuable business/account benefits, hackathons are known to be great opportunities for hands-on learning and upgrading technical skillsets.

AIS Internal Hackathon

AIS held an internal Power Platform hackathon in the summer of 2021. One such stirring event helped a few motivated AIS team members to learn and upgrade their Power Platform skills while applying those to solve practical problem scenarios at one of our major clients. The event focused on relevant use cases that can use the many features of Power Platform to solve real-time gaps and/or problems across our client enterprise. There were six teams of 4-6 team members each. Each team had at least one Power Platform developer and one person familiar with the client’s business domain. A set of senior Power Platform SMEs acted as advisors for the event.

The Case and Team

I had the opportunity to propose two of the six use cases selected by teams for implementation. My team focused on a crucial financial reconciliation process, which proved to be a spot-on candidate for Power Platform modernization. The existing production system was a dusty Windows Forms application created at lightning speed by AIS to help our client avoid renewing a costly COTS product. Our hackathon team, Power Rippers, consisted of a Power Platform developer and three .NET developers with no Power Platform experience. Two of the team members had domain experience specific to the client we focused on. We had an excellent experience working intensely on the effort as a mini-project. We leveraged Microsoft Teams for all collaboration, including recorded meet-ups, shared OneNote notebooks, and a OneDrive library app linked to the chat space.

Power Platform and its Wide Range

We learned, utilized, and integrated a wide range of Power Platform components like Model-Driven App, Dataverse, Dataflow, Power BI, paginated reports, data alerts, Power Automate, and OneDrive. Using these technologies, we modernized the existing business processes. In addition, we added a few Power Platform-backed enhancements to demonstrate how the platform can empower business users further.

Component Level Interaction

Gains

We had our share of hiccups in the project, which proved to be a significant part of the learning experience. Our team secured 2nd place, which came with a monetary reward!

From a business standpoint, it did not stop there. We turned our application into a proof of concept and presented it to the application leadership as a possible solution to replace the existing application, with benefits such as the following:

  • An application that is easier to maintain
  • More functionality than the original application
  • Decreased level of effort and costs for continued enhancements
  • The ability for the client to configure many aspects of the application through the model-driven app
  • Moving the application to a platform where the functionality could be maintained, and enhancements could be made, by the client themselves with limited training

From a personal standpoint, it ticked a few checkboxes for my growth, like becoming familiar with PowerApps and Power BI to an intermediate level, versus less than a novice before this. It also allowed me to present my first Lightning Talk, a short presentation to pass on what I learned to others within my company.

The Transformation Saga

This is an excellent transformation story: a third-party product rewrite turned into a Power Platform opportunity that became reality thanks to the hackathon.

The Transformation Story of Power Platform

Why AIS?

This effort is a testament to the technical learning appetite of AIS folks and to how we, as a company, strive to make things better for our clients. It also demonstrated how internal activities like hackathons, level-up sessions, and lightning talks help participants achieve personal goals and work together to share their skills and knowledge.

Kubernetes has become the go-to orchestrator for running containers. At AIS, we are working with clients using Azure Kubernetes Service (AKS), applying our expertise in DevOps. One concern is delivering HTTPS certificates to containerized web applications; let’s use Azure Key Vault to achieve this.

Azure Key Vault is a cloud service that provides hardware security modules (HSMs) for securely storing and delivering secrets, encryption keys, and certificates. The Key Vault FlexVolume driver for Kubernetes offers a means for mounting certificates as files in our containers; this allows us to automate and manage HTTPS certificates for applications without requiring them to access Azure Key Vault directly.

Let’s look at how we can apply this to Kubernetes services that front containers running ASP.NET Core 3.1 applications. We’ll start with a sample application from Microsoft’s .NET Docker repo. You can find the full solution and step-by-step instructions in this aks-keyvault-certs GitHub repo.

We’ll use PFX encoded certificates in our Azure Key Vault for this demo, as they are readily loadable in .NET Core 3.1 for use in Kestrel hosting.

There are a few important details to note:

  1. You can retrieve a certificate from Azure Key Vault using the certificate, key or secret object types. To get the full private key certificate, you need to use the “secret” object type.
  2. If you import a PFX encoded certificate into Azure Key Vault, getting its secret will return the full PFX file; however, since the API return value is a string and a PFX is a binary file format, the result must be base64 decoded.
  3. If you import a certificate using the text encoded PEM format, it is returned as-is and base64 decoding is unnecessary; however, .NET Core does not currently support loading private-key PEM certificates.
  4. Since we’ve already authenticated to Key Vault, the resulting PFX file mounted in our container no longer requires the PFX password to load.

Knowing this, if we import a PFX certificate into an Azure Key Vault, we can configure our Kubernetes Deployment YAML to create a volume containing a file containing the certificate secret contents. For example:

volumes:
  - name: aks-keyvault-aspnetcore-httpscert
    flexVolume:
      driver: "azure/kv"
      secretRef:
        name: kvcreds
      options:
        # update with your key vault name
        keyvaultname: "YOUR KEY VAULT NAME"
        # update with your AAD tenant
        tenantid: "YOUR KEY VAULT TENANT"
        # The name of the object in Key Vault
        keyvaultobjectnames: "aks-https"
        # Use "secret" to get the full cert
        keyvaultobjecttypes: secret
        # This becomes the file name on the mount.
        keyvaultobjectaliases: "https.pfx.base64"

Then, in our pod specification we can mount the volume and set an environment variable for its path:

env:
  # Set an environment var to the cert path for
  # the application to use.
  - name: HTTPS_CERTIFICATE_PATH
    value: "/certs/https.pfx.base64"
volumeMounts:
  # Mount the key vault volume to /certs
  - name: aks-keyvault-aspnetcore-httpscert
    mountPath: /certs
    readOnly: true

Finally, in our ASP.NET Core application, we configure Kestrel to use the mounted certificate file for HTTPS endpoints:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseStartup<Startup>()
                .UseKestrel(options =>
                {
                    // configure Kestrel with our HTTPS certificate
                    options.ConfigureHttpsDefaults(ConfigureHttps);
                });
        });

private static void ConfigureHttps(HttpsConnectionAdapterOptions options)
{
    try
    {
        // When we get the certificate from Key Vault as a secret,
        // it provides the entire PFX file but without the password.
        // Since PFX is a binary format and a secret is a string,
        // it is base64 encoded. So we read in the text file and convert
        // it to the bytes to initialize the X509Certificate2.
        var certPath = Environment.GetEnvironmentVariable("HTTPS_CERTIFICATE_PATH");
        if (!string.IsNullOrEmpty(certPath))
        {
            var certString = System.IO.File.ReadAllText(certPath);
            var certBytes = Convert.FromBase64String(certString);
            var httpsCert = new X509Certificate2(certBytes);

            Console.WriteLine($"HTTPS cert Subject:    {httpsCert.Subject}");
            Console.WriteLine($"HTTPS cert Thumbprint: {httpsCert.Thumbprint}");

            // set the Kestrel HTTPS certificate
            options.ServerCertificate = httpsCert;
        }
    }
    catch (Exception ex)
    {
        Console.Error.WriteLine($"unable to load https cert: {ex}");
        throw;
    }
}

See the full working solution in the aks-keyvault-certs repo.

With this approach, we can automate managing HTTPS certificates with Azure Key Vault and delivering them to Azure Kubernetes Service pods running ASP.NET Core applications.

In this video blog, I’ll walk you through building a continuous integration and continuous delivery (CI/CD) pipeline using the latest tools from Microsoft, including Visual Studio Team Services (VSTS) and Azure. The pipeline is built to support a .NET core application, and the walkthrough includes the following steps:

  1. Configuring Continuous Integration (CI) with VSTS Build services
  2. Adding unit testing and validation to the CI process
  3. Adding Continuous Deployment (CD) with VSTS Release Management & Azure PaaS
  4. Adding automated performance testing to the pipeline
  5. Promotion of the deployment to production once validated
  6. Sending feedback on completion of the process to Slack
At one point I was coding on a hobby project, using Visual Studio Online for project management and source control. Because of the technologies involved, a large number of temporary files were being generated that I didn’t want checked in. Visual Studio’s TFS integration is pretty good at automatically filtering these kinds of files out and placing them in the Excluded Changes list in the Pending Changes window, but in my case the sheer number made it a pain to scan the Excluded Changes list for valid changes that I actually wanted to commit.

In my case, I didn’t want those temporary files to show up at all – not even in the Excluded Changes list. To gain control over which files TFS should ignore completely, I added .tfignore files to my solution. These allow you to specify which files, extensions, and directories to ignore (or un-ignore!) in source control. If you’re familiar with the concept of .gitignore files in Git, you should feel right at home.
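
As a small illustration (the folders and extensions below are typical build artifacts, not necessarily the ones from my project), a .tfignore file placed at the solution root could look like this:

# Ignore build output folders anywhere under this folder (filespecs are recursive by default)
bin
obj

# Ignore per-user settings files by extension
*.user
*.suo

# Ignore the NuGet packages folder at this level only (a leading \ disables recursion)...
\packages
# ...but keep this one file that would otherwise be excluded
!\packages\repositories.config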


Despite the terms Dependency Inversion Principle (DIP), Inversion of Control (IoC), Dependency Injection (DI) and Service Locator existing for many years now, developers are often still confused about their meaning and use.  When discussing software decoupling, many developers view the terms as interchangeable or synonymous, which is hardly the case.  Aside from misunderstandings when discussing software decoupling with colleagues, this misinterpretation of terms can lead to confusion when tasked with designing and implementing a decoupled software solution.  Hence, it’s important to differentiate between what is principle, pattern and technique.
Given the widespread use of the Android operating system running on today’s mobile platforms, Android development has become an excellent choice for enhancing a developer’s skill set. Fortunately for the seasoned .NET developer, learning Android development is not a huge stretch. While there are several avenues for .NET developers looking to break into the world of Android application development, currently the most popular options are made possible by utilizing any of the following technologies:

  • Xamarin platform
  • PhoneGap framework
  • Native Android development via Java

The Xamarin platform provides the ability for .NET developers to harness their C# knowledge, create cross-platform (iOS, Android and Windows) applications, reuse existing code and perform development within Visual Studio. The greatest advantage of utilizing the Xamarin platform is a reduced time to market while supporting multiple platforms. However, due to the additional Xamarin runtime contained within the final application, the footprint tends to be larger — this could be an issue, especially for some Android devices.

The PhoneGap framework is another option for writing Android applications.  The PhoneGap framework is a client-side web application comprised of HTML5 pages using CSS and JavaScript. While it’s possible to utilize Visual Studio to code and test the application, ultimately the code will need to be packaged into a real Android application. This will require an IDE such as Eclipse or JetBrains’s IntelliJ IDEA.  The PhoneGap Build service may also be used to accomplish the application packaging. While the PhoneGap approach will provide multiple platform support, the application type should be given consideration because the PhoneGap framework relies on JavaScript, which may have performance limitations compared with native Java Android applications.

While Xamarin and PhoneGap certainly have their merits for creating Android applications, native Android development via Java provides an opportunity to take advantage of a device’s full feature set with fast execution, all wrapped in a smaller package for more rapid downloads. For a complete discussion of the various mobile platforms’ benefits/drawbacks, please read Eric Svendsen’s excellent article where he provides plenty of depth on the issue. For now, the remainder of this post will provide valuable insight for .NET developers looking to expand their language set by utilizing native Java for Android development.

Welcome to the first article in a series on moving enterprise systems from a mainframe-based platform to something else. That “something else” could be any number of things, but our default assumption (unless I say otherwise) is going to be a transaction processing system based on a platform like Microsoft’s .NET Framework. While I expect that there will eventually be some technical content, the initial focus is going to be on planning, methodology, defining solutions and project management. Moving a large system off of a legacy mainframe platform is very different from a development project that starts with a blank slate. Chances are that you’re going to be shifting paradigms for quite a few people in both your technical and business organizations, and for yourself as well.