How to Use NLog for Structured Logging

We’re grateful for this guest post from Tim Wilde! You can find the source code for the examples he uses in his GitHub repo.

Strings are where data go to die

There you go; I said it.

How often have you found yourself contemplating some hare-brained regex scheme in order to extract an inkling of value from a string, wishing the data had just arrived in a well-structured package without all the textual fluff?

So why do we insist on writing prose in our logs? Take “Exception while processing order 1234 for customer abc123” for example. There are at least four important pieces of information drowning in that one sentence alone:

  1. An exception was raised!
  2. During order processing
  3. Order number 1234
  4. Customer abc123

Being an exception log message, it’s more than likely followed by a stack trace, too. And stack traces certainly don’t conform to carefully crafted log layout patterns.

I’ve been thinking about this recently. We’re currently building new systems with ambitious plans for a microservices architecture, and we’ve chosen the Elastic stack for log aggregation. First, we need to get the logs from all of our services in there, so we’ve started standardising our logging layout patterns so that they all conform to our Logstash grok filters.

Hold on, though. So our log layout patterns are now essentially just a data transit format?

Pretty much: we have structured data in the app which we squish into strings for logging — losing detail in the process. This is then converted back into structured data by Logstash. Lossy and redundant, right?
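
To make the squishing concrete, here’s a minimal sketch of the “prose” style of logging. The class, the order fields, and the exception are hypothetical; the point is that the call site already holds the data as separate values before flattening them into a single string:

using System;
using NLog;

// Hypothetical order-processing code illustrating the "prose" style of logging:
// the order number and customer id get flattened into one string, which the log
// pipeline then has to regex back out again.
public static class OrderProcessor
{
   private static readonly ILogger Logger = LogManager.GetCurrentClassLogger();

   public static void Process(int orderId, string customerId)
   {
      try
      {
         // ... order processing that might throw ...
         throw new InvalidOperationException("Payment declined");
      }
      catch (Exception ex)
      {
         // All four facts (exception, phase, order, customer) end up buried in free text.
         Logger.Error(ex, $"Exception while processing order {orderId} for customer {customerId}");
      }
   }
}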

What are structured logs?

Log entries should be easily machine-readable. So instead of string interpolation, use JSON serialization. The example earlier might become:

{
   "timestamp": "2018-01-13T11:45:12.483Z",
   "level": "ERROR",
   "message": "Order processing exception",
   "data": {
      "order": 1234,
      "customer": "abc123"
   },
   "stacktrace": "... <stack trace here>..."
}

Notice how the message does not contain the data? Now we can very easily count “Order processing exception” entries over time. Much more valuable!

A practical NLog config example

Let’s put together a quick example consisting of an Elastic stack and a little .NET console app which logs to it via NLog. This will demo how to get the log messages into Elasticsearch, via Logstash, so we can view them using Kibana’s Discover function.

(TL;DR: There’s a link to the code at the end of the post.)

The Elastic stack

We’ll spin up Elasticsearch, Logstash, and Kibana in Docker to make it quick and easy to provision and clean up when we’re finished. For this we’ll need two files: docker-compose.yml which describes the stack, and logstash.conf which configures how we want our log messages to be ingested.

docker-compose.yml:

version: '3'

services:
  elasticsearch:  
    image: elasticsearch
    command: elasticsearch -Enode.name=derp -Ecluster.name=logging
    ports:
      - "9200:9200"

  logstash:  
    image: logstash
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    volumes:
      - ./logstash:/etc/logstash/conf.d
    ports:
      - "5000:5000"

  kibana:  
    image: kibana
    ports:
      - "5601:5601"

While it’s far from a production-ready config, this does give us a usable set of services. The important part is that we map a local logstash folder containing our configuration file into the Logstash container, which saves us building a custom image, copying in the config file, and so on.

logstash.conf:

input {
   http {
      port => "5000"
   }
}

filter {
   json {
      source => "message"
      remove_field => [ "message" ]
   }
   date {
      match => [  "timestamp", "YYYY-MM-dd'T'HH:mm:ss.SSS'Z'" ]
   }
}

output {
   elasticsearch {
      hosts => [ "elasticsearch:9200" ]
      index => "logs-%{+YYYY.MM.dd}"
   }
}

Compared to some Logstash configs, with their mind-bending grok filters, this is a breath of fresh air. Nice and simple.

We’ll be receiving log entries via HTTP POSTs to port 5000.

When they arrive, we expect them to contain the JSON-encoded log entry in a field called message, which is deserialized so that all of its properties are recorded in Elasticsearch. There’s a custom date format pattern so that the timestamp is parsed correctly as well.

Finally, we want to send the result to our Elasticsearch server on port 9200 and we specify which index to use.

With those in place, we can spin up the stack:

docker-compose up --abort-on-container-exit

(I use --abort-on-container-exit while putting a Docker stack together so that problems cause it to fail fast and the reason is easier to spot.)

We should be able to hit http://localhost:5601 now and Kibana will ask us to set it up. We’re not quite ready to do that yet as we don’t have anything logged, so let’s get the console app done next.
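
If you’d like to confirm that Logstash is actually accepting entries before writing any NLog code, you can hand-post a throwaway entry to the HTTP input first. This is purely an optional sanity check and not part of the example repo; the field names are only illustrative (the lowercase timestamp matches the date filter, and text/plain keeps the body in Logstash’s message field for the json filter to parse):

using System;
using System.Net.Http;
using System.Text;

// Throwaway smoke test (hypothetical; not part of the example repo): POST one
// hand-rolled entry to the Logstash HTTP input so we know the pipeline works.
internal static class LogstashSmokeTest
{
   private static void Main()
   {
      const string entry =
         "{\"timestamp\":\"2018-01-13T11:45:12.483Z\"," +
         "\"Level\":\"INFO\"," +
         "\"Message\":\"Logstash smoke test\"," +
         "\"WordOfTheDay\":\"Jibble\"}";

      using (var client = new HttpClient())
      {
         // text/plain keeps the raw body in Logstash's "message" field,
         // which is what the json filter in logstash.conf expects to parse.
         var content = new StringContent(entry, Encoding.UTF8, "text/plain");
         var response = client.PostAsync("http://localhost:5000", content).Result;
         Console.WriteLine($"Logstash responded with HTTP {(int)response.StatusCode}");
      }
   }
}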

Console App

This is as simple as possible: just a .NET Core 2 console app with a couple of NuGet packages, NLog and NLog.StructuredLogging.Json.

The NLog.StructuredLogging.Json package is by the developers at Just Eat. It provides a really handy layout which will serialize an NLog event to JSON — perfect for our needs!

We want NLog to post the log entries to our Logstash instance, so we’ll need to edit the nlog.config file as follows:

<?xml version="1.0" encoding="utf-8"?>

<nlog >"http://www.nlog-project.org/schemas/NLog.xsd"
      xsi:schemaLocation="NLog NLog.xsd"
      >"http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true">

    <extensions>
        <add assembly="NLog.StructuredLogging.Json" />
    </extensions>

    <targets>
        <target name="console"
                xsi:type="Console"
                layout="${structuredlogging.json}" />

        <target name="logstash"
                xsi:type="Network"
                layout="${structuredlogging.json}"
                address="http://127.0.0.1:5000" />
    </targets>

    <rules>
        <logger name="*" writeTo="console" />
        <logger name="*" writeTo="logstash" />
    </rules>
</nlog>

I’ve included a console target so we can inspect the log entries that are being sent.

Even with a centralised log aggregation system, it’s still wise to log locally too: the problem you’re trying to report might be that the network is down, and you don’t want to lose log entries!

The logstash target is a simple network target which will POST the log entries to the address specified. As our docker-compose.yml binds the logstash ports to the host (our local machine), we can just use the loopback address here.

The ${structuredlogging.json} layout will turn our log messages into JSON, something like this (formatted for readability here):

{
   "TimeStamp": "2018-01-13T00:29:50.087Z",
   "Level": "Info",
   "LoggerName": "LoggingExample.Program",
   "Message": "Extended Greetings!",
   "CallSite": "LoggingExample.Program.Main",
   "WordOfTheDay": "Jibble"
}

The last thing to do with the app is actually write some log entries!

// Requires: using NLog; using NLog.Config; using NLog.StructuredLogging.Json;
private static void Main(string[] args)
{
   // Load the configuration from nlog.config explicitly
   LogManager.Configuration =
      new XmlLoggingConfiguration("nlog.config");

   ILogger logger = LogManager.GetLogger(typeof(Program).FullName);

   // A plain NLog call - the JSON layout still applies to it
   logger.Info("Hello World!");

   // A structured call - the anonymous object's properties become JSON fields
   logger.ExtendedInfo("Extended Greetings!",
                       new {WordOfTheDay = "Jibble"});

   // Flush targets so nothing is lost when the app exits
   logger.Factory.Flush();
}

This quickly configures NLog using the nlog.config file and grabs an ILogger instance. All standard NLog stuff.

Next we use the normal Info call so we can make sure any existing log messages will benefit from JSON encoding (which they do). This is critically important, as it means it’s not necessary to find and replace every single logging call throughout the codebase straight away. Just don’t forget about them, OK?

The NLog.StructuredLogging.Json NuGet package provides a few handy extension methods, like ExtendedInfo above, which take an object or dictionary to be used as key-value pairs that are serialized to JSON as part of the log entry. These are the calls we want to be using going forward so we can send data instead of squishing it into a string.
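
For instance, the order-processing failure from the start of the post could keep its data intact. Here’s a sketch of the earlier hypothetical OrderProcessor rewritten to use the package’s ExtendedException extension; the exact set of overloads is an assumption, so check the package docs:

using System;
using NLog;
using NLog.StructuredLogging.Json;

// Structured version of the intro example (hypothetical order fields). The anonymous
// object's properties are emitted as separate JSON fields instead of being
// interpolated into the message text.
public static class OrderProcessor
{
   private static readonly ILogger Logger = LogManager.GetCurrentClassLogger();

   public static void Process(int orderId, string customerId)
   {
      try
      {
         // ... order processing that might throw ...
         throw new InvalidOperationException("Payment declined");
      }
      catch (Exception ex)
      {
         // Assumed overload: (exception, message, properties) - see the package docs.
         Logger.ExtendedException(ex, "Order processing exception",
                                  new { Order = orderId, Customer = customerId });
      }
   }
}

The message stays constant, so counting “Order processing exception” events in Kibana is trivial, while Order and Customer arrive as their own searchable fields.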

Now we’re ready to run the console app and then go and see what Logstash received!

cd LoggingExample
dotnet run

We should see a few lines of JSON in the console and, hopefully, we now have log entries we can play with. Let’s go back to Kibana (http://localhost:5601) and check.

Configure Kibana

Back in Kibana we’ll be asked to configure the index pattern again. It will suggest logstash-* but we’re using an index with a logs- prefix, so change this to logs-*. This will then inspect the index and grab all the field names from our log entries, which is why we had to get some log entries into the index before we could progress.

In the Time Filter field name field there are two timestamp options: @timestamp and TimeStamp. The former is when the log entry was recorded in Elasticsearch and the latter is the timestamp generated by NLog. Select @timestamp and click the Create button and we’re done.

It may seem odd to have two timestamps, and we could have Logstash remove one of them if we really wanted to, but we’ll do something interesting with these a little later on.

Playing with Data

Next we can see what Logstash has received from our app. Click the Discover link (with a compass icon), top-left. We should be able to see the log entries from our console app.

More importantly, all of their properties are separate fields that can be used for filtering and searching, and that is why we want structured logs 🙂

Kibana will let us create calculated fields too. As we have two timestamp fields, we could calculate a timestamp delta which contains the delay between the moment the log entry is created in our app and when it arrives in Elasticsearch. This could be a useful statistic for monitoring network latency between the two, for example, providing us with another health check datum for free.

To quickly set that up:

  1. Click the Management cog icon on the left;
  2. Click Index Patterns on the next page;
  3. Click the scripted field tab;
  4. Click the Add Scripted Field button;
  5. Enter TimestampDelta in the Name field; and
  6. Enter doc["@timestamp"].value - doc["TimeStamp"].value in the Script field

Leave the rest of the fields as they are and click the Create button at the bottom of the page, then return to the Discover page (Compass icon, top-left) and expand one of the log entries. You should see a new field called TimestampDelta with the difference between the two timestamps of the log entry, in milliseconds.

I’m not going to describe how to use Kibana here (I’m still learning myself) but now it is a lot easier to get log events from an app into the Elastic stack and then to filter, search, summarise, and visualise the information they contain.

As the data is no longer hidden inside arbitrary strings, it’s possible to quickly ask questions of the data and uncover useful information and insights which would not have been possible before. Well, not without some gnarly regex patterns…

I will be using this to create dashboards to monitor our service statuses, and that’s just to begin with. I’m sure there is a lot more we can think of once we start exploring the data in the time we save not having to parse log messages.

And finally…

If you’d like to grab the code for this post, I’ve shared it on GitHub.

The docs for the Just Eat Nuget Package have some useful “best practices” for structured logs.

And thank you, Charity, for the tip and all the time and frustration this will save me in the future 🙂
