Running Our Agent

We have written a lightweight tool called honeytail. Honeytail will tail your existing log files, parse the content, and send it up to Honeycomb.

If you already have structured data in an existing log file, this is the easiest way to get that data into Honeycomb.

The quality of your dataset within Honeycomb depends entirely on the quality of the data going into the log file. To get the most useful insight out of Honeycomb, provide high quality data in your log file: include as much detail about each event as you can, and always add some host-level context, such as the name of the host on which the log lives.

Honeytail is designed to run as a daemon so that it can continuously consume new content as it appears in the log files as well as detect when a log file rotates. It must be configured with your Team Write Key and the name of the Dataset to which you want to write data. You specify one of the available parser modules depending on how your log data is structured. Once running, honeytail will take care of uploading all the data in your log file and picking up new data as it comes in.
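For example, a minimal invocation with those settings (using placeholder values and the JSON parser, as in the examples later in this guide) looks like:

honeytail --writekey=YOUR_WRITE_KEY --dataset='My App' --parser=json --file=/var/log/app/myapp.log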

Honeytail is open source—we encourage auditing the software you will run on your servers. We also happily consider pull requests with new log format parsers and other improvements.

Installation

honeytail will tail existing log files, parse the content, and send it up to Honeycomb. You can view its source here.

Download and install the latest honeytail by running:

wget -q https://honeycomb.io/download/honeytail/linux/honeytail_1.574_amd64.deb && \
      echo 'ef1abcb7f22597099ba825999b9bcca0252c7230a876c4fc69c34666191b8678  honeytail_1.574_amd64.deb' | sha256sum -c && \
      sudo dpkg -i honeytail_1.574_amd64.deb

The package installs honeytail, its config file /etc/honeytail/honeytail.conf, and some start scripts. The binary itself is just honeytail, available if you need it in unpackaged form or for ad-hoc use.

You should modify the config file, uncommenting and setting at minimum the required options: ParserName, WriteKey, LogFiles, and Dataset.
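For example, the relevant portion of /etc/honeytail/honeytail.conf might end up looking like this (with placeholder values):

[Required Options]
ParserName = json
WriteKey = YOUR_WRITE_KEY
LogFiles = /var/log/app/myapp.log
Dataset = My App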

The docs pages for JSON, NGINX, MongoDB, MySQL, and regex have more detail on additional options to set for each parser. The other available options are all described in the config file and below.

Launch honeytail by hand with honeytail -c /etc/honeytail/honeytail.conf or using the standard sudo initctl start honeytail (upstart) or sudo systemctl start honeytail (systemd) commands.

honeytail will automatically start back up after rebooting your system. To disable this, put the word manual in /etc/init/honeytail.override (upstart) or run systemctl disable honeytail (systemd).
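For example, to prevent honeytail from starting at boot:

# upstart
echo manual | sudo tee /etc/init/honeytail.override
# systemd
sudo systemctl disable honeytail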

Launch the agent

Start up a honeytail process using upstart or systemd or by launching the process by hand. This will tail the log file specified in the config and leave the process running as a daemon.

$ sudo initctl start honeytail
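Or, if your system uses systemd:

$ sudo systemctl start honeytail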

Backfilling Existing Data

If you have a number of old log files that you’d like to load into Honeycomb once, use the --backfill flag for honeytail.

Note: honeytail does not unzip log files, so you’ll need to decompress them before backfilling.
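For example, if your rotated logs are gzipped, you might decompress them with something like:

gunzip /var/log/app/myapp.log.*.gz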

Here’s an example honeytail invocation to pull in multiple existing logs and as much of the current log as possible:

honeytail -c /etc/honeytail/honeytail.conf \
  --file=/var/log/app/myapp.log.* --file=/var/log/app/myapp.log \
  --backfill

Here’s what happens when you run this command.

Honeytail will read all the content in the old logs and then stop. When it finishes, you’re ready to send new log lines. By default, honeytail keeps track of its progress in a state file and, if interrupted, will pick back up where it left off. When you relaunch honeytail pointed at the main app log, it will find the state file it created while reading the backlog and start where it left off.

Here’s the second honeytail invocation, where it will tail the current log file and send in recent entries:

honeytail --writekey=YOUR_WRITE_KEY --parser=json --dataset='My App' --file=/var/log/app/myapp.log

Note: We enforce a rate limit in order to protect our servers from abuse. This can be raised on a case-by-case basis; please contact us to lift your limit.

Troubleshooting

Below, find some general debugging tips when trying to send data to Honeycomb. As always, we’re happy to help with any additional problems you might have!

New data doesn’t show up in Honeycomb, and new dataset doesn’t appear on dashboard

“Datasets” are created when we first begin receiving data under a new “Dataset Name” (used/specified by all of our SDKs and agents). If you don’t see an expected dataset yet, our servers most likely haven’t received anything from you yet. To figure out why, the simplest step is to add a --debug flag to your honeytail invocation. This will output information about whether lines are being parsed, whether they are failing to send to our servers, or whether honeytail is receiving any input at all.
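For example, run your usual invocation with the flag added:

honeytail -c /etc/honeytail/honeytail.conf --debug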

Another useful thing to try is adding --status_interval=1 to your flags, which will output a line like the one below every second (newlines added for legibility):

INFO[0002] Summary of sent events   avg_duration=295.783µs
                                    count_per_status=map[400:10]
                                    errors=map[]
                                    fastest=259.689µs
                                    response_bodies=map[request body is too large:10]
                                    slowest=348.297µs
                                    total=10

The total here is the number of events sent to Honeycomb; the rest are stats characterizing how those events were sent and received. (A total=0 value would tell us that honeytail isn’t sending any events at all.) In the output above, we can see that all 10 events were rejected by the server with a 400 status because the request body was too large.

New events don’t appear in an existing dataset

When using honeytail, the --dataset (-d for short) argument will determine the name of the dataset created on Honeycomb’s servers. If you’re writing into an existing dataset, the quickest way to check for new data is the SAMPLES link in the dataset header:

Samples in the Dataset header

Clicking SAMPLES ⬇ will trigger a small screen to pop down from the header, containing the ten events most recently received for that dataset.

If you don’t see your new events appear, try the --debug or --status_interval=1 flags (change 1 to 5 to see the summary every 5 seconds).

honeytail doesn’t seem to be progressing on my log file

Are you trying to send data from an existing file? honeytail’s default behavior is to watch files and process newly appended data. To send data that is already in a file, make sure to use the --backfill flag; this makes honeytail read the file from the beginning and exit when finished.

Existing timestamp values aren’t respected

Our JSON parser makes a best-effort attempt to parse and understand timestamps in your JSON logs. Take a look at the Timestamp parsing section of the JSON docs to see timestamp formats understood by default.

If you suspect your timestamp format is unconventional, or the time field is keyed by an unconventional field name, providing --json.timefield and --json.format arguments will nudge honeytail in the right direction.
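For example, suppose your events keyed their timestamp under a field named ts (a hypothetical name) in an ISO 8601-like format; the invocation might look roughly like the sketch below. Adjust the field name and format string to match your actual logs.

honeytail --writekey=YOUR_WRITE_KEY --parser=json --dataset='My App' --file=/var/log/app/myapp.log \
  --json.timefield=ts --json.format='%Y-%m-%dT%H:%M:%SZ'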

Sampling High Volume Data

Let’s say you have an incredible volume of log content and your website gets hit frequently enough that you will still get excellent data quality even if you only look at 1/20th of the traffic. Honeytail can sample the log file and, for every 20 lines, send only one of them. It does so randomly, so you won’t see every 20th line being sent - instead, each line has a 5% chance of being sent.

When these log lines reach Honeycomb, they will include metadata indicating that each one represents 20 similar lines, so all your graphs will show accurate total counts.

honeytail --writekey=YOUR_WRITE_KEY --dataset='Webtier' --parser=nginx --file=/var/log/nginx/access.log \
  --samplerate 20 --nginx.conf /etc/nginx/nginx.conf --nginx.format main

Adjusting the sample rate based on the content of your events lets you keep important but infrequent events while discarding less important, higher-volume traffic. Honeytail has a dynamic sampler that varies the sample rate based on the contents of a field of your choice - values that occur more frequently are sampled more heavily.

For example, suppose that successful web traffic (HTTP status codes in the 200 range) is much more frequent than errored traffic (status codes in the 500s) - you might want to discard more of the successful traffic and keep more of the errored traffic. Applying the dynamic sampler to the status field in your nginx traffic will have this effect. The actual sample rate applied will vary based on the cardinality of the chosen field and the frequency of each value, but it will be in the ballpark of the sample rate specified.

honeytail --writekey=YOUR_WRITE_KEY --dataset='Webtier' --parser=nginx --file=/var/log/nginx/access.log \
  --samplerate 20 --nginx.conf /etc/nginx/nginx.conf --nginx.format main --dynsampling status

You can specify the --dynsampling flag multiple times; honeytail will then sample traffic based on the frequency and uniqueness of the concatenated values of all the fields you specify, as in the sketch below.
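For example, to sample on the combination of status and request_method (assuming both fields are present in your parsed events):

honeytail --writekey=YOUR_WRITE_KEY --dataset='Webtier' --parser=nginx --file=/var/log/nginx/access.log \
  --samplerate 20 --nginx.conf /etc/nginx/nginx.conf --nginx.format main \
  --dynsampling status --dynsampling request_method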

Adding Extra Information into Your Log

It’s not unusual for a log to omit interesting information like the name of the machine on which the process is running. After all, you’re on that machine, right? Why would you add the hostname? Log transports like rsyslog will prepend the sending host’s name to each log line, but if you’re sending logs directly from each host, this data may not exist. Honeytail lets you add extra fields to each event sent up to Honeycomb with the --add_field flag.

For this example, let’s assume that you have nginx running as a web server in both your production and staging environments, and that your shell sets $ENV to the environment name (prod or staging). Here is how to run honeytail to consume your nginx log and add the hostname and environment to each log line:

honeytail --writekey=YOUR_WRITE_KEY --dataset='Webtier' --parser=nginx --file=/var/log/nginx/access.log \
  --nginx.conf /etc/nginx/nginx.conf --nginx.format main \
  --add_field hostname=$(hostname) --add_field env=$ENV

Dropping or Scrubbing Fields

Sometimes you will have fields in your log file that you don’t want to send to Honeycomb, or that you want to obscure before letting them leave your servers. For this example, let’s say your log contains a large text field with the contents of an email, large enough that you don’t want it sent up to Honeycomb. The log also contains some sensitive information, like a person’s birthday. You want to be able to ask questions about the most common birthdays, but you don’t want to expose the actual birthdays outside your infrastructure.

Honeytail has two flags that will help you accomplish these goals. --drop_field will remove a field before sending the event to Honeycomb and --scrub_field will subject the value of a field to a SHA256 hash before sending it along. You will still be able to do inclusion and frequency analysis on the hashed fields (as there will be a 1-1 mapping of value to hashed value) but the actual value will be obscured.

Here is your honeytail invocation:

honeytail --writekey=YOUR_WRITE_KEY --dataset='My App' --parser=json --file=/var/log/app/myapp.log \
  --drop_field email_content --scrub_field birthday

Versioning honeytail Config

The honeytail binary supports reading its config from a config file as well as command line arguments. To get started, if you’ve already been using a few command line arguments, add an additional flag: --write_current_config. This will write your current config to STDOUT so you can use it as a starting point.

$ honeytail -p mysql -k YOUR_WRITE_KEY -d YOUR_DATASET -f ./mysql-slow.log --write_current_config
[Required Options]
; Parser module to use. Use --list to list available options.
ParserName = mysql

; Team write key
WriteKey = YOUR_WRITE_KEY

; Log file(s) to parse. Use '-' for STDIN, use this flag multiple times to tail multiple files, or use a glob (/path/to/foo-*.log)
LogFiles = ./mysql-slow.log

; Name of the dataset
Dataset = YOUR_DATASET

This can be particularly useful for versioning or productionizing honeytail use—or for providing additional configuration when using advanced honeytail features like scrubbing sensitive fields or parsing custom URL structures.

Once the config file is saved, simply run honeytail with a -c argument in lieu of all of the other flags:

$ honeytail -p mysql -k YOUR_WRITE_KEY -d YOUR_DATASET -f ./mysql-slow.log \
    --scrub_field=field_name_1 --scrub_field=field_name_2 \
    --write_current_config > ./scrubbed_mysql.conf
$ honeytail -c ./scrubbed_mysql.conf

Parsing URL Patterns

honeytail can break URLs up into their component parts, storing extra information in additional columns. This behavior is turned on by default for the request field on nginx datasets, but can become more useful with a little bit of guidance from you.

There are several flags that adjust the behavior of honeytail as it breaks apart URLs.

Identifying the URL Field

When using the nginx parser, honeytail looks for a field named request. When using a different parser (such as the JSON parser), you should specify the name of the field that contains the URL with the --request_shape flag.

Using this flag creates a few generated fields. A request field containing a value like:

GET /alpha/beta/gamma?foo=1&bar=2 HTTP/1.1

… will produce nginx events for Honeycomb that look like:

field name | value | description
request | GET /alpha/beta/gamma?foo=1&bar=2 HTTP/1.1 | the full original request
request_method | GET | the HTTP method, if it exists
request_protocol_version | HTTP/1.1 | the HTTP version string
request_uri | /alpha/beta/gamma?foo=1&bar=2 | the unmodified URL (not including the method or version)
request_path | /alpha/beta/gamma | just the path portion of the URL
request_query | foo=1&bar=2 | just the query string portion of the URL
request_shape | /alpha/beta/gamma?foo=?&bar=? | a normalized version of the URL
request_pathshape | /alpha/beta/gamma | a normalized version of the path portion of the URL
request_queryshape | foo=?&bar=? | a normalized version of the query portion of the URL

(The generated fields will all be prefixed by the field name given to --request_shape; in the above example, request. Use the --shape_prefix flag to prepend an additional string to these generated field names.)
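For example, here is a sketch using the JSON parser, assuming your events store the URL in a field named url (a hypothetical name); --shape_prefix=app would prepend app to each of the generated field names:

honeytail ... \ # other arguments
  --parser=json --request_shape=url --shape_prefix=app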

If the URL field contains just the URL, the request_method and request_protocol_version fields will be omitted.

URL Normalization

The path portion of the URL (from the beginning / up to the ? that separates the path from the query) can be grouped by common patterns, as is common for REST interfaces.

For example, given URL fragments like:

/books/978-0812536362
/books/978-9995788940

We can break the fragments into a field containing the generic endpoint (/books/:isbn) and a separate field for the ISBN itself by specifying a --request_pattern flag:

honeytail ... \ # other arguments
  --parser=nginx --request_pattern=/books/:isbn

This will produce, among other fields:

request_path | request_shape | request_path_isbn | (other fields)
/books/978-0812536362 | /books/:isbn | 978-0812536362 | …
/books/978-9995788940 | /books/:isbn | 978-9995788940 | …

You can specify multiple --request_pattern flags and they’ll be considered in order. The first one to match a URL will be used. Patterns should represent the entire path portion of the URL - include a “*” at the end to match arbitrary additional segments.

For example, if we have a wider variety of URL fragments, like:

/books/978-0812536362
/books/978-3161484100/borrow
/books/978-9995788940
/books/978-9995788940/borrow

We can provide our additional --request_pattern flags and track a wider variety of request_shapes:

honeytail ... \ # other arguments
  --parser=nginx --request_pattern=/books/:isbn/borrow --request_pattern=/books/:isbn

We’ll see our request_path_isbn populated as before, as the :isbn parameter is respected in both patterns:

request_path | request_shape | request_path_isbn | (other fields)
/books/978-0812536362 | /books/:isbn | 978-0812536362 | …
/books/978-3161484100/borrow | /books/:isbn/borrow | 978-3161484100 | …
/books/978-9995788940 | /books/:isbn | 978-9995788940 | …
/books/978-9995788940/borrow | /books/:isbn/borrow | 978-9995788940 | …

A URL’s query string can be broken apart similarly using the --request_query_keys flag; the generated fields are named like <field>_query_<keyname>.

If, on top of our previous examples, our URL fragments had query strings like:

/books/978-0812536362?borrower_id=23597

Providing --request_query_keys=borrower_id would give us a Honeycomb event with a request_query_borrower_id field whose value is 23597.
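A full invocation combining this with the earlier pattern might look like:

honeytail ... \ # other arguments
  --parser=nginx --request_pattern=/books/:isbn --request_query_keys=borrower_id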

If you would like to automatically create a field for every key in the query string, you can use the flag --request_parse_query=all. This will automatically create a new field <field>_query_<key> for every query parameter encountered in the query string. For any publicly accessible web server, it is likely that this will quickly create many useless columns because of all the random traffic on the internet.

For more detail and examples see our urlshaper package on GitHub.