Amazon’s Relational Database Service (RDS) lets you use a number of databases without having to administer them yourself. The Honeycomb RDS connector gives you access to the same data as if you were running MySQL on your own server.
The Honeycomb RDS connector surfaces attributes parsed from each slow query log entry.
Honeycomb allows you to calculate metrics and statistics on the fly while retaining the full-resolution log lines (and the original MySQL query that started it all).
Once you’ve got data flowing, be sure to take a look at our starter queries; these entry points offer our recommendations for comparing lock retention by normalized query, scan efficiency by table, or read vs. write distribution by host.
Note: Run the following commands from any Linux host with the appropriate AWS credentials to access the RDS API.
Before running the RDS connector, configure MySQL running on RDS to output the slow query log to a file. Refer to Amazon’s documentation on setting Parameter Groups to get started, and find more detail about the configuration options below in the MySQL docs for the slow query log.
Set the following options in the Parameter Group:
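As a sketch, a Parameter Group that writes the slow query log to a file typically sets parameters along these lines (the values here are illustrative; consult the MySQL slow query log documentation for values appropriate to your workload):

```ini
slow_query_log  = 1      ; enable the slow query log
long_query_time = 1      ; log queries running longer than 1 second
log_output      = FILE   ; write to a file rather than a table, so rdslogs can read it
```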
If you switch to a new Parameter Group when you make these changes, make sure you restart the database.
Once you’ve made these changes, verify you are getting RDS logs via the RDS Console.
rdslogs will stream the MySQL slow query log from RDS or download older log files.
It can stream them to STDOUT or directly to Honeycomb.
You can view the rdslogs source here.
Get and verify the current Linux version of rdslogs:
wget -q https://honeycomb.io/download/rdslogs/rdslogs_1.66_amd64.deb && \
  echo '93df1a8ce71a86d4abb88f4492be79855dbb275eff728dc85f4bbb68efc863b1  rdslogs_1.66_amd64.deb' | sha256sum -c && \
  sudo dpkg -i rdslogs_1.66_amd64.deb
wget -q https://honeycomb.io/download/rdslogs/rdslogs-1.66-1.x86_64.rpm && \
  echo '4bd4a517c34babb2ec02ac99c235bd8c26c859a6c0bde7332d06432cf942da67  rdslogs-1.66-1.x86_64.rpm' | sha256sum -c && \
  sudo rpm -i rdslogs-1.66-1.x86_64.rpm
wget -q -O rdslogs https://honeycomb.io/download/rdslogs/1.66 && \
  echo '85d062c8e12542866da1961c42a22cc5c446a903020222a1d0bfbabcb3da25fc  rdslogs' | sha256sum -c && \
  chmod 755 ./rdslogs
Use the rdslogs command with the --output flag set to honeycomb to connect to RDS and send data from the current log to Honeycomb.
rdslogs -i <instance-identifier> --output=honeycomb --writekey=YOUR_WRITE_KEY \
  --dataset='RDS MySQL'
Use the --sample_rate flag to send a subset of your data (1 out of every N log lines; N defaults to 1, meaning no sampling). Sampling in Honeycomb is described in detail in Sampling high volume data.
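One way to picture 1/N sampling: each event you keep stands in for N events, so aggregate counts can be recovered by re-weighting. A minimal sketch of the idea (not rdslogs’ actual implementation, which selects lines at ingest time):

```python
def sample(events, n):
    """Keep every n-th event: a deterministic 1-in-n sample for illustration."""
    return events[::n]

events = list(range(1000))          # stand-ins for slow query log lines
kept = sample(events, n=10)

# Each kept event represents n original events, so counts are
# recovered by multiplying by the sample rate:
estimated_total = len(kept) * 10
print(len(kept), estimated_total)   # 100 1000
```

In practice Honeycomb stores the sample rate alongside each event and does this re-weighting for you at query time.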
We believe strongly in the value of being able to track down the precise query causing a problem. At the same time, we understand the concerns of exporting log data that may contain sensitive user information, so you have the option of hashing the contents of the data returned by a query.
To hash the concrete query, add the query-scrubbing flag. The normalized_query attribute will still be representative of the shape of the query, and identifying patterns (including specific queries) will still be possible, but the sensitive information will be completely obscured before leaving your servers.
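To illustrate what scrubbing preserves and what it hides, here is a sketch of the two fields side by side. The `normalize` function and event shape are simplified assumptions for illustration, not rdslogs’ internals:

```python
import hashlib
import re

def normalize(query: str) -> str:
    """Crude normalization for illustration: replace literals with '?'."""
    return re.sub(r"\d+|'[^']*'", "?", query)

query = "SELECT * FROM users WHERE email = 'jane@example.com' AND id = 42"

event = {
    # The concrete query leaves your server only as an irreversible hash,
    # still useful for spotting repeats of the exact same query:
    "query": hashlib.sha256(query.encode()).hexdigest(),
    # The normalized shape remains readable and groupable in Honeycomb:
    "normalized_query": normalize(query),
}
print(event["normalized_query"])  # SELECT * FROM users WHERE email = ? AND id = ?
```

Two identical concrete queries hash to the same value, so you can still count and compare them without ever seeing the literal values they contained.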
For more information about dropping or scrubbing sensitive fields, see “Dropping or scrubbing fields” in the Agent documentation section.
If you’re getting started with Honeycomb, you can load the past 24 hours of logs into Honeycomb to start finding interesting things right away. Launch this command to run in the background (it will take some time) while you hook up the live stream. (However, if you just now enabled the slow query log, you won’t have the past 24 hours of logs. You can skip this step and go straight to streaming.)
The following command will download all available slow query logs to a newly created slow_logs directory and then start up honeytail to send the parsed events to Honeycomb. You’ll need your RDS instance identifier (from the instances page of the RDS Console) and your Honeycomb write key (from your Honeycomb account page).
mkdir slow_logs && \
  rdslogs -i <instance-identifier> --download --download_dir=slow_logs && \
  honeytail --writekey=YOUR_WRITE_KEY --dataset='RDS MySQL' --parser=mysql \
    --file='slow_logs/*' --backfill
Once you’ve finished backfilling your old logs, we recommend transitioning to the default streaming behavior to stream current logs.