While we believe Honeycomb is the next generation of observability tooling for any sort of complex system, there are some scenarios in which Honeycomb is in a class of its own.
As a platform, you’re exposed to a greater level of chaos than anyone else: usage patterns and workloads vary by customer and circumstance, and one bad actor can easily be drowned out by your normal, stable traffic. Complex multitenant systems exacerbate the issue: that one bad actor can easily hog resources meant to be shared by the whole system, and identifying the culprit without the proper tooling can be painful.
Honeycomb is that tool, built to support Parse’s approach for ensuring the health of its Mongo deployment in the face of traffic from over a million distinct mobile apps.
Honeycomb believes in blazing-fast analysis of the high-cardinality datasets produced when any customer might matter, while still producing the same averages, percentiles, and breakdowns you rely on to gauge the health of your overall system. Given the increased level of chaos faced by SaaS platforms, it’s nigh impossible to predict the cause of your next outage. Take advantage of Honeycomb’s flexibility and power to make sure you have enough information at your fingertips when it happens.
Context matters. That’s true in general when debugging interactions between components of your system, and especially true when introspecting a particularly complex component like your database, where a single “query of death” can spell doom for any number of subsequent queries.
Trying to debug degenerate database performance without access to the original query can be like working with one hand tied behind your back— theoretically possible given enough preparation, but so much more painful than it has to be.
Honeycomb combines the best of both worlds, from log aggregators to time series metrics systems, in order to be best-in-class for database debugging. You can get speedy graphs describing your system’s high-level performance characteristics while still being able to break down by table, query family, or even original query.
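The “query family” grouping mentioned above can be thought of as normalizing away literal values so that structurally identical queries aggregate together. Here is a minimal sketch of that idea; the `query_family` function and its regex rules are illustrative assumptions, not Honeycomb’s actual normalization logic.

```python
import re

def query_family(query: str) -> str:
    """Collapse a raw query into a 'family' by stripping literal values,
    so structurally identical queries group together under one key."""
    family = re.sub(r"'[^']*'", "?", query)           # quoted string literals -> ?
    family = re.sub(r"\b\d+(\.\d+)?\b", "?", family)  # numeric literals -> ?
    return re.sub(r"\s+", " ", family).strip()        # normalize whitespace

raw = "SELECT * FROM users WHERE id = 42 AND name = 'alice'"
print(query_family(raw))  # SELECT * FROM users WHERE id = ? AND name = ?
```

With both the family and the original query stored on each event, you can graph aggregate performance per family and still drill down to the exact query that misbehaved.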
One of Honeycomb’s beliefs is that “fast and mostly right is better than slow and perfectly accurate.” When you’re investigating some unknown in your system, the problem is just that: you’re not sure where to start or what the culprit might be. By prioritizing the speed of individual queries, iterating until you find a concrete trail to follow becomes simpler and, dare we say it, enjoyable.
You rarely know ahead of time exactly what combination of attributes might be correlated with some degenerate behavior. Sometimes it’s as simple as a bad build ID (still tricky to track or query by, in many monitoring solutions!) or as esoteric as a particular combination of customer + replica set; either way, Honeycomb makes it simple to store lots of metadata without paying the performance cost of maintaining additional indices or reading more data from disk.
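In practice, “storing lots of metadata” just means emitting wide, structured events where every dimension is another key/value pair. The sketch below is a hypothetical event builder: the field names (`build_id`, `replica_set`, and so on) are illustrative, not a fixed schema, and this is generic JSON construction rather than any particular SDK’s API.

```python
import json
import time

def build_event(**fields):
    """Assemble a wide, structured event. Adding another dimension is
    just another keyword argument -- no index or schema to maintain."""
    event = {"timestamp": time.time()}
    event.update(fields)
    return event

# Hypothetical field names for a database-query event.
event = build_event(
    customer_id="app-7f3a",
    build_id="2017.04.11-3",
    replica_set="rs-shard-12",
    query_family="SELECT * FROM users WHERE id = ?",
    duration_ms=184.2,
)
print(json.dumps(event, indent=2))
```

Because each attribute is just another field on the event, a breakdown by an esoteric combination like customer plus replica set costs no more to express than a breakdown by build ID.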
Known problems are easy to monitor and track; it’s the unknowns lurking in your system that are worth investing in. Don’t get stuck trying to predict the future, and don’t make reliability solely an ops problem: everybody deserves realtime observability.