We listened. Simpler Pricing. You’re welcome.


I’ve tackled this question before: how much should my observability stack cost?

While the things in that post are as true now as ever, I did end on one somewhat vague conclusion. When it came to figuring out exactly what you need in your stack by drawing a straight line from the business case to the money you spend, my conclusion was that “it depends.”

That’s how we approached pricing at Honeycomb: it depends on your needs, so we should give you many different options. But in practice, that meant our customers often spent time fiddling with sliders on their usage page in response to spiky traffic patterns and devising clever ways to map event volume to gigabytes of storage. It worked (sorta), but it led to unpredictable spend for many teams and incentivized the wrong behaviors.

I’m happy to announce that today we rolled out a simplified pricing model based entirely on event volume, one that doesn’t penalize you for gathering up rich loads of data.

Costs should be predictable

You should have a plenty long runway when it comes to data retention. Most teams opt for less, but we’ve found that two months’ worth of data can be helpful in tracking down the most hidden of problems. As of today, all plans include 60-day retention on all data sets.

You shouldn’t have to worry about figuring out ways to calculate how many events equal 1GB of data storage. As of today, data size is no longer a constraint on any plan. Billing is segmented solely by event volume. No additional charges per user, per service, per server, or per anything else. Just event volume. That’s it.

Engineers who own their code in production can usually ballpark their workload in terms of events/second. Similarly, you should be able to easily tell whether Honeycomb can ingest your workload without sampling or, if not, how much sampling you’d need to fit into one of our plans. Check out this useful breakdown of what constitutes an event for Honeycomb for tips on how to get there.
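Here’s a quick back-of-the-envelope sketch of that math in Python. The plan cap below is a made-up number for illustration, not an actual Honeycomb tier:

```python
# Back-of-the-envelope sizing: convert a steady-state event rate into
# events per month, then estimate the sampling needed to fit under a cap.
# The plan cap is a hypothetical figure, not an actual Honeycomb tier.

SECONDS_PER_MONTH = 60 * 60 * 24 * 30  # roughly a 30-day month

def events_per_month(events_per_second: float) -> float:
    return events_per_second * SECONDS_PER_MONTH

def required_sample_rate(events_per_second: float, plan_cap_epm: float) -> float:
    """Return the 1-in-N sample rate needed to stay under the plan cap."""
    epm = events_per_month(events_per_second)
    return max(1.0, epm / plan_cap_epm)

if __name__ == "__main__":
    rate = 150            # events/second, a ballpark for your service
    cap = 100_000_000     # hypothetical plan cap: 100M events per month
    print(f"{events_per_month(rate):,.0f} events/month")
    print(f"sample 1 in {required_sample_rate(rate, cap):.1f} events to fit")
```

At 150 events/second you’d land around 389M events/month, so you’d sample roughly 1 in 4 events to fit under a 100M cap.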

Yes, “it depends” is still the answer for what exactly you need. (It will ALWAYS depend, and as the Dread Pirate Roberts once said, “anyone who says differently is selling something.”) With this new approach, we give everyone basic auto-instrumentation and then some, by default. But you also have flexibility. You can still add whatever custom dimensions you may want, and we’ve built in mechanisms that help you deal with the unexpected.

Traffic isn’t always predictable

Our new plans center around Events Per Month (EPM). But everyone knows that predicting workloads is never that simple. Enter Honeycomb’s new Burst Protection.

Burst Protection accounts for unexpected spikes in traffic by triggering automatically whenever you exceed your daily event target by a factor of 2x or more. Once activated, the excess events for that day won’t count against your EPM. Let’s say your normal traffic is 20M events/day. On Thursday, you see a big abnormal spike to 50M events. With Burst Protection, those additional 30M events wouldn’t count against your EPM limits.
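To make the arithmetic concrete, here’s a tiny sketch of that example. It’s an illustration of the math as I described it above, not our actual billing logic:

```python
# Illustration of the Burst Protection example above, not Honeycomb's
# billing code. If a day's traffic hits 2x (or more) of your daily target,
# only the target counts toward your EPM for that day.

def billable_events(daily_events: int, daily_target: int) -> int:
    if daily_events >= 2 * daily_target:
        return daily_target   # burst day: the excess above target is free
    return daily_events       # normal day: everything counts

# The Thursday spike from the example: 20M/day baseline, 50M actual.
print(billable_events(50_000_000, 20_000_000))  # 20000000 (the extra 30M is free)
```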

Which means, when you think about how your workload fits into Honeycomb’s pricing, you don’t need to worry about unpredictable anomalies. We’ve got you covered.

Add infinitely more detail to your events, for free!

I have to point out one more super cool thing about this pricing scheme, which is that, from your perspective, it doesn’t matter how wide your event is. We charge for one event whether it has three dimensions or three hundred. It’s the inverse of metrics-based pricing schemes, where you get charged extra for every custom metric you define. We want to encourage you to pile on the rich event-level instrumentation. Literally anything you think might be useful, pile it on! No charge.
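For instance, here’s roughly what that looks like with libhoney, our Python SDK. The field names and values are invented for the example; check the libhoney docs for current setup details:

```python
# One wide event: whether it carries three fields or three hundred, it
# bills as a single event. Field names and values below are made up.
import libhoney

libhoney.init(writekey="YOUR_WRITE_KEY", dataset="checkout-service")

ev = libhoney.new_event()
ev.add({
    "name": "checkout",
    "duration_ms": 212.4,
    "user_id": 42,
    "cart_items": 7,
    "payment_provider": "stripe",
    "feature_flag.new_totals": True,
    "build_id": "2020-06-10.3",
    # ...keep piling on context; it's still exactly one event.
})
ev.send()
libhoney.close()  # flush outstanding events before the process exits
```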

Why? Because we never want to incentivize you to capture less detail. Your Honeycomb events get more and more interesting and useful the more ways you can correlate, slice and dice, match, and group them. The richer your events are, the more powerful your dataset becomes for debugging, asking detailed questions, and surfacing unknown-unknowns and unexpected correlations.

We wanted our pricing scheme to motivate you to do the right things and not do the wrong things. Therefore, appending another bit of deliciously identifying information to an existing event will always be free.

So you want to build an o11y stack?

Engineers cost money. They’re expensive and recruiting them is hard. Engineering cycles are the scarcest resource in our world. Focusing engineers on non-mission-critical work, inferior tooling, one-offs, frustrating maintenance work, and other initiatives that have nothing to do with core business value is absurd. It always has been.

But in a time of plenty, teams often wriggle out of making hard choices about where to invest those cycles. Directors hire vanity teams to build an in-house observability stack, or they indulge a valued principal engineer who really wants to build a database and is threatening to leave and do it elsewhere. They let themselves be persuaded that their o11y problems are special snowflakes (hint: they aren’t). Then all of that magical thinking tends to come to an abrupt halt at the next downturn.

That’s why now, during a global pandemic, companies are more focused than ever on the investments they make and how best to proceed in times of uncertainty. Building in-house observability tools is not an investment that strengthens core competencies. You can’t do it better than a company for whom observability *is* their core competency, I promise you that. And even if you plausibly could, you also can’t afford the distraction and opportunity cost of those lost eng cycles.

Outsourcing happens in fresh waves during every economic downturn. When you know you won’t get to hire waves of engineers for the foreseeable future, it forces you to think critically about where you actually need to focus your effort. The proof that these outsourcing choices were good decisions? Few, if any, of them ever reverse course to go back in-house when times are flush again. It’s pretty much a one-way street.

You do what you do best. And turn to us for what we do best.

Did I mention the Free Plan?

All those things I mentioned before—60-day data retention, a focus on event volume instead of data size, and handling spiky traffic with Burst Protection—are all also available in our FREE plan.

As of today, the recently announced Free plan lets you ingest up to 20M events per month. That’s not enough for you to run in production with any kind of realistic load, but it is PLENTY big enough for you to solve real production problems that no other tool can.

Having trouble convincing your team or your manager that Honeycomb is different or worthwhile? Well, code wins arguments. Set a few minutes aside for installing a Beeline and solving one of your intractable high cardinality problems, and just show them. Or try running BubbleUp and wow your team by turning up the problems they didn’t know were there. The Free plan is also great for smaller or experimental projects. No risk, no charge, and no trial period. Try it out! Let us know what you think.
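If you want a feel for what those few minutes look like, here’s a rough sketch with the Python Beeline. The write key, dataset, and the traced function are placeholders; the Beeline docs have the real quickstart for your language:

```python
# Rough sketch: initialize the Python Beeline and trace one function.
# Write key, dataset, and the function below are placeholders.
# Install first with: pip install honeycomb-beeline
import beeline

beeline.init(
    writekey="YOUR_WRITE_KEY",       # from your Honeycomb account
    dataset="my-first-dataset",      # placeholder dataset name
    service_name="checkout-service", # placeholder service name
)

@beeline.traced(name="lookup_order")  # each traced call becomes an event/span
def lookup_order(order_id):
    beeline.add_context_field("order_id", order_id)  # high-cardinality context
    return {"order_id": order_id, "status": "shipped"}  # stand-in for real work

lookup_order("ord_12345")
beeline.close()  # flush pending events before the process exits
```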


Check out our new pricing plans and sign up for a Free account or try all of our Enterprise features during a 14-day trial. For more details on the new pricing plans and features, go to our new pricing FAQ.

Charity Majors

CTO

Charity Majors is the co-founder and CTO of honeycomb.io. She pioneered the concept of modern Observability, drawing on her years of experience building and managing massive distributed systems at Parse (acquired by Facebook), Facebook, and Linden Lab, building Second Life. She is the co-author of Observability Engineering and Database Reliability Engineering (O’Reilly). She loves free speech, free software, and single malt scotch.
