Austin Parker
A parhelion is created when light refracts through hexagonal ice crystals in the atmosphere, forming bright spots that appear on the horizon, connected by a faint halo. You don’t have to squint very hard to appreciate how relevant this is to our current AI moment.
Erwin van der Koogh
Charity Majors
Kale Bogdanovs
Midge Pickett
The new Honeycomb MCP course in the Honeycomb Academy gives you a starting point when you're not sure what to ask, and teaches you how to direct an investigation so you're getting evidence, not just answers.
Mike Goldsmith
This is what a semantic convention migration looks like in practice: not a clean cutover, but months of coexistence where old and new attribute names overlap. In this post, I'll explain why this happens, how the OpenTelemetry Collector's schema processor is designed to automate migrations in both directions, and what we're actively working on to get it into a state where everyone can use it.
Ken Rimple
Observability is the visibility you need to get the job done. Sending telemetry to Honeycomb shows what your agents are actually doing. OpenTelemetry provides semantic conventions for generative AI systems: a spec that defines how agents, LLMs, MCPs, and tools should be observed. The primary telemetry is defined as trace spans and other events with specific naming patterns, mostly prefixed with gen_ai.
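As a rough sketch, the gen_ai-prefixed attribute shape looks like this. The attribute names come from the OpenTelemetry generative AI semantic conventions; the model name and token counts are purely illustrative values:

```python
# Illustrative span attributes for a single LLM "chat" call, following
# the OpenTelemetry gen_ai semantic conventions. In a real application
# these would be set on a trace span; a plain dict shows the shape.
span_attributes = {
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "gpt-4o",        # illustrative model name
    "gen_ai.usage.input_tokens": 412,        # illustrative token counts
    "gen_ai.usage.output_tokens": 87,
}

# Every attribute in the convention shares the gen_ai. prefix.
assert all(key.startswith("gen_ai.") for key in span_attributes)
```

Because the names are standardized, any backend that understands the convention can group and compare LLM calls across services without custom mapping.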
One of the customers I’m currently working with is a large financial institution with a robust three-pillar implementation. Every critical application ships its telemetry to its cloud-native tool, a central tool, or both. This worked fine when they had relatively monolithic applications, but as their architecture moves toward a service-based one, it’s getting harder to manage.
Rox Williams
Over the last three months, we’ve been exploring what changes about software development and observability with AI, and what doesn’t. Our conclusion: these five principles will remain true, even when 90% of the code is AI-driven.
Alex Boten
The performance impact of instrumentation on running applications should be minimized wherever possible, and this is what led to the investigation described in this article.
We got a ton of great questions from attendees, and I didn't have time to answer all of them during the session. So, here are my answers to the ones I found most interesting, and most representative of what people are actually grappling with right now.
Douglas Soo
The following is the set of theory and practice I’ve arrived at for adapting to what is, and continues to be, one of the most rapid changes to how work is done in arguably any career. It’s worth noting that none of this is predicated on whether you are an AI believer or a skeptic. Even if you believe that 90% of what people are claiming about AI is just hype, the remaining 10% can still radically change what it means to be a software developer.
The previous posts in this series looked at some of the use cases Honeycomb customers are implementing to observe LLMs in production and power agentic observability workflows. In this final post, we’ll take it back to basics and look at how the fundamental capabilities and infrastructure of Honeycomb provide the comprehensive data and fast performance that makes these use cases work at scale.
In our previous post, we looked at how Honeycomb provides unique visibility into LLMs operating in your production environment. Now, let’s explore how Honeycomb provides observability insights uniquely suited to helping your AI agents rapidly diagnose and fix production issues.
AI agents are rewriting how software is built and operated. In this series, you’ll learn about 12 use cases across LLM observability, agent debugging, MCP-powered coding agents, and automated AI investigations that prove Honeycomb is the observability platform built for what comes next.
Abdullah Chowdhury
Honeycomb's KubeCon + CloudNativeCon Europe recap: why observability and fast feedback loops are essential as AI reshapes how software is built and run.
Josh Parsons
In early 2019, I was ramping up and wrapping my head around our areas of ownership, and that was also around the time when Liz Fong-Jones left Google and joined Honeycomb. Her arrival introduced me to the world of observability, which forever shifted my perspective on how my engineering colleagues and I could be empowered to deeply and collaboratively understand and manage the services and systems in our care.
With the release of our Agent Skills, we’ve used our domain knowledge to make your agents even more powerful. Instead of having to copy and paste blocks of markdown into Claude Code, we've distilled everything we know about using Honeycomb into a set of core skills, packaged it up, and are making it open source today!
Your data doesn’t become linearly more powerful as you add more context; it becomes exponentially, combinatorially more powerful with each added attribute.
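A quick way to see the combinatorics: with n attributes, there are 2^n − 1 distinct non-empty group-by combinations, so each added attribute roughly doubles the ways you can slice the data. A minimal sketch (the attribute names here are hypothetical):

```python
from itertools import combinations

# Four hypothetical event attributes; the point is the count, not the names.
attributes = ["service.name", "region", "customer_id", "build_id"]

# Every non-empty subset of attributes is a distinct group-by you can run.
groupings = [subset
             for r in range(1, len(attributes) + 1)
             for subset in combinations(attributes, r)]

print(len(groupings))  # 2^4 - 1 = 15; a fifth attribute makes it 31
```

That doubling is the "combinatorial" part of the claim: the value of an attribute comes from every intersection it unlocks with the attributes you already have.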
Nick Travaglini
When we compose teams or staff an incident review, we almost always use identity as a proxy for perspective. We include someone from platform, someone from the application layer, someone from the team that owns the affected service. We assume that different roles and tenures will produce different mental models of the problem, and sometimes that assumption holds. But research on how people actually build mental models of complex systems suggests it fails more often than we'd expect.
In early February, Martin Fowler and the good folks at Thoughtworks sponsored a small, invite-only unconference in Snowbird, Utah—birthplace of the Agile Manifesto—to talk about how software engineering is changing in the AI-native era. The longer I sit with this recap, the more troubled I am by what it doesn't say. I worry that the most respected minds in software are unintentionally replicating a serious blind spot that has haunted software engineering for decades: relegating production to the realm of bugs and incidents.
Martin Thwaites
The idea behind shifting things to the left, meaning moving certain actions to the start of the process rather than the middle or the end, is to increase development efficiency by reducing rework and changes. We remove waste from the process by getting it right the first time.
This guide gives you a more rigorous framework for evaluating observability tools in an era where your AI assistant depends on them as much as your engineers do. The criteria that matter most are not the ones that show up first in a sales cycle.