Brian Chang | Dec 02, 2024
As the company experienced rapid growth, Duolingo remained steadfast in their commitment to delivering a high-quality user experience. This dedication led to the launch of a reliability initiative, which included the formation of a specialized team focused on observability. The engineering team recognized that comprehensive observability was critical to their mission.
Martin Thwaites | Nov 26, 2024
So, how do we get JSON logs into a backend analysis system like Honeycomb that primarily accepts OTLP data? There are a few ways to achieve this; in this post, we’ll cover how to use the filelog receiver component in the OpenTelemetry Collector to parse JSON log lines from log files.
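A minimal Collector configuration along those lines might look like the sketch below. The log file path, the `time` and `level` field names, and the environment variable name are placeholder assumptions for illustration; the receiver and exporter components themselves are standard OpenTelemetry Collector contrib components.

```yaml
receivers:
  filelog:
    # Placeholder path; point this at your application's JSON log files.
    include: [ /var/log/myapp/*.log ]
    operators:
      # Parse each line's JSON body into log record attributes.
      - type: json_parser
        # Assumes each log line carries "time" and "level" fields.
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S%z'
        severity:
          parse_from: attributes.level

exporters:
  otlphttp:
    endpoint: https://api.honeycomb.io
    headers:
      # Honeycomb API key, supplied via an environment variable.
      x-honeycomb-team: ${env:HONEYCOMB_API_KEY}

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlphttp]
```

With this pipeline, each JSON log line is parsed into a structured OTLP log record and shipped to Honeycomb over OTLP/HTTP.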
Brian Chang | Nov 25, 2024
OneFootball recognized that observability was essential to delivering a seamless experience—and as seasoned engineers, they prioritized having the right tool to achieve it. Identifying issues quickly, and resolving them before they impacted fans, required visibility across their entire platform. This mission led them to Honeycomb, setting the stage for a transformative journey in how they approach reliability and performance at scale.
Rox Williams | Nov 22, 2024
With more, and often smaller, processes, cloud-native architectures have driven the need for better insights into our software—a way to see how these processes fit together. To gain this insight, we use an approach that goes beyond traditional monitoring and provides deep visibility into system behavior. This approach is cloud observability.
Charity Majors | Nov 19, 2024
We’ve been talking about observability 2.0 a lot lately; what it means for telemetry and instrumentation, its practices and sociotechnical implications, and the dramatically different shape of its cost model. With all of these details swimming about, I’m afraid we’re already starting to lose sight of what matters. The distinction between observability 1.0 and observability 2.0 is not a laundry list, it’s not marketing speak, and it’s not that complicated or hard to understand. The distinction is a technical one, and it’s actually quite simple.
Rox Williams | Nov 06, 2024
In the software space, we spend a lot of time defining the terminology that describes our roles, implementations, and ways of working. These terms help us share fundamental concepts that improve our software and let us better manage our software solutions. To optimize your software solutions and help you implement system observability, this blog post will share the key differences between two important terms: traces and logs.
Fred Hebert | Nov 04, 2024
About a year ago, Honeycomb kicked off an internal experiment to structure how we do incident response. We looked at the usual severity-based approach (typically a SEV scale), but decided to adopt an approach based on incident types, which serve better as quick, shared definitions across multiple departments. This post is a short report on our experience doing it.
Quinn Leong | Oct 30, 2024
Brian Chang | Oct 29, 2024
Since its inception in 2004, Lansweeper has been at the forefront of helping businesses understand, manage, and protect their IT devices and networks through a powerful IT asset management platform. As the platform grew from an on-premises solution to a cloud-based SaaS offering, Lansweeper expanded its reach to a global, multi-region customer base. With this growth came the inevitable challenges of scaling observability and ensuring the engineering team could maintain performance and reliability across regions.
Jessica Kerr | Oct 28, 2024
Observability means you know what’s happening in your software systems, because they tell you. They tell you with telemetry: data emitted just for the people developing and operating the software. You already have telemetry: every log is a data point about something that happened. Structured logs or trace spans are even better, containing many pieces of data correlated in the same record. But you want to start from what you have, then improve it as you improve the software.
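To make the contrast concrete: a hypothetical unstructured log line like `2024-10-28T14:03:22Z ERROR payment failed for user u-4812` becomes far more useful as a structured record, where each piece of context is a queryable field. All field names and values below are invented for illustration:

```json
{
  "timestamp": "2024-10-28T14:03:22Z",
  "severity": "error",
  "message": "payment failed",
  "user_id": "u-4812",
  "payment_provider": "stripe",
  "duration_ms": 1240,
  "trace_id": "5c9e8aab21f04c76"
}
```

Because the fields are correlated in one record, you can filter by `payment_provider`, group by `user_id`, or jump to the trace via `trace_id`, none of which is practical with the flat text line.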
Nick Travaglini | Oct 23, 2024
As discussed in the first article in this series, a Center of Production Excellence (CoPE) is a more or less formal, provisional subsystem within an organization. Its purpose is to act from within to change that organization so that it’s more capable of achieving production excellence. The series has, to date, focused mainly on how best to construct such a subsystem and what activities it should pursue. In this concluding post, however, I want to return to the point of a CoPE, discuss signs of success, and evaluate the impacts it’s having.
Liz Fong-Jones | Oct 21, 2024
Let's be real, we've never been huge fans of conventional unstructured logs at Honeycomb. From the very start, our own code has emitted structured wide events and distributed traces with well-formed schemas. Fortunately (because it avoids reinventing the wheel) and unfortunately (because it doesn't adhere to our standards for observability) for us, not all the software we run is written by us. And Kubernetes is a prime example of such a load-bearing part of our infrastructure.
Mei Luo | Oct 16, 2024
At Honeycomb, we know how important it is for organizations to have a unified observability platform. This is why we’re launching Honeycomb Telemetry Pipeline and Honeycomb for Log Analytics: to enable engineering teams to send all their data—including logs—into a single, unified platform and analyze it there.