Another one in the history books: 2024 is (almost!) over. The OpenObservability Talks podcast, hosted by Dotan Horovits, recently featured a lively discussion with Charity Majors, Co-founder and CTO of Honeycomb, to reflect on the trends, achievements, and future of observability.
OpenTelemetry’s breakthrough year
2024 was a pivotal year for OpenTelemetry (OTel), which surpassed Kubernetes as the largest CNCF project by contributors. Adoption of OpenTelemetry has soared, signaling a move away from vendor lock-in and reinforcing the value of unified, standardized data collection.
From observability 1.0 to 2.0
The shift from “three pillars” observability (metrics, logs, traces) to a unified, structured approach dominated the discussion. Observability 2.0 emphasizes:
- A single source of truth: Storing data as wide structured log events, from which you can derive all other data types.
- Cost-effectiveness: Unified data storage reduces operational overhead and unlocks powerful insights, removing the cost multiplier of storing the same data many times across disparate tools.
- The ability to slice and dice data: You don’t know today what Future You will need to investigate, so it’s crucial to retain as much high-cardinality data as possible, and to have a quick way to compute outliers and identify correlations.
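To make the "single source of truth" idea concrete, here is a minimal sketch of a wide structured event: one record per request, carrying many fields (including high-cardinality ones), from which both a metric and a traditional log line can be derived. All field names and values are illustrative, not a Honeycomb schema.

```python
import json

# Hypothetical wide event: one structured record per request,
# with many fields in a single place, including high-cardinality ones.
event = {
    "timestamp": "2024-12-01T12:00:00Z",
    "service": "checkout",           # illustrative service name
    "trace_id": "4bf92f3577b34da6",  # links the event to its trace
    "duration_ms": 238,
    "http.status_code": 200,
    "user.id": "u-81723",            # high-cardinality: unique per user
    "cart.items": 4,
}

# Other data types can be derived from the same record later:
is_error = event["http.status_code"] >= 500  # feeds an error-rate metric
log_line = json.dumps(event)                 # a conventional log entry
```

Because every field lives on one event, you pay to store the data once and can still answer metric-, log-, and trace-shaped questions from it.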
The AI hype in observability
Artificial intelligence is a hot topic, but Charity urged caution against overpromising. While AI can streamline data processing and insight generation, its impact is most meaningful when paired with strong foundational practices. Observability is crucial for understanding AI systems, as tracing and context remain indispensable for debugging and optimization. Investments in AI tooling should augment, not replace, human expertise.
The rise of platform engineering
Platform engineering matured significantly in 2024, with teams adopting product-oriented mindsets to manage the boundary between infrastructure and application code. This approach emphasizes creating self-service, developer-friendly platforms that streamline operations.
Charity noted the importance of holistic organizational change to support platform engineering, ensuring developers can own, understand, and operate their code.
Controlling costs
Managing observability costs at scale remains challenging, particularly for services with high request volumes. Charity emphasized the critical role of intelligent sampling:
- Capture all errors and anomalies while reducing noise from routine traffic.
- Use tail-based sampling to prioritize data that matters most.
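Those two bullets can be sketched as a single tail-sampling decision: because the choice is made after a trace completes, the sampler can see every span before deciding. This is a minimal illustration, not Honeycomb's sampling implementation; the latency threshold and keep rate are invented for the example.

```python
import random

def keep_trace(spans, keep_rate=0.01):
    """Tail-based sampling: decide after the whole trace has arrived.

    Keep every trace that contains an error or a latency outlier;
    sample routine traffic down to keep_rate. Thresholds here are
    illustrative, not defaults of any real product.
    """
    if any(s["status"] == "error" for s in spans):
        return True                        # capture all errors
    if max(s["duration_ms"] for s in spans) > 1000:
        return True                        # capture slow outliers
    return random.random() < keep_rate     # thin out routine traffic

trace = [
    {"name": "GET /cart", "status": "ok",    "duration_ms": 42},
    {"name": "db.query",  "status": "error", "duration_ms": 7},
]
keep_trace(trace)  # True: the trace contains an error
```

The key property is that interesting data survives at full fidelity while the bulk of healthy, repetitive traffic is reduced, which is where the cost savings come from.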
Observability as a development tool
The conversation underscored that observability isn’t just about operational metrics; it’s a tool for faster, more effective development. Engineers can leverage observability to:
- Measure the impact of feature releases.
- Identify unexpected user behaviors.
- Iterate quickly and confidently.
Observability accelerates feedback loops, enabling teams to respond to changes and innovate at a competitive pace.
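As one sketch of "measuring the impact of feature releases": if each wide event is tagged with the feature flag that was active, you can slice latency by flag value and compare the two populations directly. The events, field names, and numbers below are hypothetical.

```python
from statistics import median

# Hypothetical events emitted around a release, each tagged with
# the state of an (invented) feature flag.
events = [
    {"feature.new_pricing": False, "duration_ms": 120},
    {"feature.new_pricing": False, "duration_ms": 135},
    {"feature.new_pricing": True,  "duration_ms": 95},
    {"feature.new_pricing": True,  "duration_ms": 88},
]

def median_latency(events, flag_value):
    """Slice events by flag value and summarize latency."""
    return median(e["duration_ms"] for e in events
                  if e["feature.new_pricing"] is flag_value)

before = median_latency(events, False)  # flag off
after = median_latency(events, True)    # flag on
```

The same slice-and-compare move works for error rates or any other field, which is what makes observability a development-time feedback tool rather than just an operational one.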
Looking ahead to 2025
With the momentum gained in 2024, the future of observability looks promising. As organizations embrace observability 2.0 with its single, unified data source and AI integrations, they stand to achieve greater efficiency, scalability, and resilience.