AI’s Unrealized Potential: Honeycomb and DORA on Smarter, More Reliable Development with LLMs

Enterprise software teams are eager to harness AI, but are they actually making software more reliable, or just producing more code? Join Charity Majors and Phillip Carter of Honeycomb, along with Nathen Harvey, DORA Lead, as they explore the challenges of AI usage in software engineering and how teams can avoid common pitfalls.

DORA’s latest research confirms what many engineers already suspect: more AI features and code don’t necessarily mean better software. While AI can boost perceived productivity, it also introduces new risks—low-quality code, unreliable outputs, bloated telemetry costs, and misleading shortcuts. However, with the right strategies, AI can become a powerful ally in producing useful, high-quality, and observable software.

Join us to learn:

  • The most common AI development mistakes—and how to avoid them
  • How to set up AI dev tooling for success
  • Why context is key when working with LLMs
  • How AI can improve rather than pollute your telemetry
  • Ways AI can help build more reliable systems with better observability

Discover how to use AI wisely to enhance observability and build smarter—not just faster.