
Understanding OpenTelemetry’s Browser Instrumentation

By Winston Hearn  |   Last modified on May 7, 2024

Recently, Honeycomb released a Web Instrumentation package built around the OpenTelemetry browser JS packages. In this post, I’ll go over what the OpenTelemetry auto-instrumentation package gives you, and what Honeycomb’s distribution adds in order to give you even more insight into your web services.

If you're interested in instrumenting your website or app with OpenTelemetry, our package drastically reduces the setup work required, and it easily works with Honeycomb or any other vendor that accepts OpenTelemetry! You can follow the getting started guide to test it out.
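As a rough sketch of what that setup looks like (package and option names here follow Honeycomb's published Web SDK, but treat them as illustrative and check the getting started guide for the current API):

```javascript
// Minimal setup sketch for Honeycomb's web instrumentation package.
// The apiKey and serviceName values are placeholders.
import { HoneycombWebSDK } from '@honeycombio/opentelemetry-web';
import { getWebAutoInstrumentations } from '@opentelemetry/auto-instrumentations-web';

const sdk = new HoneycombWebSDK({
  apiKey: 'your-honeycomb-ingest-key', // placeholder
  serviceName: 'my-web-app',           // placeholder
  instrumentations: [getWebAutoInstrumentations()],
});
sdk.start();
```

Once `start()` is called, the auto-instrumentation begins producing spans and sending them to whatever OpenTelemetry-compatible backend you've configured.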

The OpenTelemetry browser auto-instrumentation gives you a good bit of data out of the box that is similar to what you might expect from a proprietary real user monitoring (RUM) agent, but via an open-source library. Last year, the package was updated (in good part due to work by Honeycomb developers) to take advantage of webpack's tree-shaking capabilities, bringing its size down to 300kb uncompressed and just over 60kb compressed, in line with popular RUM vendors' agent sizes. If you're exploring RUM vendors, it's well worth looking at OpenTelemetry as an alternative to avoid vendor lock-in. Of course, that's only feasible if you can get the data you need, so let's dive into what you get out of the package.

Overview of auto-instrumentation

The OpenTelemetry browser instrumentation has a few built-in packages for instrumentation. Here's an overview of the data it collects and what questions you can answer with it.
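Using plain OpenTelemetry, wiring up these built-in packages looks roughly like this (a sketch based on the standard `@opentelemetry/*` browser packages; exporter configuration is omitted for brevity):

```javascript
// Sketch: registering the browser auto-instrumentation packages
// with a web tracer provider. No exporter is configured here, so
// spans would go nowhere until one is added.
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { DocumentLoadInstrumentation } from '@opentelemetry/instrumentation-document-load';
import { UserInteractionInstrumentation } from '@opentelemetry/instrumentation-user-interaction';

const provider = new WebTracerProvider();
provider.register();

registerInstrumentations({
  instrumentations: [
    new DocumentLoadInstrumentation(),
    new UserInteractionInstrumentation(),
  ],
});
```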

Document load

When a user visits a page on your website, their browser downloads the HTML and starts loading all of the resources necessary to render the page. The document load instrumentation package collects every resource loaded during page load into a single trace. The top level of the trace tracks the entire duration of the page load, and each resource loaded on the page is attached as a span. OpenTelemetry's browser auto-instrumentation uses the browser performance APIs to gather data about everything loaded on the page. These APIs track information about all the requests your page makes while it is loading, which gives you a rich set of data:

  • All JavaScript, CSS, and media assets that your page requests: these are tracked in request order, giving you a waterfall view of every asset as the browser loads it.
  • Timing for every asset: along with knowing which assets are downloaded, you'll get a series of timestamps for each request, such as when it started, when it was sent to the server, how long it took to fulfill, and when it completed. This lets you see the full duration of the request, as well as break it down into its components.
  • Render-blocking resources: a newer API, available in certain browsers, tells you whether a given asset was render-blocking, meaning the browser could not show the user content until the asset had finished downloading. Minimizing blocking assets helps a page show up faster, which makes your users happier.
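The underlying data comes straight from the browser's Performance API. Here's a minimal sketch of reading those same resource-timing entries yourself; the field names are standard PerformanceResourceTiming properties, and renderBlockingStatus is only present in browsers that support it:

```javascript
// Sketch: summarizing resource-timing entries, the same data the
// document load instrumentation turns into spans.
function summarizeResources() {
  return performance.getEntriesByType('resource').map((entry) => ({
    url: entry.name,                    // asset URL
    initiator: entry.initiatorType,     // script, link, img, fetch, ...
    durationMs: entry.duration,         // total request time
    serverWaitMs: entry.responseStart - entry.requestStart,
    renderBlocking: entry.renderBlockingStatus ?? 'unsupported',
  }));
}
```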

The data collected from these performance APIs helps you understand how any given page loads. The timing data is useful to monitor as metrics, but its value goes beyond metrics. It's more important to ask questions like:

  • How fast are my pages loading? This info can be aggregated across your site, but it gets more interesting when you can slice it by a variety of correlated information, such as specific page route, screen size, or custom attributes you add, to understand the specifics of how users experience your web page.
  • What are the largest assets my pages are loading? This helps you find slow assets that need to be optimized.
  • What assets impact my page load times? Answering this question also gives you areas of improvement for page speed.
  • Where am I loading assets from? If you have a complicated web service that uses CDNs and APIs, a common problem is trying to get a comprehensive mapping of where all the assets come from. This data is helpful for answering that question.

User interactions

Along with document load data, the other main auto-instrumentation package collects user interaction data. This data is not collected by default, so you can set it up to collect only the user events you care about. Potential options are clicks, form submits, keypresses, and any other browser events you care to instrument.
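Configuring which events to capture is a constructor option on the instrumentation. A sketch, using the `eventNames` option from the `@opentelemetry/instrumentation-user-interaction` package (the event list here is illustrative):

```javascript
// Sketch: enabling the user-interaction instrumentation for
// specific browser events only.
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { UserInteractionInstrumentation } from '@opentelemetry/instrumentation-user-interaction';

registerInstrumentations({
  instrumentations: [
    new UserInteractionInstrumentation({
      // Only these events produce spans; anything else is ignored.
      eventNames: ['click', 'submit', 'keypress'],
    }),
  ],
});
```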

These events give you insight into what users do on your site. With clicks, you can see which buttons are clicked and how users navigate your pages.

Questions you can answer with this data:

  • Which buttons and forms are most popular with users? This data helps you see how users move through your sites and what is popular.
  • Which buttons and forms are not useful? The inverse is just as telling: seeing that new features or high-value paths go unused points you toward areas for optimization and improvement.

Going beyond core auto-instrumentation

The two packages above are what's included in web auto-instrumentation. The data they collect is powerful, but it doesn't cover questions that matter in web optimization today, namely Core Web Vitals investigations and a full picture of how users interact with your web services.

That's where Honeycomb's web instrumentation expands the value of OpenTelemetry and allows you to answer even more questions about what users experience in your web service.

Core Web Vitals

Google's Core Web Vitals (CWV) are important metrics for SEO and for measuring user experience on the web. Most RUM tools available today track them as metrics, giving you insight into where things are or are not working on your web service. Unfortunately, because they are collected as metrics, you won't be able to understand the causes behind poor scores. Metrics are just numbers; when it comes to debugging, you need context around them. That's where Honeycomb's web instrumentation helps. Out of the box, our instrumentation collects not just the Google-defined metrics, but also the attribution data that explains what caused each metric. This attribution data helps you answer questions like:

  • What are the specific elements on a given page that are causing poor CWV scores? The data Honeycomb can surface in near real-time for this question gives you actionable information to determine how to improve the score.
  • What are the pages that affect my CWV scores the most? Sometimes, your CWV scores are not related to a single page, but a specific type of page in your CMS or web app. The attribution data, correlated to all the other data you collect, helps you understand if your scores are related to a single page problem, or a set of pages that have a common cause.
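For a sense of what attribution data looks like, here's a sketch using the open-source web-vitals library's attribution build, which exposes the element or resource behind each metric (Honeycomb's instrumentation collects this kind of data for you):

```javascript
// Sketch: collecting LCP and CLS with attribution via the
// web-vitals library's attribution build.
import { onLCP, onCLS } from 'web-vitals/attribution';

onLCP((metric) => {
  // attribution.element is a selector for the element that was
  // the largest contentful paint candidate.
  console.log('LCP', metric.value, metric.attribution.element);
});

onCLS((metric) => {
  // attribution.largestShiftTarget points at the element that
  // contributed the biggest layout shift.
  console.log('CLS', metric.value, metric.attribution.largestShiftTarget);
});
```

In a real setup you'd attach these values to spans or events rather than logging them, so they can be correlated with the rest of your telemetry.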

Session context

When a user visits your website, they may stay on it for a while (this is definitely true if your website is actually a web service where they need to perform a series of tasks). If that's the case, it's helpful to know the context about their journey, such as the type of device (often proxied by screen size), how they came to your site (which page they landed on, where they came from), and other contextual details. This data is not collected in OpenTelemetry auto-instrumentation, but it is collected in Honeycomb's web instrumentation.
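One way to add this kind of context with plain OpenTelemetry is a span processor that stamps session attributes onto every span. A sketch, with illustrative attribute names (Honeycomb's web instrumentation manages session attributes for you, and the `spanProcessors` constructor option depends on your SDK version):

```javascript
// Sketch: stamping session context onto every span via a custom
// span processor. Attribute names here are illustrative.
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';

const sessionId = crypto.randomUUID(); // one id per page session (simplified)

class SessionContextProcessor {
  onStart(span) {
    span.setAttributes({
      'session.id': sessionId,
      'browser.screen_width': window.screen.width, // proxy for device type
      'page.referrer': document.referrer,          // where the user came from
    });
  }
  onEnd() {}
  forceFlush() { return Promise.resolve(); }
  shutdown() { return Promise.resolve(); }
}

const provider = new WebTracerProvider({
  spanProcessors: [new SessionContextProcessor()],
});
provider.register();
```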

Collecting this data gives you a far richer set of information to slice and dice against, leading to more accurate insights and stronger hypotheses as you debug page performance issues and make sense of how your users experience your web services.

Conclusion

In this post, I've summarized both the data collected by the OpenTelemetry browser instrumentation and the Honeycomb web instrumentation built on top of the OpenTelemetry packages. If you're interested in answering questions like I listed above, we have easy-to-follow documentation that can have you collecting data within an hour.

Interested in learning more about frontend observability? Join me in my webinar, It’s 2024, The Frontend Deserves Observability Too. You’ll learn:

  • How to instrument your web frontend with OpenTelemetry
  • What observability 2.0 practices look like
  • How to use custom instrumentation to answer your specific questions

Register today!

 
