Query Assistant Notice/Usage Policy and FAQ

Last Updated: May 3, 2023

Notice and Usage Policies

Query Assistant uses machine learning systems, such as a Large Language Model (LLM) based on a generative pre-trained transformer, to assist in creating Honeycomb Queries using natural language. The intent of Query Assistant is twofold:

  1. To help users who are new to Honeycomb’s UI get value out of our product faster. By using natural language, new users face less of a learning curve when they begin querying their data.
  2. To make routine tasks faster so any user can spend more time refining a query in a core analysis loop.

Query Assistant does not replace the usage of the Query Builder UI.

Your usage of Query Assistant is subject to the following additional terms:

  1. Agreement. By using the Query Assistant, you agree to the following additional terms, incorporated into and governed by the subscription agreement between you and Honeycomb. To the extent there is an inconsistency between the terms in the subscription agreement and these additional terms, these additional terms control. Use of Query Assistant is further subject to OpenAI’s Terms of Use. If you do not agree to these additional terms, then you must not use the Query Assistant.
  2. Data Processing. You grant Honeycomb permission to use and share your natural language input and the names of fields in your schema with OpenAI’s API to generate a query with an LLM. You agree that OpenAI’s access to this data is subject to its Privacy Policy.
  3. Updates. Honeycomb may update or terminate Query Assistant at any time without any liability to you. You understand that Query Assistant is experimental functionality and that Honeycomb may decide not to release a commercial version of it.
  4. Ownership. Honeycomb makes no claims of ownership over queries generated with Query Assistant.
  5. Disclaimer. Query Assistant is provided AS-IS without any express or implied warranties. Query Assistant is not subject to any service level agreements or any warranties you may have in the subscription agreement. You use Query Assistant at your own risk. Although effort is made to produce useful queries, Honeycomb does not guarantee the accuracy of queries generated using Query Assistant, and accepts no responsibility for any action taken based on a query generated by Query Assistant. It is the user’s responsibility to ensure that any query they use is accurate for their needs. The user retains responsibility for ensuring their data and queries comply with any applicable policies.

FAQs

Is Query Assistant experimental?

Query Assistant is now a core product offering, following a period during which it was experimental. Moreover, we are exploring how AI can be employed to improve the overall Honeycomb product experience, starting with querying. Our guiding philosophy is that we’re here to build mecha suits, not robots. Our product is and will remain human-centered in its approach to incorporating AI.

Why do I only get 50 queries per day?

We currently limit your usage to 50 natural language queries per day. This limit is something we may revisit in the future.

Does Honeycomb use ChatGPT to generate queries?

No. Honeycomb uses OpenAI’s API instead of ChatGPT. OpenAI’s API exposes the base Large Language Model (LLM) that ChatGPT also uses. ChatGPT adds additional layers of machine learning systems suited for a general-purpose chat application and uses a subset of the data it receives to further train its systems. The systems ChatGPT adds atop the LLM are not part of Honeycomb’s product implementation.

Will my data be used by OpenAI to train machine learning models?

No. OpenAI does not train models on data sent via its API. OpenAI does retain API data for a short period of time to monitor for abuse and misuse. We do not use its opt-in mechanism for training and have no plans to offer that as an option for users at this time.

What data does Honeycomb send to OpenAI’s API?

Honeycomb sends your natural language input and the names of fields in your dataset schema to OpenAI for the purpose of generating a runnable query based on your input. We do not send any identifying information, such as team name or user name. We do not send the values of the data you send to Honeycomb.
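For illustration only, the sketch below shows the general shape of such a request against the OpenAI API. It is not Honeycomb’s actual implementation; the model name, prompt wording, and field names are hypothetical. The point is simply that only the natural language input and schema field names are sent, never field values.

```python
# Purely illustrative sketch -- NOT Honeycomb's actual implementation.
# Only a natural language question and schema field names (no values)
# are included in the request. Model name, prompt, and fields are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

natural_language_input = "show me the slowest endpoints in the last hour"
schema_field_names = ["duration_ms", "http.route", "service.name", "trace.trace_id"]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Translate the user's request into a Honeycomb query. "
                "Available columns: " + ", ".join(schema_field_names)
            ),
        },
        {"role": "user", "content": natural_language_input},
    ],
)

print(response.choices[0].message.content)  # candidate query specification
```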

Is any data sent to OpenAI passively?

Yes. Column names in a schema are sent to OpenAI’s text embedding model API to produce a Vector Embedding of that schema. This vector embedding is stored in Honeycomb’s infrastructure. Otherwise, no other data is currently sent to OpenAI passively. Your natural language input is only sent when you execute a natural language query. In the future, we may explore other machine learning-assisted features that would send data to a model without a direct user interaction.
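As a purely hypothetical illustration (again, not Honeycomb’s actual implementation), the sketch below shows how column names alone might be sent to OpenAI’s text embedding API. The model name and column names are assumptions; the resulting vectors would be stored in Honeycomb’s infrastructure, not by OpenAI.

```python
# Purely illustrative sketch -- NOT Honeycomb's actual implementation.
# Column names (and only column names) are embedded; no column values are sent.
from openai import OpenAI

client = OpenAI()

column_names = ["duration_ms", "http.route", "service.name", "error"]

response = client.embeddings.create(
    model="text-embedding-ada-002",  # hypothetical model choice
    input=column_names,
)

# One embedding vector per column name, keyed by name for later lookup.
schema_embeddings = {
    name: item.embedding for name, item in zip(column_names, response.data)
}
```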

Can I turn off Query Assistant?

Yes. If you are a team owner, you can turn it off for a whole team. Otherwise, you can collapse the UI and that state will persist.

Can I use Query Assistant if I signed a BAA?

No. We do not currently offer Query Assistant to any customer who has signed a BAA with us. We will notify any customer who has signed a BAA when we can make it available to them.

Why does Query Assistant not notice old columns?

When determining which columns to include in a query, Query Assistant filters out columns that have not received any data in the last 7 days. This is done to reduce the amount of data used to create a vector embedding of a schema.
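The following minimal sketch illustrates that filtering rule. It is not Honeycomb’s actual code; the bookkeeping structure mapping each column to the time it last received data is a hypothetical stand-in.

```python
# Purely illustrative sketch -- NOT Honeycomb's actual implementation.
# Drop columns whose most recent data is older than 7 days before
# building the schema embedding.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=7)

def active_columns(last_seen_by_column, now=None):
    """Return column names that have received data within the last 7 days.

    `last_seen_by_column` maps column name -> datetime the column last
    received data (a hypothetical bookkeeping structure for this sketch).
    """
    now = now or datetime.now(timezone.utc)
    return [
        name
        for name, last_seen in last_seen_by_column.items()
        if now - last_seen <= STALE_AFTER
    ]
```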

Why does Query Assistant give a different answer for the same input?

Large Language Models are nondeterministic. While we do our best to achieve a degree of consistency for similar inputs, we cannot guarantee the same query for the same input each time. If you care about having a consistent query to run, we recommend saving the query to a board for later use.
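For background, sampling parameters such as temperature are one way API callers commonly reduce, but cannot eliminate, this variation. The sketch below is illustrative only and says nothing about Honeycomb’s actual settings; the model name and prompt are hypothetical.

```python
# Purely illustrative sketch -- NOT Honeycomb's actual configuration.
# A lower temperature reduces output variation but does not make an LLM
# fully deterministic.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # hypothetical model choice
    temperature=0,          # less variation, not zero variation
    messages=[{"role": "user", "content": "show me the slowest endpoints"}],
)
```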

I see that OpenAI may be a subprocessor. What personal data do you send to OpenAI?

Honeycomb does not automatically send any personal data or telemetry data to OpenAI. However, customers may choose to enter personal data into the Query Assistant, which would cause this data to be transmitted to the OpenAI API. For example, if you ask the Query Assistant, “Show me all the requests from customer John Smith on August 1st,” the substring “John Smith” is technically personal data that would be transmitted to OpenAI by the Query Assistant as it attempts to generate a query matching your request. No query results or other personal data would be transmitted.

Does Honeycomb train ML models on my data?

No, Honeycomb does not use any data to train ML models today.

Will my queries or other kinds of data be used as training data for any future ML models?

Potentially, yes. There are future product features we’d like to explore around incorporating user feedback systematically to improve query generation. We’re also interested in exploring more personalized onboarding experiences based on the “shape” of the schema for the data customers send, such as field names. We have no plans to incorporate the data itself, and all data remains subject to our Data Retention window.