Ship First, Model Later: A Short Recap of AI.Dev

In a keynote at AI.Dev, Robert Nishihara (CEO, Anyscale) described the shift: a year ago, the people working with ML models were ML experts. Now, they’re developers. A year ago, the process was to experiment with building a model, then put a product on top of it. Now, it’s ship a product, find product-market fit, then create customized models.

The general-purpose generative AI models available to all of us today (such as ChatGPT) change the way work is done. We can start developing on top of them with a few hours of prompt engineering. Products and features come first, because the models are there for us.

After our product is proven useful, it makes sense to optimize the models underneath it. Create or fine-tune models to suit that specific purpose, and they’ll be faster and cheaper than the big general-purpose ones from OpenAI or Anthropic.

I say models (plural), because that’s another shift: we aren’t using bare LLMs. At a minimum, we supplement them with relevant info (Retrieval Augmented Generation, or RAG). Many talks at AI.Dev described agents that use LLMs to create instructions for other LLMs, LxMs that generate something other than text (like images), and tools that can do math, code execution, database operations, etc. Building features using generative AI is now all about shipping and iterating.
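To make the RAG part concrete, here’s a rough sketch of the pattern in Python. The keyword-overlap retriever and the `call_llm` stand-in are placeholders I made up for illustration (real setups typically use embeddings and a vector store), not anything from a specific talk or product:

```python
# Minimal sketch of Retrieval Augmented Generation (RAG).
# The retriever and call_llm below are illustrative placeholders,
# not any particular product's implementation.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question.
    Real systems usually use embeddings and a vector database instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Stuff the retrieved snippets into the prompt so the model can ground its answer."""
    joined = "\n".join(f"- {snippet}" for snippet in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM API you're building on."""
    return f"(model response to: {prompt[:60]}...)"

docs = [
    "Query Assistant turns natural-language questions into Honeycomb queries.",
    "Traces show how a request moves through your services.",
]
context = retrieve("What does Query Assistant do?", docs)
print(call_llm(build_prompt("What does Query Assistant do?", context)))
```

The point isn’t the toy retriever; it’s that the supplementary context, the agents, and the tools all wrap the LLM call rather than replace it.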

To iterate, you need observability. Honeycomb’s CEO, Christine Yen, showed how we iterated on our Query Assistant feature by looking at inputs and outputs, plus in-product feedback. 

In Christine’s talk, someone asked: how do you measure whether a generated response was good? That’s still an open question. I talked to DeepChecks, a conference sponsor working on exactly that. Personally, I think we’ll wind up asking yet another LLM!
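If we do wind up asking another LLM, it might look roughly like this. The `call_llm` placeholder and the 1-to-5 rubric are my own assumptions for illustration, not anything DeepChecks (or we) have announced:

```python
# Rough sketch of "LLM as judge": score one model's answer with another model.
# call_llm is a placeholder for a real model API; the rubric is invented.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., your provider's chat API)."""
    return "4"  # pretend the judge model replied with a score

def judge_response(question: str, answer: str) -> int:
    """Ask a second model to rate an answer from 1 (bad) to 5 (great)."""
    prompt = (
        "Rate the answer to this question from 1 to 5 for accuracy and helpfulness. "
        "Reply with only the number.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    reply = call_llm(prompt)
    digits = "".join(ch for ch in reply if ch.isdigit())
    return int(digits) if digits else 0

score = judge_response("What does Query Assistant do?", "It writes Honeycomb queries for you.")
print(f"judge score: {score}")
```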

It’s exciting times in tech. Charles Herring, who spoke about Cassandra this time, said he was considering retiring—and then ChatGPT was released. This added broad new avenues of possibility for his work as CTO of WitFoo.

Lucky us, we get to explore these possibilities! I don’t have to learn the intricacies of models first—just a little prompt engineering. I’ll keep doing my favorite things: ship and iterate.

Enjoy conference recaps? Check out our comprehensive KubeCon NA 2023 recap!


Jessica Kerr

Manager, Developer Relations

Jess is a symmathecist, in the medium of code. She sees development teams as learning systems made of people and running software. If we make that software teach us what’s happening, it’s a better teammate. And if this process makes us into systems thinkers, we can be better persons in the world.
