Get Phillip Carter’s book: Observability for Large Language Models.
Take a drink if you hate that term.
Tim O’Reilly has written a post about the huge shift AI is bringing to our relationship with software engineering (read: The End of Programming as We Know It).
Scary title.
But the content is less so. Why?
The more things change…
How many of you started with technologies you still use exactly the same way today? Even if you’re a master Lisp/Clojure developer, you’ve still evolved, right? I should really get my head around those parentheses some day.
The post starts with the familiar progression of tech, so I’ll spare you my C64 -> [scene missing] -> AI code assistance fable. We’re all there with various starting lines.
…the more things stay the same
What encourages me from Tim’s post is that he sees not a legion of talented programmers out of work because of AI, but a legion of talented programmers enabled to focus more deeply on solving business problems. AI is going to cause a lot of new tools to be written, and we’ll need people who can help understand what’s being generated.
Of course, it’s very helpful if you’re a React expert and you ask Cursor to create a nice Next.js application and use your favorite design engine.
Tim can see a future where someone more deeply involved with the business could also participate, building starter projects and concepts, even actual running software. This person could partner with a more experienced engineer with both technical depth and stronger AI experience/tools who could shape it further.
I know the big fear here is loss of control. The fear is real. So, how can we wrestle with it better?
AI code as a level of abstraction
I don’t think about my video drivers anymore (well, mainly because I no longer have a Surface Book running Linux, which is its own wonderfully bad disaster story). In the same way, it could be possible in a few years to treat generated code a little more like that.
There is only so much any one person can keep in their working memory. Take frontend development, for example. You can build a React application in so many different ways, using a client state engine, server rendering, testing techniques, various forms libraries, pre-rendering, pre-generating, client components, server components, tons of frontend component libraries, the list goes on and on. And the problem is? Yep, every client I’ve run into does it differently.
Using AI as a smarter version of the Google/Stack Overflow/GitHub Repo loop to remind you of common patterns is really helpful. I’ve forgotten much more technology than I’ve retained in my career. Getting a simple baseline sample going in a few minutes with ChatGPT spares me from having to completely remember or deeply learn an API or pattern in a quick experiment.
It’s not all roses and ice cream
Some of the more difficult things are iterative. I could load some sample code into the context and start the conversation with, “We need to split this application’s features into two sections and secure each one to a different role.”
Then, I’ll end up in a feedback loop for a few hours/days as the chat helps me iterate on the ideas, and I can show progress to our stakeholders along the way.
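To make that scenario a little more concrete, here’s the kind of first draft the chat might hand back for “secure each section to a different role”: a minimal Next.js middleware sketch. The section paths, the role names, and the `getRoleFromSession` helper are all placeholders I made up for illustration; a real app would read the role from whatever auth provider you actually use.

```typescript
// middleware.ts -- a minimal sketch, not production auth.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

// Placeholder: swap in your real session/auth lookup (NextAuth, Clerk, etc.).
function getRoleFromSession(request: NextRequest): string | undefined {
  return request.cookies.get("role")?.value;
}

// Each secured section maps to the single role allowed to see it.
const sectionRoles: Record<string, string> = {
  "/admin": "admin",
  "/reports": "analyst",
};

export function middleware(request: NextRequest) {
  const role = getRoleFromSession(request);

  for (const [prefix, requiredRole] of Object.entries(sectionRoles)) {
    if (request.nextUrl.pathname.startsWith(prefix) && role !== requiredRole) {
      // Not authorized for this section: bounce to a shared landing page.
      return NextResponse.redirect(new URL("/unauthorized", request.url));
    }
  }
  return NextResponse.next();
}

// Only run the middleware for the two secured sections.
export const config = {
  matcher: ["/admin/:path*", "/reports/:path*"],
};
```

A sketch like this is exactly the sort of thing you iterate on over those hours/days: the first version is rarely the right shape, but it gives the stakeholders something running to react to.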
You have to be okay with small experiments, committing incrementally, and rolling them back. COMMIT frequently. ROLLBACK frequently. Test ideas out quickly (feature flags, anyone? OpenTelemetry instrumentation to prove or challenge assumptions?). Honeycomb and observability-driven development, for example, fit quite nicely here.
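Here’s a tiny illustration of what I mean by pairing a flag with instrumentation, so the telemetry can prove or challenge the assumption. The flag lookup, the span name, and the checkout functions are stand-ins I invented; the only real API here is `@opentelemetry/api`.

```typescript
// A sketch: guard an experiment behind a flag and record which path ran,
// so you can compare error rates and latency per flag state.
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("checkout-experiment");

// Stand-in for your feature flag provider (LaunchDarkly, Unleash, an env var, etc.).
function isEnabled(flag: string): boolean {
  return process.env[`FLAG_${flag.toUpperCase()}`] === "true";
}

// Hypothetical old and new code paths for the experiment.
async function legacyCheckout(cartId: string) {
  return { cartId, flow: "legacy" };
}
async function newCheckout(cartId: string) {
  return { cartId, flow: "new" };
}

export async function checkout(cartId: string) {
  return tracer.startActiveSpan("checkout", async (span) => {
    const useNewFlow = isEnabled("new_checkout");
    // Tag the span so queries can slice results by flag state.
    span.setAttribute("feature_flag.new_checkout", useNewFlow);
    try {
      return useNewFlow ? await newCheckout(cartId) : await legacyCheckout(cartId);
    } finally {
      span.end();
    }
  });
}
```

Flip the flag, watch the spans in your observability tool, and you have an actual feedback loop instead of a hunch.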
Focusing on getting better at prompting will help a bit, as will loading relevant information into the context. Anthropic’s example prompts for Claude may prove interesting.
A human in the AI loop is essential
Don’t be fooled into thinking a GPT is great for every task; it relies on the vast amounts of content it was trained on or can retrieve. That content had better be there, or be similar enough, for it to score a win. Hopefully the resources are findable online for a chat API to query.
A new API or SDK that doesn’t have great documentation, or where sample code isn’t online, won’t be nearly as accessible to a chatbot as the well-trodden Ruby on Rails API. The latter is simply based on code and conversations that have been online for almost two decades.
Humans will have to review the generated guesswork. I look forward to a day when hallucinations and semi-correct-looking answers happen less frequently; they’re a real problem right now.
If you’re looking at things that are brand new, it’s likely you’re going to have to get a lot more creative with your prompts, use analogies, or just abort the loop for a bit and read the APIs you’re attempting to use. The chatbot is an assistant, not an intelligent human with our intuition. Even Anthropic’s Pro AI claims you’d use it for 50% of your coding tasks, not 100%. I find even that claim to be a bit stretched.
But if we think that kind of work will be done automatically, by a completely closed loop of “I tell the AI to do it, and I get a finished product,” I have suspicions. It’s all about feedback loops, and the human must be in there. I suspect the massive mistake of “Let’s lay off 90% of our programmers because AI can do it” is about to run headlong into “Oops, we don’t have enough people to get things done anymore, or even to verify that the AI-generated code works!”
And sometimes, after a big argument with the chat tool, you have to bail and go do it the old way. I’m trying to get better at this myself. Sometimes you ask ChatGPT to fix one thing and it breaks another, and you get into a loop where each fix breaks something else.
What do you think?