When using AI feels rewarding



The popular belief is that artificial intelligence will help to streamline production by making it more efficient and removing the human element where it is not needed.

I completely disagree with this sentiment.

After the past two years of engaging with AI-powered tools for manipulating code and information, I’ve found that the benefits to productivity (and, more importantly, the delight) arise only after developing what feels like a close relationship with the model. If this sounds silly, let me explain further.

When I’m asked about my working style, I’ll flatly admit that I sling a lot of mud at any moving target and that I’m really not much of a thinker. I love physics and statistics, but the joy that I derive from these subjects isn’t necessarily from uncovering any fundamental truths (though those are nice) but rather because both of these disciplines give their practitioners enormously powerful tools. For physics, one example would be the least-action principle and the accompanying Euler-Lagrange equation. For statistics, it’s Markov chain Monte Carlo applied to numerical integration.
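
To make that last example concrete, here’s a minimal sketch (mine, not from the post) of MCMC used as an integrator: a plain Metropolis sampler estimating an expectation under an unnormalized density, using only the standard library.

```python
import math
import random

def metropolis_expectation(log_density, f, x0=0.0, n_samples=50_000,
                           burn_in=5_000, step=1.0, seed=42):
    """Estimate E[f(X)] under an unnormalized density via Metropolis sampling."""
    rng = random.Random(seed)
    x = x0
    total = 0.0
    for i in range(n_samples + burn_in):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)), in log space.
        log_alpha = log_density(proposal) - log_density(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal
        if i >= burn_in:
            total += f(x)
    return total / n_samples

# Sanity check: E[X^2] under a standard normal is exactly 1.
estimate = metropolis_expectation(lambda x: -0.5 * x * x, lambda x: x * x)
```

The point is the leverage-to-effort ratio: a dozen lines of generic sampling code integrate functions that would be miserable to attack analytically.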

These tools are so powerful that they won’t just crack a hard nut: they’ll clearcut the whole forest and torch the litter before whipping you up a nice bowl of toasted walnut bits. I’d never felt the same about any tools or formalisms in AI or CS more broadly until recently.

I signed up for the GPT-3 beta immediately after it opened, and quickly tried to throw some vague, thorny problems at it. One of my favorite situations was this: I’d been working on a Bayesian model to predict what type of material or amenities a building might have given some small piece of additional information, like its location or date of construction. From a statistical perspective, the data is 99% missing and the whole thing is dangerously underconstrained. But, with the right prompt engineering, I was getting perfectly reasonable answers! It knew that multistory buildings with pools are usually hotels, and that you won’t be finding any barns in the middle of McMansioned suburbs.
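
The post doesn’t show the actual prompts, but the shape of the trick is easy to sketch: pack the few known facts into a completion-style prompt and let the model fill in the rest. Everything here (the `building_prompt` helper, the wording) is illustrative, not the author’s code, and the model call itself is left out.

```python
def building_prompt(known_facts: dict) -> str:
    """Format a sparse set of building attributes into a completion-style prompt."""
    facts = "\n".join(f"- {key}: {value}" for key, value in known_facts.items())
    return (
        "Given the known facts about a building, infer its most likely type "
        "and amenities. Answer with a short phrase.\n\n"
        f"Known facts:\n{facts}\n\n"
        "Most likely type and amenities:"
    )

prompt = building_prompt({"stories": 12, "has pool": True, "location": "downtown"})
# The resulting string would then be sent to whatever completion endpoint you use.
```

The statistical model needed careful priors for every missing field; the prompt just needs the fields you happen to have.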

I know this sounds trivial and dumb, but getting a reasonable answer everywhere is no small feat. When I realized that GPT3 was already making my bespoke, hand-crafted Bayesian model nearly obsolete, I felt like an understimulated preteen stumbling onto a cache of Red Bull and bottle rockets.

There is definitely hard work involved in using AI tooling. Prompt engineering is becoming its own skillset, with entire subreddits and forums devoted to figuring out how to guide Stable Diffusion into getting that cyberpunk Lord of the Rings panorama to look just right. What I love about this is that, for now, it really feels like an intuitive process with an effort-reward curve that steepens sharply with the amount of time invested. Intuition guides which types of prompts and problems are tractable, how long the possible solutions can be, and the clever ways to formulate a big problem as a series of intermediate steps.
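
That last move, breaking a big problem into intermediate prompts, can be sketched as a tiny chaining loop. This is my illustration, not anything from the post: `ask` stands in for whatever model call you have on hand, and each step’s answer is fed into the next step’s template.

```python
def chain(steps, ask, context=""):
    """Run a sequence of prompt templates, feeding each answer into the next.

    steps   -- list of templates containing a `{context}` placeholder
    ask     -- callable that sends a prompt to a model and returns its reply
    context -- initial input threaded through the chain
    """
    for template in steps:
        prompt = template.format(context=context)
        context = ask(prompt)
    return context

# Demonstration with an echo stub in place of a real model call:
steps = ["Summarize: {context}", "List risks in: {context}"]
echo = lambda prompt: prompt
result = chain(steps, echo, context="raw notes")
```

The structure is trivial; the skill the post describes is choosing the steps so that each intermediate answer is something the model can actually produce.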

This kind of problem solving is a blessed relief from scanning StackOverflow posts. Developing software with Copilot makes programming feel a lot more like blue-sky science than ever before. I’ve lost count of the number of times that I’ve thought to myself “…it would be crazy if this worked, but what the hell?” and pulled something totally workable out of Copilot’s LLM. Testing, in particular, is 900% more fun when you aren’t responsible for every single keystroke. If you asked me to explain to someone else how to do it, I wouldn’t know what to say. It really seems to take some experience and personal curiosity to figure out how to bend AI to your aims. Obviously, with perfect knowledge of the idiosyncrasies of the training corpora, one could probably design optimal prompts in a formulaic way.

I am hopeful that our use of AI goes down a different road: one that favors those who are willing to expend the effort to learn from experience and use these tools to accomplish conventional goals with 100x efficiency. I also want to believe that the types of people who will be most empowered by artificial intelligence, the curious and inquisitive, will also be those who are best positioned to solve big problems in a thoughtful, empathetic way. In this same vein, I hope that AI thus becomes a powerful agent to help level the balance of power between small and large, whether in commerce or science; that the plucky, scrappy, and creative can hold their ground against the wealthy, bloated, and derivative.