AI code assistants are awesome
Appreciation post for the potential of LLMs in writing code
Productivity-enhancing AI code assistants are a big deal right now, with much promise.
Lots of YCombinator companies building these are getting funded, in line with the broader trend of AI-powered startups being founded everywhere. A bubble? Maybe. But there will be differentiation and genuinely useful survivors if/once it bursts. And like most tech bubbles, it will likely push the world along significantly, and the advancements will persist.
This post is a brief collection of my thoughts on these tools, plus one specific use case that impressed me earlier today, although I do try to use GitHub Copilot a lot in my typical coding workflow.
At least at first glance, AI assistants seem like an incredibly freeing technological improvement: freeing humans from unnecessary, repetitive mental labour the same way the industrial revolution freed most of us from physical labour.
Types of AI Code Assistants
From an interface standpoint, two ways of using AI for coding assistance sum them up:

- The conventional prompting and repeated `Ctrl+C`/`Ctrl+V`-ing with browser-based LLM access points like ChatGPT and Claude. There's some process friction in switching between them and an IDE, sure, but their large context windows help, and some models are obviously better than others (through RLHF, fine-tuning, good training data, or whatever the latest methods are). These usually require lots of cycling back and forth and manual edits, as anyone who's used them for these tasks knows.
- The IDE-based LLMs. Less friction, they can use large base models as well, and they're just more seamless overall. They're also agentic: I like the fact that many inject/edit code directly into your open files, which beats a purely chat-based interface. A lot of these seem to be VSCode forks these days (like Cursor), or even forks of forks of VSCode (cough, like PearAI, a fork of Continue.dev):
(^featured: infinite money glitch from r/programmerhumor)
Something I’ve heard (and which makes sense) is to use the former kind for top-level planning (like architecture and framework choices) and for learning (“Give me a review of REST API framework usage conventions”), and the latter for when you’re actually building things out.
Copilot-generated recursive tree-depth-traversing dict-briefening functions
I’ve actually only used GitHub Copilot so far, on VSCode. I’m definitely interested in experimenting with the other IDE-based solutions later though.
I’m currently working on and off on turning an earlier hackathon project, BOPE-GPT, into a usable fullstack app (the specific code for this example is in this currently open PR here).
Some large data structures get created in it during execution.
I wanted to log the important info in them, but logging them in their entirety would just swamp the log files and take forever to scroll through. The entire data structures are stored in MongoDB anyway if I want to check them out, in a nested document form that’s very convenient to browse on MongoDB Atlas:
No, the intent of the logs is just to make sure the code is working at a high level and returning what it should. And a lot of what’s returned when the `/next_iteration` backend API endpoint is hit is that large Gaussian Process visualization data. After serializing this data and converting it from a Pydantic object to dict form (so it can be converted to JSON, sent back to the frontend, and stored in MongoDB), logging all of it leads to this:
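For context, the serialization step works roughly like this. This is a minimal sketch using a stdlib dataclass as a stand-in for the actual Pydantic model (the field names are invented for illustration; in real Pydantic v2 code you'd call `model_dump()` on the model instead of `asdict()`):

```python
from dataclasses import dataclass, asdict
import json

# Stand-in for the (much larger) Pydantic visualization model;
# the real BOPE-GPT models aren't shown in the post, so these
# field names are illustrative only.
@dataclass
class VisualizationData:
    grid_points: list
    predicted_means: list

viz = VisualizationData(
    grid_points=[[0.1, 0.2], [0.3, 0.4]],
    predicted_means=[1.5, 2.7],
)

viz_dict = asdict(viz)          # plain dict, like Pydantic's model_dump()
payload = json.dumps(viz_dict)  # JSON string for the frontend / MongoDB insert
```

Logging `payload` (or `viz_dict`) directly is what floods the logs once the real data grows to thousands of grid points.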
I almost decided not to log this at all. But I asked Copilot for ideas on making it briefer. A couple of back-and-forth prompt cycles later, I had two recursion-based Python functions, one for summarizing objects and one for calculating their depth, up and running, with node data types and depths displayed:
That was fast. And it works. Wow?
It feels reminiscent of DSA material, using the concepts of recursion and depth-first tree traversal. It's not something most human developers could come up with quickly though, or would even want to as a high-priority task, since this isn't part of the user experience or flow; it's just a logger for developer eyes.
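Functions in this spirit might look something like the sketch below. This is not the actual Copilot-generated code from the PR, just my minimal reconstruction of the idea: one depth-first recursive function to measure nesting depth, and one to shrink big collections into type-and-size placeholders:

```python
def max_depth(obj, depth=0):
    """Depth-first traversal: deepest nesting level of a dict/list tree."""
    if isinstance(obj, dict):
        return max((max_depth(v, depth + 1) for v in obj.values()), default=depth)
    if isinstance(obj, list):
        return max((max_depth(v, depth + 1) for v in obj), default=depth)
    return depth  # leaf node

def summarize(obj, depth=0, max_items=3):
    """Recursively shrink a nested structure: keep the first few entries,
    replace leaves with their type name and depth."""
    if isinstance(obj, dict):
        return {k: summarize(v, depth + 1, max_items)
                for k, v in list(obj.items())[:max_items]}
    if isinstance(obj, list):
        head = [summarize(v, depth + 1, max_items) for v in obj[:max_items]]
        if len(obj) > max_items:
            head.append(f"... ({len(obj)} items total)")
        return head
    return f"{type(obj).__name__} @ depth {depth}"

# Toy stand-in for the large GP visualization dict:
data = {"gp": {"means": list(range(1000)),
               "cov": [[0.0] * 50 for _ in range(50)]}}
print(max_depth(data))   # 4: data -> gp -> cov -> row -> float
print(summarize(data))   # a few placeholder lines instead of thousands of values
```

The summarizer trades completeness for readability: you still see the shape and types of the structure in the logs without the actual payload.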
The logs after using these functions:
I added a snippet to calculate how long the summarizing function takes to run, to check whether the time consumed was worth it:
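Something along these lines (again my own reconstruction rather than the exact snippet, with a trivial stand-in summarizer): `time.perf_counter()` around the summarizing call, with the elapsed time sent to the logger:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def summarize_for_logging(obj):
    # stand-in for the Copilot-generated summarizer described above
    return {k: f"{type(v).__name__} ({len(v)} items)" for k, v in obj.items()}

big_dict = {"means": list(range(100_000)), "labels": ["x"] * 100_000}

start = time.perf_counter()
summary = summarize_for_logging(big_dict)
elapsed = time.perf_counter() - start

logger.info("Summary: %s (summarizing took %.4f s)", summary, elapsed)
```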
This one instance took 0.26 seconds. That's not a lot compared to the 38.30 seconds the entire iteration (plus visualization) of the BOPE process took. The visualization is the time-consuming part: it requires a LOT of model inference runs to build up a visualizable map, scaling exponentially with the number of input dimensions (4 in this case). Seems fine then.
The Future
Anyway, the impressive point is that I didn't come up with the higher-level logic of what exactly to do in these functions, which is something you usually have to do with these AI code tools. That's what people seem to talk about in online discourse on using these well: you typically describe the higher-level logic, rely on your AI code assistant to fill in the gaps, and then look over the result to see if it seems sensible. I'm not using more complex logging or performance monitoring tools here either, though there are open frameworks out there to help with that.
I just gave it a vague-ish description of the output I wanted, without knowing what a solution could look like, and it came up with both the functional logic and the implementation on its own. Maybe this is because something similar exists in its training data and it just knew what to do; these AI tools definitely work better with the familiar than with the unfamiliar or novel. Still, it inspired me enough to make my second ever blog post (this one).
In general, abstracting away excessively technical minutiae accelerates human potential, and in the long run it's always a good thing. Frameworks, design abstractions, and no-code tools were an improvement over custom code for every application; higher-level programming languages were an improvement over machine-level code; and software-as-computer-instruction in general was an improvement over the punched-tape instructions fed to mainframes.
Abstraction is the psychotechnology that lets us build on the shoulders of giants. The proliferation of AI assistants will enable more of this and be amazing for humanity. It'll be a while till we get there though. Can't wait.