7 Comments
Bill Klein

Thanks for these posts. As my small team gets deeper into the exploration of the ways in which LLM-aided coding is useful (and the ways in which its utility is still TBD), I'm reading these posts partially to challenge my own assumptions about what is possible and how to best work with the tools.

My first instinct is frequently something like, "bah! f**k that! that's more work than just doing it myself!" Of course that depends on the task size (and the acceptability of the result). It's interesting that you make a statement like, "It’s possible that your specific task is too complex for the method in this post," because my thinking is that the task needs to be sufficiently large in order for this method to be worth one's while... I guess, as most people seem to recognize, one of the keys to using these tools productively is identifying a task which is sufficiently complex, but still actually specifiable, and realistically doable by current LLM-coding-tools in an acceptable manner.

Thanks again.

Varun Godbole

Thanks for reading my posts and the kind words!

Yup, there are definitely a bunch of effort/reward trade-offs when prompting.

Re: coding tools and what's possible - I'd highly recommend checking out some of what Dex Horthy from HumanLayer has posted about this. For example, this blog post - https://github.com/humanlayer/advanced-context-engineering-for-coding-agents/blob/main/ace-fca.md. I like how generally low-BS Dex's posting is.

It's definitely possible to engage in a workflow where you spend more and more time reviewing PRs as the agent gets better at generating them. But there's no free lunch here w.r.t. the work it takes to onboard these agents. And there are definitely trade-offs relative to the immediate priorities of your team within a given quarter.

I'm happy to chat with you and your team and offer more targeted recommendations if that'd be helpful.

Vamsee Jasti

Love the insight about spelling out implicit assumptions.

Thanks for writing these!

Varun Godbole

Thanks for reading these posts, and the words of encouragement!!

Please don't hesitate to hit me up if you run into trouble with this process or have any questions about LLMs. Each of these posts is usually downstream of someone asking me questions.

Vamsee Jasti

I'm not exaggerating when I say that I'd pay for this newsletter. Love the focus on giving language and meaning to what's top of mind for many product builders.

Some top of mind questions for me:

1. For product builders, what are the practical tools in the toolbox to steer LLMs to do the right thing? E.g., when should you fine-tune vs. when is prompting enough? What other tools are available if you don't have the ability to influence the models during pre-training?

2. The state of the union with LLMs as it relates to foundation models' generalization vs. specialization. Why don't we see more domain-specific models? Intuitively, it feels like a finance-specific model, say, built on top of world knowledge would outperform a generically trained model. On the other hand, we do see some specialization with Nano Banana, and some product-specific wrappers like Claude Code on top of foundation models.

Varun Godbole

I'll also say that your questions are deeply contextual. I'm happy to Zoom about your specific context if that'd be helpful.

Varun Godbole

Haha thanks! I don't want to charge for this newsletter because then it'll feel too much like a job. The primary goal for this newsletter is to give me an outlet while my body recovers.

These are all great questions! Will add them to my essay ideas list!