Who Am I?
My name is Varun Godbole. I spent the last decade at Google doing deep learning research across Brain and DeepMind, spanning the full range of applied and research projects, and I was a core member of Gemini from its inception. I quit in November 2024 to go on sabbatical for a number of reasons.
Simply put, I was a bit burned out and wanted a break. But I'd also been working on LLMs before ChatGPT made them cool. Watching evals improve exponentially across the board forced me to think through the broader nth-order consequences of knowledge work (especially coding) becoming too cheap to meter.
I started wondering what it’d mean to build an organization from the ground-up that was truly “AI-first”.
The central question I’ve been pondering
It turns out I wasn't alone in this. Many organizations are seeking to create value with AI, but struggling to do so.
The “ideal” would be for organizations to be so AI-first that if GPT-9 suddenly dropped, they’d be able to fluidly re-organize to create value in proportion to the underlying model improvements.
But this isn’t what we see. Why not?
This has been the central question of my thinking for the last few years.
What set of factors would need to be true for this to occur? And what can we do about it?
Where this question has taken me
I've been sitting with this central question for a couple of years. I've found provisional answers that point to territory most AI discourse hasn't touched yet. That's not because others haven't been looking, but because the answers required drawing from unusual sources.
The most compelling answers came from developmental psychology, 4E cognitive science, and the world's various wisdom traditions. I doggedly followed the questions where they led. AI forces organizations to confront far more ambiguity and uncertainty (i.e., nebulosity) than they were originally built to handle, and to become more AI-first, organizations need to substantially increase their capacity for engaging with that nebulosity. Each of these disparate fields has something valuable to say about how.
I've been learning heavily from people like Prof. John Vervaeke, Charlie Awbury, David Chapman, and Brian Whetten. Their work has heavily shaped my research.
I’ve started mapping out the terrain piece by piece. For example:
Why do individuals and groups find creating value with AI uncomfortable?
The next few essays will go deeper into developmental psychology and start articulating clearer prescriptions for organizations.
I write for people who sense that AI represents something profound, but find most AI discourse either too hype-driven or too dismissive. The question of how to become more AI-first and create substantial real-world value runs deep. This newsletter is for you if you'd like to sit with these questions alongside me, rather than reactively rushing to answers.
Please subscribe if you’d like to think about AI differently than many of your peers.
