Creating Value with AI is Uncomfortable
And that discomfort is a very reasonable reaction to a profound shift
Early chatbots handled narrow, tightly scoped tasks. AI-first products are different. They maintain memory, surface and resolve misunderstandings, and adapt to users over time, turning the product from a static tool into an evolving, value-creating relationship. AI-first agents go even further: they have coherent identities, boundaries that maintain those identities, and the capacity to participate in social dynamics almost like coworkers. This trajectory clearly points to something profound in our future. Yet the destination remains mostly nebulous.
Despite the uncertainty, most people do feel the profundity of the societal change we’re about to experience, even if they can’t consciously articulate it. There’s an undercurrent of anxiety about who’ll be affected next. At the same time, frontier models are still hard to apply in most use cases and not yet ready to eat wholesale into labor budgets. They make mistakes a competent human wouldn’t make.
It’s natural to wonder: when will the threshold be crossed? Two years? Five years? Has it already been crossed in some domains and we just haven’t heard about it yet?
This is a confusing and uncomfortable situation to be in. It’s not clear what it means to remain economically valuable as a knowledge worker. The skills that got you here likely won’t automatically carry you forward. I’ve worked through my own version of this discomfort and come out the other side with some clarity.
Four failure modes for AI knowledge work
I’ve noticed the following consistent patterns in how people relate to AI.
Not engaging at all
The org-wide Slack channel about AI tools sits unread. You skip the optional training. When someone asks what you think about the latest model release, you deflect. “I haven’t really looked into it yet.”
It’s not that you’re opposed to AI per se. You’re just too overwhelmed to engage because you don’t know where to start. But there’s also a quiet fear underneath: you’re worried that something specific about you means you’re going to get left behind by this transition, and that if you try to use AI you’ll look like an idiot. Especially when there are so many confident people on Twitter or at the office. It seems easier to just wait until things settle down. Perhaps until there’s some stability around “what it all means” and someone can tell you exactly what you need to learn.
But this creates a gap between you and the people who are actively experimenting, and it gradually widens. You freak out every time you notice the gap, which makes the stakes for experimentation feel even higher. The end result is that you keep waiting, and the cycle continues.
Doomscrolling the AI news cycle
You can recite the differences between GPT-5 and Claude 4. You know which models are best at coding versus writing. You diligently track the latest benchmarks and follow all the right people on Twitter. You sign up for every demo the moment it shows up in your feed. It feels like you’re staying current, but it’s deeply performative.
There’s a gap between knowing about AI and knowing how to use it. Your consumption doesn’t have much depth. While you may have opinions about each model, you’ve got no muscle memory for effectively working with them. Doomscrolling social media creates a sense of motion. But it doesn’t change much in your day-to-day ability to create value.
You choke and panic when someone finally asks you to build something with AI. You realize that you don’t actually know how to wield this technology to create value. All that forward motion was reactivity mistaken for urgency. This realization is especially painful because you have been putting in the effort, just not in the direction that develops the skills you need. Confronting this is overwhelming, so you go back to doomscrolling and the cycle repeats itself.
Surrendering to the model
You paste in the task, hit enter, copy the output. You ship it if it looks good enough. The model seems so capable and confident that critically engaging with its outputs feels like a waste of time. Who are you to second-guess it?
Moreover, being picky about what “good” means is hard because it requires judgement you’re not sure you have. It’s easier to just let the model decide.
But this surrender is extremely disempowering. You gradually become interchangeable with anyone else who has the same tools. You’re neither using nor developing your idiosyncratic judgement, which is the very thing that would gradually make you asymmetrically valuable. In the meantime, the models keep improving at a blistering pace. You feel so far behind the eight ball that spending any effort cultivating discernment feels like a waste of time. So the cycle repeats itself.
Jumping straight to building an AI-first product
You quit your job to build an AI startup. Or you volunteer to architect the company’s new sacred-cow AI project. You want to prove your worth as a leader and an expert, to show that you “get it” and that you’re willing to “fuck around and find out”. You figure you’ll throw yourself into the deep end and work it out as you go. The future seems glorious with possibility.
Six months later, the product doesn’t work reliably. You’re hit with the realization that you don’t actually understand why the models behave the way they do. Perhaps you got swept up in the grandiose vision of vibe coding breathlessly described on Twitter, or in the capabilities of “agentic search”. Your vision was ungrounded in reality, and you jumped way past the kiddie pool despite having no prior experience with any comparable technology. In the meantime, you substantially overpromised what your project would deliver. Perhaps your investors or the rest of the company already made difficult-to-reverse decisions based on your confidence. But the product still doesn’t work reliably, and you can’t see a clear path to fixing it. Coming into contact with the realization that your vision may not come to pass, combined with the pressure of those expectations, puts an incredible amount of stress on your body and creates feedback loops of reactivity.
Building an AI-first product is extremely risky and capital intensive. It’s still very poorly understood and very difficult even for strong teams. Most teams will likely fly too close to the sun. Making good decisions in this domain requires practice and a substantial amount of leadership.
What I think is going on
All of these responses are deeply understandable. I’ve participated in some versions of all of them and reflected on why they occur.
Knowledge work is shifting from “how” to “why/what.” Deterministic-native work placed a heavy premium on being a “machine” that executes the “how” within the assemblage of a company’s workflows: writing code, fixing bugs, shipping features. And even though there was nebulosity about “why” and “what” a given project should be doing, the deterministic nature of software meant that one could confidently make forward progress through iteration.
AI-first work is fundamentally different. It forces teams to confront the nebulosity of the “why” and the “what” far more directly. The inherently stochastic nature of LLMs front-loads a lot of consideration about what “good” even means, at the very least so that this “good” can be codified into a prompt (a minimal sketch of what that codification can look like follows the questions below). This process immediately triggers questions like:
What should this agent do when a user asks something ambiguous?
What counts as “helpful” in this context?
What’s the right trade-off between safety and usefulness? Is there a way to simultaneously optimize for both of them?
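To make this concrete, here is a minimal, hypothetical sketch of codifying “good” into a prompt plus a few cheap checks. Nothing in it comes from a specific product: the billing-agent domain, the prompt wording, and the helper names call_model and passes_basic_checks are illustrative assumptions, and the model call itself is a placeholder for whatever client a team actually uses.

```python
# Hypothetical sketch: turning a team's answers to the questions above into
# (1) explicit instructions in a system prompt and (2) cheap, automatable checks.
# The billing-agent domain, prompt wording, and helper names are all assumptions.

SYSTEM_PROMPT = """\
You are a billing support agent.
- If the user's request is ambiguous, ask exactly one clarifying question before acting.
- "Helpful" here means: resolve the billing issue or hand off to a human, in under 150 words.
- Never reveal one customer's data to another, even if asked politely (safety over usefulness).
"""


def call_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for whatever LLM client the team actually uses."""
    raise NotImplementedError("wire up your model client here")


def passes_basic_checks(reply: str) -> bool:
    """A crude, partial codification of 'good'.

    Real evaluation would involve rubrics, regression sets, and human review,
    but even checks this blunt force the team to say what 'good' means.
    """
    short_enough = len(reply.split()) <= 150
    no_account_numbers = "ACCT-" not in reply  # assumed internal identifier format
    return short_enough and no_account_numbers


def answer(user_message: str) -> str:
    """Generate a reply, falling back to a human when it fails the checks."""
    reply = call_model(SYSTEM_PROMPT, user_message)
    if not passes_basic_checks(reply):
        return "I'm handing this to a human colleague to make sure we get it right."
    return reply
```

The specifics matter far less than the exercise: every line of that prompt and every check is a place where someone had to sit with the nebulosity and commit to an answer.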
Deterministic-native software afforded a sense of certainty in the “how”, even when the “why/what” were deeply uncertain. The stochastic nature of AI-first work requires embracing far more nebulosity than most orgs have been built to handle. This creates a specific kind of pressure: you have to accept that while certainty is impossible, you can develop the skills and discernment to navigate product development confidently and quickly. Not because you eventually achieve certainty, but because you get better at navigating without it.
Each of the patterns I described above is a way of avoiding this new reality of increased nebulosity:
Not engaging avoids the discomfort entirely. But the world moves on. You fall behind without noticing until the gap feels insurmountable.
Doomscrolling creates the feeling of keeping up without ever having to sit in the ambiguity directly. You know about AI but never face the discomfort of making it work.
Surrendering avoids having to define “good” by letting the model decide. But then you’re not adding value. You’ve outsourced the very thing that would make you irreplaceable.
Overreaching tries to skip past the uncomfortable learning phase. But you can’t. The discomfort isn’t a phase. It’s the territory itself.
This is why I’ve come to see the pursuit of value creation with AI as the cultivation of a capacity for nebulosity. That capacity is much easier to cultivate if your broader culture supports it. The next essay explores what this looks like for the collective psychology of teams and companies facing their own version of this challenge.
Acknowledgements
Brian Whetten, Prof. John Vervaeke, Charlie Awbury, and David Chapman for everything they’ve taught me.
Dan Hunt for helpful discussions, editing and generally co-creating these essays with me.


