Love this, very helpful. In three different aspirationally AI-first products I'm involved in, some version of the challenge is how to make the UX not annoying. I.e., good deterministic products just work, while these feedback loops tend to feel more like you're onboarding a new assistant: investing now and hoping for downstream benefits.
Thanks! More posts on this incoming!!
Really appreciate the product framework lens. I read a recent research study finding that scribes don't necessarily lighten the workload for users; they just create a different kind of cleanup.
The KPI is really about "meaningful automation", which is exactly the issue you raised here. Until products are designed to build long-term memory, ask better clarifying questions, and genuinely learn how a specific person works, we're going to keep getting shallow, slightly-off output. Always appreciate such clear thinking from y'all.
Ohh interesting! Can you please link to the research study?
This is brilliant and profound. As a technical question, does that mean an AI-first product has to keep building an ever-larger context window about the user and feeding it into each query to the LLM? (Rough toy sketch of what I mean below.) Value conversations for the win.
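To make the question concrete, here's a toy sketch of the naive pattern I'm describing, where the product just keeps appending everything it learns about the user to every prompt. The class and the `call_llm` helper are made-up names, not any real product's API:

```python
# Toy sketch of the "ever-growing context" pattern (hypothetical names throughout).

class NaiveUserMemory:
    def __init__(self) -> None:
        self.facts: list[str] = []  # everything the product has learned about the user

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def build_prompt(self, query: str) -> str:
        # Every query carries the full accumulated memory, so the context
        # grows without bound as the product learns more about the user.
        memory_block = "\n".join(f"- {fact}" for fact in self.facts)
        return f"What we know about this user:\n{memory_block}\n\nUser query: {query}"


def call_llm(prompt: str) -> str:
    # Stand-in for whatever model call the product actually makes.
    return f"(model response to a {len(prompt)}-character prompt)"


memory = NaiveUserMemory()
memory.remember("Prefers bullet-point summaries")
memory.remember("Works on a three-person design team")
print(call_llm(memory.build_prompt("Draft the weekly status update")))
```

Is that roughly the shape you'd expect, or is the answer more about retrieving only the relevant slice of memory per query rather than feeding the whole thing in every time?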