
Jamie Quint, who built Notion's data stack in 2020, published a how-to guide this week on building an AI data agent that can "5x+ your data team bandwidth." It's technically sharp, useful in places, and ends with a section literally titled "Feel the AGI."
The architecture is the real thing here. Replacing the semantic layer — that brittle, hand-maintained mapping between business concepts and SQL that every data team dreads — with a context sub-agent that reads your actual DBT models on demand is a clean idea. You stop trying to anticipate every question in advance and start letting the agent investigate the codebase per query. The "quirks" store, where user corrections get extracted into reusable institutional knowledge, is the best idea in the post. That's the stuff that currently lives in one analyst's head and walks out the door when they quit.
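To make the quirks idea concrete, here is a minimal sketch of what a corrections store might look like. This is my illustration, not Quint's implementation; all names (`QuirkStore`, `record_correction`, the keyword matching) are hypothetical, and a production version would distill raw correction threads with an LLM and retrieve by embedding similarity rather than substring match.

```python
from dataclasses import dataclass, field

@dataclass
class Quirk:
    """One piece of institutional knowledge extracted from a user correction."""
    topic: str  # e.g. "revenue", "churn" -- what the rule applies to
    note: str   # the correction, restated as a reusable rule

@dataclass
class QuirkStore:
    """Accumulates corrections so future queries inherit them automatically."""
    quirks: list[Quirk] = field(default_factory=list)

    def record_correction(self, topic: str, note: str) -> None:
        # A real system would have an LLM generalise the raw Slack thread
        # into a rule; here we store the note verbatim for illustration.
        self.quirks.append(Quirk(topic=topic, note=note))

    def relevant(self, question: str) -> list[str]:
        # Naive keyword match for the sketch; production systems would
        # use embedding retrieval so paraphrases still hit.
        q = question.lower()
        return [k.note for k in self.quirks if k.topic in q]

store = QuirkStore()
store.record_correction(
    "revenue", "Exclude internal test accounts (account_type = 'test')."
)
store.record_correction(
    "churn", "Churn is measured on paid workspaces only, not free tier."
)

# Quirks matching the question get injected into the agent's context.
context = store.relevant("Why did revenue dip in March?")
```

The point of the pattern is the write path, not the read path: every time a human corrects the agent, the correction becomes a durable asset instead of evaporating in a Slack thread.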
The self-scoring SQL loop is also right. Agents confidently deliver wrong answers because the query ran — it just answered a slightly different question than the one asked. Scoring structural correctness, execution reliability, and context alignment separately, then building a deterministic gap brief rather than asking the model to assess its own vibes, is how you catch that before it reaches a VP who trusts the number.
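The shape of that separation is worth spelling out. The sketch below is an assumed reconstruction of the idea, not Quint's code: score each dimension independently, then compile failures into a deterministic gap brief that can be fed back to the agent for a retry. The names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QueryScores:
    structural: float  # does the SQL parse and reference real models/columns?
    execution: float   # did it run cleanly, without errors or truncation?
    alignment: float   # does the result answer the question actually asked?

def gap_brief(scores: QueryScores, threshold: float = 0.8) -> list[str]:
    """Compile a deterministic list of concrete failures.

    Each check is a plain threshold comparison -- no model is asked
    to assess its own output, so the brief is reproducible.
    """
    gaps = []
    if scores.structural < threshold:
        gaps.append("structural: query references columns or joins "
                    "not present in the DBT models")
    if scores.execution < threshold:
        gaps.append("execution: query errored, timed out, or returned "
                    "a suspicious row count")
    if scores.alignment < threshold:
        gaps.append("alignment: result answers a different question "
                    "than the one asked")
    return gaps

# The failure mode from the paragraph above: the query ran perfectly
# but answered the wrong question. Scoring dimensions separately
# catches it; a single pass/fail on execution would not.
scores = QueryScores(structural=0.95, execution=1.0, alignment=0.4)
brief = gap_brief(scores)
```

An empty brief means the answer ships; a non-empty one loops back to the agent with specific, named gaps to close before anything reaches the VP.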
So the signal is real. Here's where it falls apart.
This entire architecture assumes you already have clean, well-annotated DBT models, a legible codebase an agent can actually navigate, data governance mature enough that dynamic SQL generation doesn't create a compliance incident, and a security team that's already had the conversation about what an agent is allowed to query. Quint built the stack at Notion. He's describing what's possible when the foundation is already solid. Most organisations trying to implement this are working from a foundation that is not solid.
"Three weeks end-to-end" is the timeline for someone who knows exactly what they're building and doesn't have to route a request through anyone's legal or infosec team first.
The closing claim — that this collapsed a four-to-five analyst hiring plan down to one — deserves more than a sentence. We don't know the company, the question volume, the failure rate, or what that one remaining analyst actually does. (My guess: fixes what the agent confidently got wrong.) "Sales and Customer Success can now self-serve complex data questions in Slack" is doing a lot of work in that paragraph. Complex data questions require knowing what you're actually asking — what a metric means, what filters apply, whether the comparison is even valid. Moving the translation layer from human to agent doesn't eliminate the need for analytical judgement. It just makes the errors quieter.
The architecture is legitimate. The timeline is a best case at a well-run company. The analyst displacement math is a projection wearing the clothes of an outcome.
The "Feel the AGI" sign-off is Quint knowing exactly what he's doing. He's not wrong that this is pointing somewhere real. He's just pointing from a base camp that most teams haven't reached yet.