Two analysts, one question
A few years ago I watched two analysts answer the same question.
The question was simple: "How many customers churned last quarter?" The junior answered in four minutes. The senior took forty.
The junior's number was wrong. It looked right — round, plausible, the kind of number a CEO nods at — but a LEFT JOIN had silently dropped about 12% of the cancelled accounts because their subscription_id had been reissued during a billing migration the previous year.
The senior's number was right. Not because she wrote better SQL. Her query was, honestly, less elegant than the junior's. She was right because somewhere around minute six she stopped writing and started squinting. She ran a COUNT(*) before the join and a COUNT(*) after, noticed the drop, and spent the next half hour figuring out why.
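To make that failure mode concrete, here's a minimal reconstruction in Python with sqlite3. The schema and numbers are invented for illustration, not from the real incident. The subtlety worth naming: a LEFT JOIN by itself can't drop left-side rows; it's the WHERE clause on a right-table column that quietly turns it into an inner join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema: cancelled accounts, and a subscriptions table
# where one subscription_id was reissued during a billing migration,
# so the old id no longer matches anything.
cur.executescript("""
CREATE TABLE cancelled_accounts (account_id INTEGER, subscription_id INTEGER);
CREATE TABLE subscriptions (subscription_id INTEGER, plan TEXT);

INSERT INTO cancelled_accounts VALUES (1, 101), (2, 102), (3, 103), (4, 999);
-- subscription 999 was reissued; there is no matching row anymore
INSERT INTO subscriptions VALUES (101, 'pro'), (102, 'basic'), (103, 'pro');
""")

# The junior's query: the WHERE clause on a right-table column silently
# converts the LEFT JOIN into an inner join, dropping account 4.
wrong = cur.execute("""
    SELECT COUNT(*) FROM cancelled_accounts a
    LEFT JOIN subscriptions s ON a.subscription_id = s.subscription_id
    WHERE s.plan IS NOT NULL
""").fetchone()[0]

# The senior's check: a COUNT(*) before the join to compare against.
before = cur.execute("SELECT COUNT(*) FROM cancelled_accounts").fetchone()[0]

print(before, wrong)  # 4 vs 3 -- the drop is the tell
```

Nothing about the wrong query fails. It parses, runs, and returns a plausible number. Only the before/after comparison exposes it.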
I've thought about that gap a lot since, and I've come to believe the thing we call "senior analyst judgment" is almost entirely one specific skill: how fast you get suspicious of your own output.
It's not SQL fluency. Both of those analysts knew SQL. It's not domain knowledge — the junior had been on the team longer. It's a reflex. The senior didn't decide to be careful. She couldn't help it. The number came back and something in her gut said that's too clean, and she went looking for the lie before anyone asked her to.
What speed of suspicion actually looks like
When I started paying attention to it, I noticed it shows up as a handful of small, almost unconscious habits:
Row counts before and after every join. Seniors do this without thinking. Juniors do it after they've shipped a wrong answer once and gotten yelled at. The reflex of "I just joined two tables — did I lose rows? Did I duplicate rows?" is the single highest-leverage habit in analytics, and most analysts don't have it until something breaks.
Sanity-checking against a number they already know. If revenue last month was $487K and this month's query says $2.1M, the senior doesn't celebrate. She checks. The junior writes it up in the dashboard. Knowing one or two anchor numbers cold — total customers, last month's revenue, average order value — is what lets you spot a query that's silently broken.
Asking "would this be true even if the data were perfect?" before "is the data perfect?" This is the deeper one. Sometimes the SQL is fine and the data is fine and the answer is still wrong because the question was wrong. "How many customers churned" assumes you've agreed on what a customer is and what churn means. Seniors push back on the question. Juniors answer it.
Knowing which tables lie. Every warehouse has tables that look authoritative and aren't. The users table that includes test accounts. The orders table that includes refunded orders unless you filter for status = 'completed'. The events table where late-arriving data shows up three days later and silently changes yesterday's number. Seniors carry a mental list of which tables in your specific warehouse are trustworthy and which aren't. Juniors trust the schema.
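The first two habits are mechanical enough to sketch as code. Here's a rough version in Python against sqlite3; the helper names, the demo tables, and the 50% tolerance are my own inventions, a sketch of the reflex rather than any standard tooling.

```python
import sqlite3

def row_count(cur, table_or_subquery):
    """COUNT(*) for a table name or a parenthesized subquery."""
    return cur.execute(f"SELECT COUNT(*) FROM {table_or_subquery}").fetchone()[0]

def check_join(cur, left_table, joined_query):
    """Habit 1: row counts before and after a join.
    A drop means lost rows; a rise means a fan-out duplicating them."""
    before = row_count(cur, left_table)
    after = row_count(cur, f"({joined_query})")
    if after != before:
        print(f"suspicious: {left_table} had {before} rows, join produced {after}")
    return before, after

def check_anchor(value, anchor, tolerance=0.5):
    """Habit 2: compare a result against a number you already know cold.
    Flags anything more than `tolerance` (here 50%) away from the anchor."""
    off = abs(value - anchor) / anchor
    if off > tolerance:
        print(f"suspicious: {value:,} is {off:.0%} away from the anchor {anchor:,}")
    return off <= tolerance

# Demo of habit 1: customer 10 appears twice, so the join fans out.
cur = sqlite3.connect(":memory:").cursor()
cur.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
CREATE TABLE customers (customer_id INTEGER, region TEXT);
INSERT INTO orders VALUES (1, 10), (2, 10), (3, 20);
INSERT INTO customers VALUES (10, 'EU'), (10, 'US'), (20, 'EU');
""")
before, after = check_join(cur, "orders",
    "SELECT * FROM orders o JOIN customers c ON o.customer_id = c.customer_id")

# Demo of habit 2: last month's revenue was ~$487K; the new query says $2.1M.
plausible = check_anchor(2_100_000, 487_000)
```

The point isn't the helpers themselves; it's that the senior runs some version of these in her head on every query, and the junior doesn't.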
None of this is taught. You learn it by being wrong in public and remembering the feeling.
Why this matters more now than it did five years ago
For most of analytics' history, the bottleneck was writing the query. SQL is not a hard language, but it's tedious, and a 200-line CTE with seven joins takes real time to construct. Seniors were faster at writing them, so seniors shipped more.
That's over. With AI tools — and I include the one I'm building, but also Hex's Magic, ChatGPT with a schema dump, Cursor pointed at a dbt project, whatever — the time-to-first-query has collapsed to seconds. Anyone on your team can produce SQL that looks correct in less time than it takes to read this paragraph.
Time-to-first-query went to zero. Time-to-suspicion didn't.
That's the whole shift, and I don't think most data teams have absorbed what it means yet. The bottleneck moved. It used to be: can this person write the query? Now it's: can this person tell whether the query they just got back is lying to them?
If you're a Head of Data and your team is shipping faster than ever and your stakeholders are happier than ever, I'd ask you a small uncomfortable question: are they shipping faster because they're better, or because they've stopped getting suspicious?
What to do about it
Two things, mostly.
First, change how you interview. Stop testing SQL syntax. Everybody can write SQL now; the AI does it for them in the interview, the same way it'll do it for them on the job. Test instead whether someone can spot a wrong-looking number. Hand a candidate a query and a result set and ask "is this right?" The good ones will start poking at it within thirty seconds. The bad ones will say yes because the SQL parses.
Second, change how you measure analyst output. "Queries shipped" was a dumb metric even when SQL was hard. It's a catastrophic metric now. The valuable analyst on a modern data team isn't the one shipping the most queries — the AI is shipping queries. The valuable analyst is the one catching the queries that are wrong.
Measure that, if you can figure out how to. I don't have a clean answer. But "bad analyses caught before they shipped" is the right shape of the metric, even if it's hard to instrument. Anything that rewards speed-of-shipping in 2026 is rewarding the wrong thing.
The uncomfortable version of this
The fastest analyst on your team is probably the one shipping the most wrong answers.
If that sentence makes you flinch, good. Go check.