Agents Don't Fail Where You Expect
We feared hallucination. The most common failure was much simpler: unclear instructions. Most AI problems are actually workflow problems.
We initially feared hallucination would be the biggest issue. Wrong facts, invented citations, plausible-sounding nonsense. We prepared for that.
Instead, the most common failure mode was much simpler: unclear instructions. When context was incomplete or ambiguous, agents filled the gaps with guesses. Sometimes those guesses were fine. Often they weren't. The agent wasn't "hallucinating" in the sense of making things up—it was inferring, and inferring wrong because we hadn't given it enough to work with.
Once we improved briefs and structured prompts—clear constraints, explicit context, defined outputs—failure rates dropped significantly. The same agent, with better inputs, behaved far more predictably.
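As a rough illustration of what "structured prompts" means in practice, here is a minimal sketch of a brief builder. The function name, section headings, and example content are all hypothetical, not the tooling described above; the point is only that each section (task, context, constraints, output) is stated explicitly so the agent has less to infer.

```python
def build_brief(task, context, constraints, output_format):
    """Assemble a brief with explicit sections so the agent guesses less.

    Hypothetical sketch: names and section layout are illustrative only.
    """
    sections = [
        "## Task\n" + task,
        # Explicit context closes the gaps the agent would otherwise fill.
        "## Context\n" + "\n".join("- " + c for c in context),
        # Clear constraints bound the space of acceptable answers.
        "## Constraints\n" + "\n".join("- " + c for c in constraints),
        # A defined output shape makes success checkable.
        "## Output\n" + output_format,
    ]
    return "\n\n".join(sections)

brief = build_brief(
    task="Summarize the incident report for the engineering team.",
    context=[
        "Audience: engineers already familiar with the service.",
        "Source: the attached postmortem document only.",
    ],
    constraints=[
        "Under 200 words.",
        "No speculation beyond what the report states.",
    ],
    output_format="Three bullet points: cause, impact, remediation.",
)
print(brief)
```

The structure matters more than the exact headings: any format that forces you to write down context and constraints before running the agent tends to surface the ambiguity that would otherwise become a wrong guess.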
The insight: most AI problems are actually workflow problems.