When AI gets confident: design for honesty, not illusion
When the AI sounds sure, and still gets it wrong
We’ve all seen it. A chatbot answers with total confidence, and it’s completely wrong. That’s not a bug; it’s a hallucination - an AI tool making up something plausible but false.
As Nielsen Norman Group puts it, these systems “sound right but aren’t.” The problem isn’t just the inaccuracy; it’s the confidence.
And when confidence meets design polish, users stop questioning it.
That’s when it becomes a product problem, not a technical one. Because hallucinations don’t just mislead users; they quietly chip away at trust, and trust is the one currency you can’t rebuild overnight.
Why product teams can’t ignore hallucinations
Hallucinations happen because AI models don’t know facts; they predict patterns. As IBM explains, “they’re not thinking; they’re completing.”
Even the best models still fail in specific contexts. A Stanford study found hallucination rates above 60% in legal reasoning tasks, which should make every product leader pause.
It’s easy to think this doesn’t apply to your product, until your AI tool “summarises” the wrong policy or invents a data point in a customer report.
This isn’t about policing bad outputs. It’s about building systems that communicate uncertainty well.
Our stance: make limitations visible
We believe AI should earn trust through honesty, not confidence.
It’s tempting to make AI interfaces sound human - polished, fluent, certain. But that polish can hide fragility.
Our view is simple: if your product uses AI, it should be clear about what it knows, what it guesses, and where it might fail.
That doesn’t mean making users doubt everything. It means designing transparency that informs without alarming.
Because trust isn’t built by removing doubt — it’s built by showing your work.
How to design for honesty
Here’s how we help product teams approach this:
1. Use tone to create transparency
The difference between “Here’s your answer” and “Here’s what I found” might look small, but it changes user perception completely. Microcopy can reinforce humility, signalling that the system is helpful, not authoritative.
2. Make confidence visible
Borrow from UX patterns like progress bars or error rates. Even simple cues (“We’re about 80% confident in this insight”) let users apply their own judgement.
As Nature’s 2024 review on AI hallucinations notes, “interface transparency is the most direct defence against overtrust.”
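For teams that want to prototype this, here’s a minimal sketch in TypeScript of turning a raw model confidence score into hedged, user-facing wording. The thresholds and phrasing are illustrative assumptions, not a standard; calibrate them against your own model and user research.

```typescript
// Map a raw model confidence score (0–1) onto hedged, user-facing microcopy.
// Thresholds and phrasing are illustrative assumptions; tune them with research.

type ConfidenceBand = "high" | "medium" | "low";

function toBand(score: number): ConfidenceBand {
  if (score >= 0.85) return "high";
  if (score >= 0.6) return "medium";
  return "low";
}

function confidenceLabel(score: number): string {
  const pct = Math.round(score * 100);
  const band = toBand(score);
  if (band === "high") {
    return `Here's what I found (about ${pct}% confident).`;
  }
  if (band === "medium") {
    return `This looks likely, but please double-check it (about ${pct}% confident).`;
  }
  return "I'm not sure about this one. Treat it as a starting point, not an answer.";
}

console.log(confidenceLabel(0.92)); // "Here's what I found (about 92% confident)."
console.log(confidenceLabel(0.45)); // low confidence: invites the user's own judgement
```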
3. Keep humans in the loop
If your product deals in risk, law, or money, a human must review AI-generated content. Transparency doesn’t replace accountability; it supports it.
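As a rough sketch of what that gate can look like in practice, the TypeScript below holds AI output for human review when the domain is high-risk or the model’s confidence is low. The domain list, the 0.7 threshold, and the ReviewQueue interface are hypothetical placeholders for your own tooling.

```typescript
// A human-in-the-loop gate: AI drafts in high-risk domains, or with low
// confidence, are held for human review instead of being shown to the user.
// Domain names, the 0.7 threshold, and ReviewQueue are hypothetical.

type Domain = "legal" | "finance" | "health" | "general";

interface AiDraft {
  id: string;
  domain: Domain;
  confidence: number; // 0–1, as reported or estimated for this output
  text: string;
}

interface ReviewQueue {
  submit(draft: AiDraft): Promise<void>; // your own review tooling goes here
}

const HIGH_RISK = new Set<Domain>(["legal", "finance", "health"]);

async function publishOrReview(
  draft: AiDraft,
  queue: ReviewQueue,
  publish: (text: string) => Promise<void>
): Promise<"published" | "sent_for_review"> {
  const needsHuman = HIGH_RISK.has(draft.domain) || draft.confidence < 0.7;
  if (needsHuman) {
    await queue.submit(draft); // a person signs off before the user sees it
    return "sent_for_review";
  }
  await publish(draft.text); // low-risk, high-confidence content goes straight out
  return "published";
}
```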
4. Design feedback loops
Make it easy for users to flag wrong or unhelpful outputs. It’s both good UX and good data hygiene.
(We’ve written about this before: Turning messy user feedback into actionable product decisions.)
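Here’s a minimal sketch of that capture, assuming a hypothetical /api/ai-feedback endpoint on your side: keep the flag one tap for the user, but record enough context (which response, which prompt) to act on it later.

```typescript
// Capture a one-tap "this was wrong" flag with enough context to act on later.
// The endpoint URL and payload shape are assumptions for illustration.

interface AiFeedback {
  responseId: string;               // ties the flag to the exact AI output
  rating: "helpful" | "wrong" | "unclear";
  comment?: string;                 // optional; never required from the user
  promptSnapshot: string;           // what the user actually asked
  createdAt: string;                // ISO timestamp
}

async function sendFeedback(feedback: AiFeedback): Promise<void> {
  // Hypothetical endpoint; route this into your analytics or support pipeline.
  await fetch("/api/ai-feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
}

// Example: wired to a thumbs-down button on an AI answer.
void sendFeedback({
  responseId: "resp_123",
  rating: "wrong",
  promptSnapshot: "Summarise our refund policy",
  createdAt: new Date().toISOString(),
});
```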
5. Treat trust as a measurable metric
Include it in success criteria. If support tickets about “AI got this wrong” increase, that’s a UX signal.
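One way to operationalise that, sketched below: track the rate of support tickets tagged as AI errors per thousand AI interactions, and flag when it climbs week over week. The tag, formula, and threshold are illustrative assumptions, not an industry benchmark.

```typescript
// A rough weekly "trust signal": support tickets blaming the AI, per thousand
// AI interactions. Tag names, the formula, and the 20% threshold are illustrative.

interface WeeklyStats {
  aiInteractions: number;  // how many AI answers were shown this week
  aiErrorTickets: number;  // tickets tagged e.g. "ai-wrong-answer"
}

function aiErrorTicketRate(stats: WeeklyStats): number {
  if (stats.aiInteractions === 0) return 0;
  return (stats.aiErrorTickets / stats.aiInteractions) * 1000;
}

// Flag a regression when the rate climbs meaningfully week over week.
function trustRegression(previous: WeeklyStats, current: WeeklyStats): boolean {
  return aiErrorTicketRate(current) > aiErrorTicketRate(previous) * 1.2;
}

const lastWeek: WeeklyStats = { aiInteractions: 12000, aiErrorTickets: 18 };
const thisWeek: WeeklyStats = { aiInteractions: 13000, aiErrorTickets: 31 };
console.log(aiErrorTicketRate(thisWeek).toFixed(2)); // ~2.38 per 1,000 interactions
console.log(trustRegression(lastWeek, thisWeek));    // true: worth investigating
```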
Where teams often get it wrong
We’ve seen three recurring pitfalls:
Assuming fine-tuning fixes everything. It doesn’t. Even perfect data can’t prevent confident nonsense when the prompt is vague.
Treating hallucinations as an engineering problem. It’s a product experience issue - users blame the product, not the model.
Polishing away uncertainty. The instinct to “make it seamless” can make it misleading.
Transparency isn’t friction. It’s informed empathy. It’s how you help users make smarter choices, even when the system doesn’t.
What this means for product leaders
You don’t need to be an AI expert to lead well in this space. You just need to build the culture and design the systems that value truth over illusion.
That means:
Making uncertainty visible.
Encouraging teams to question model output.
Rewarding accuracy and clarity, not just speed and confidence.
Because users can forgive imperfection, but they won’t forgive deception.
FAQs
Q: What are AI hallucinations?
A: AI hallucinations happen when generative systems produce confident but incorrect information. They sound right but aren’t, because AI predicts patterns; it doesn’t understand facts.
Q: Can UX design prevent AI hallucinations?
A: UX can’t stop hallucinations entirely, but it can limit their impact. Designers can display confidence levels, show data sources, label AI content clearly, and use wording that invites user judgement rather than blind trust.
Q: Can better training eliminate hallucinations?
A: Not fully. Even well-trained models guess when faced with unfamiliar or ambiguous prompts. The real solution is to design around uncertainty, with visible context, validation steps, and feedback loops.
Q: How do you measure trust in an AI product?
A: Track metrics like user satisfaction, support tickets referencing wrong answers, and time to correction. Qualitative research (e.g. user interviews or usability testing) can also reveal whether people feel confident and informed when using AI features.
Need help designing your AI-powered product?
Whether you’re exploring how AI fits into your roadmap or refining an existing product, we help teams build trust-centred experiences for complex tools.
👉 Book a call with our team to talk about how we can help you design AI that users trust, not just use.