An AI support tool does not fail only when it gives a wrong answer.
It also fails when it gives a plausible answer that the user cannot meaningfully verify.
That distinction matters because most support surfaces are optimized for fluency. They produce clean prose, a helpful tone, and enough structure to feel trustworthy. But if the interface cannot show the user where the answer came from — the exact manual page, note, or document fragment that supports the claim — then the product is asking for confidence it has not earned.
That is not a compliance problem. It is a product problem.
A useful bug
During a late-March 2026 hardening pass on the JMP MCP Expert System, the app showed a “Failed to load JMP documentation” message on startup even though the backend and knowledge base were healthy.
The immediate bug lived in renderBootstrap(). After a refactor, it was still touching DOM nodes that no longer existed. The backend was fine. The interface was not.
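A minimal sketch of the guard that failure mode calls for: treat a missing DOM node as "do not render" rather than letting a thrown error cascade into a false failure banner. The element id, function names, and messages here are hypothetical, not taken from the app.

```javascript
// Hypothetical guard for a bootstrap status renderer. If the refactor
// removed the target node, skip rendering instead of crashing — an
// uncaught DOM error here is what can surface as a spurious
// "Failed to load" message despite a healthy backend.
function renderBootstrapStatus(doc, backendHealthy) {
  const banner = doc.getElementById("bootstrap-status");
  if (!banner) {
    // Node no longer exists after the refactor; bail out quietly.
    return "skipped";
  }
  banner.textContent = backendHealthy
    ? "Documentation loaded"
    : "Failed to load JMP documentation";
  return backendHealthy ? "healthy" : "failed";
}
```

The point of the sketch is only the null check: the status message should reflect actual system state, never a rendering accident.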
That sounds small, but it cuts straight into trust. If the UI lies about system state, users learn the wrong lesson twice: they distrust healthy systems when they should proceed, and they become numb to warnings when something actually is broken.
The bug forced a better design question: what exactly does the user need to see in order to trust this answer path appropriately?
The real problem with fluent answers
Many AI support tools are good at sounding organized. They are much worse at showing their work.
That gap matters because a support answer is not just the output of a sentence generator. It is the visible tip of a longer chain:
- documents and manuals
- chunking and indexing choices
- retrieval and ranking behavior
- prompt assembly
- model synthesis over partial evidence
If the user cannot inspect that chain at the points that matter, “helpful” starts to drift into “persuasive.” The answer may be right, but the product has not made verification easy enough for that confidence to be earned.
What changed in the JMP support surface
The hardening pass did more than fix one bug. It changed how evidence is exposed.
Before the cleanup, the main help link often used generic platform metadata. A user could click from an answer and land on a documentation hub instead of the chapter or page that actually supported the claim.
After the cleanup, the app prefers the first cited reference URL. For PDF-backed sources, it now shows human-readable labels such as PDF manual · Essential Graphing · p. 190 and links the reader to the cited page rather than to a generic entry point.
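The link-preference rule described above can be sketched in a few lines. Field names here are assumptions for illustration; the `#page=N` fragment is a convention honored by common browser PDF viewers, though exact behavior varies by viewer.

```javascript
// Prefer the first cited reference URL; fall back to generic platform
// metadata only when no citation carries a link.
function preferredHelpLink(citations, fallbackUrl) {
  const cited = citations.find((c) => c.url);
  return cited ? cited.url : fallbackUrl;
}

// For PDF-backed sources, build a human-readable label and a
// page-anchored link instead of pointing at a documentation hub.
function pdfCitation(title, page, baseUrl) {
  return {
    label: `PDF manual · ${title} · p. ${page}`,
    url: `${baseUrl}#page=${page}`,
  };
}
```

With this shape, `pdfCitation("Essential Graphing", 190, …)` yields the kind of label and deep link the cleanup describes, rather than a generic entry point.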
That sounds like a small UX adjustment. It is not. It changes the product from “here is an answer, trust us” to “here is the answer path, inspect it yourself.”
From decorative citations to usable provenance
The app also added a reference modal that opens PDF-backed citations directly in an iframe at the cited page, with next/previous navigation across the document.
That moves provenance from checkbox theater into actual support UX.
- Before: the citation exists, but the user still has to hunt for the evidence.
- After: the citation is the evidence path, presented where the user can act on it immediately.
The source drawer now makes the chain easier to inspect by showing which notes were retrieved, which source each note came from, and which page was cited for PDF material.
This is the important shift: provenance is no longer a decorative trust signal. It is an interaction design decision.
What other builders should steal from this
If you are building an AI support surface, the useful lesson is not “add citations.” It is more specific:
- Prefer answer-linked evidence over generic destination links. If a sentence is grounded in page 190, link page 190.
- Preserve citation structure in the data model. Labels, page references, source URLs, and document identity should survive all the way to the UI.
- Treat status messaging as part of trust design. A false warning is not harmless. It trains users to mistrust both failures and recoveries.
- Make verification cheaper than blind trust. If checking the source costs too many clicks, most users will not do it.
Those are not abstract trust principles. They are product decisions with user-visible consequences.
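The "preserve citation structure" point above is the easiest to lose in practice, because flattening citations into display strings is convenient. One way to keep the structure intact, with entirely hypothetical field names, is to carry each part as a first-class field from retrieval to UI:

```javascript
// Keep label, page, source URL, and document identity structured all the
// way to the UI; derive display strings at the edge, never earlier.
function makeCitation({ docId, title, sourceUrl, page = null }) {
  return {
    docId,      // stable document identity
    title,      // human-readable source name
    sourceUrl,  // where the evidence lives
    page,       // cited page for PDF material, else null
    label: page != null ? `${title} · p. ${page}` : title,
  };
}
```

Because the page and source URL survive as separate fields, the UI can still build a page-anchored link or a source-drawer entry without re-parsing a label.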
A better standard for AI support tools
Strong provenance does not make an AI support tool infallible. It does something more useful: it makes the system easier to challenge, debug, and trust at the right level.
That is the standard worth aiming for. Not “the model sounded confident,” but “the user can see exactly where the answer came from, test it, and decide how much confidence it deserves.”
See the current JMP support surface at jmp.codingenvironment.com.