
Building a Source-Backed JMP Expert System with MCP

This is not just a chatbot story. It is a story about turning fragmented JMP support knowledge into a source-backed assistant surface with inspectable tool boundaries and a clearer path to trust.

JMP · MCP · AI Systems

[Figure: a support application window connected through an MCP boundary to a stack of manuals and help sources.]

Many AI support demos fail in the same way: they produce fluent answers, but the user has no clean way to inspect where those answers came from. For technical support, analytical software, and documentation-heavy workflows, that is a fragile foundation.

The JMP MCP Expert System started from a different premise. The goal was not to build an LLM that somehow "knows JMP." The goal was to build a source-backed assistant surface with visible tool boundaries, explicit retrieval, and room to grow into reusable workflows.

The real problem was fragmentation

JMP support knowledge lived across several surfaces:

  • official online help
  • PDF manuals
  • support-style notes and cases
  • practical workflow knowledge that did not naturally live in one interface

That fragmentation creates two problems at once:

  1. users do not know where to look, and
  2. an assistant can appear smarter than it really is unless its source path stays visible.

So the core challenge was not "make a chatbot for JMP." It was: turn a fragmented support landscape into an answer surface that stays inspectable.

Why MCP mattered here

The value of MCP in this project was not magical intelligence. It was contract discipline.

An MCP-style boundary let the project expose support knowledge through explicit actions such as listing notes, searching support content, and fetching structured support artifacts. That makes the system easier to reason about because:

  • the tool surface is inspectable
  • the retrieval path is more explicit
  • the UI can evolve without rewriting the knowledge contract
  • future MCP-aware clients can reuse the same support surface

That is strategically stronger than a one-off assistant wrapped around hidden prompt logic.
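The explicit-actions idea above can be sketched as a small tool registry. This is a hypothetical illustration of the pattern, not the project's actual MCP server: each support action is a named tool with a declared input schema, so the surface stays inspectable instead of living in hidden prompt logic.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    input_schema: dict          # what the client must supply
    handler: Callable[..., dict]

def list_notes() -> dict:
    # Placeholder: would enumerate vault-backed support notes.
    return {"notes": ["anova-assumptions", "control-chart-setup"]}

def search_support(query: str) -> dict:
    # Placeholder: would rank indexed knowledge chunks against the query.
    return {"query": query, "hits": []}

TOOLS = {
    t.name: t
    for t in [
        Tool("list_notes", "List available support notes", {}, list_notes),
        Tool("search_support", "Search indexed support content",
             {"query": "string"}, search_support),
    ]
}

def call_tool(name: str, **kwargs) -> dict:
    # The boundary: every call goes through a named, inspectable contract.
    return TOOLS[name].handler(**kwargs)
```

Because the contract is data, a future MCP-aware client can discover the same tools without any UI-specific glue.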

Architecture in four layers

The current system is easiest to understand as four separable layers:

  1. Sources
    • official online help
    • chunked JMP 19 manuals
    • support cases
    • vault-backed PDFs and notes
  2. Knowledge and contracts
    • a vault loader and support server expose retrievable material through stable boundaries
  3. Reasoning layer
    • orchestration decides how to answer
    • retrieval ranks relevant passages
    • synthesis turns that material into a usable response with citations
  4. User surface
    • the web app delivers the answer, source cards, related support material, and provenance links

The important design choice is separation. When sources, contracts, reasoning, and presentation remain legible, the system has room to evolve without collapsing into one brittle blob.
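The four-layer separation can be made concrete with toy stand-ins for each layer. All names here are illustrative assumptions, not the project's actual API; the point is that sources carry provenance from the start and each layer only talks to the next through a narrow interface.

```python
SOURCES = [  # layer 1: sources, with provenance attached up front
    {"id": "help/anova", "text": "ANOVA compares group means.",
     "origin": "online help"},
    {"id": "manual/doe", "text": "DOE plans experiments efficiently.",
     "origin": "JMP 19 manual"},
]

def retrieve(question, sources):
    # Layer 3a: rank relevant passages (toy keyword overlap here).
    terms = set(question.lower().split())
    return [s for s in sources if terms & set(s["text"].lower().split())]

def synthesize(question, passages):
    # Layer 3b: turn retrieved material into an answer that keeps citations.
    return {"answer": " ".join(p["text"] for p in passages),
            "citations": [p["id"] for p in passages]}

def render(result):
    # Layer 4: the user surface shows provenance next to the answer.
    lines = [result["answer"]] + [f"[source: {c}]" for c in result["citations"]]
    return "\n".join(lines)

# Usage: each layer can be swapped without touching the others.
page = render(synthesize("what is anova", retrieve("what is anova", SOURCES)))
```

Layer 2, the contract, is simply the signatures between these functions: as long as they hold, retrieval or rendering can be replaced independently.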

Why answer modes improved the product

One of the better design decisions was keeping multiple answer modes instead of pretending one response style fits every request.

The app currently supports three modes:

  • Template — faster, structured answers for common patterns
  • Synthesis — longer, retrieval-backed answers with fuller citation context
  • LLM — broader exploratory responses when a more open-ended explanation helps

That matters because support requests vary. Some need quick routing. Some need patient synthesis. Some need a conversational bridge into a concept before the user drills into documentation.

The system also recognizes several common statistics contexts — including ANOVA, regression, t-tests, and control-chart-adjacent questions — so retrieval can lean toward more relevant methodology and assumption context instead of returning generic help text.
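Mode selection and context recognition of this kind can be sketched with simple keyword routing. The real system's routing logic is not shown in this post and may work quite differently; this is a minimal illustration of the shape of the decision.

```python
# Illustrative statistics contexts and their trigger phrases (assumptions).
CONTEXTS = {
    "anova": ["anova", "analysis of variance"],
    "regression": ["regression", "least squares"],
    "t-test": ["t-test", "t test", "paired comparison"],
    "control-chart": ["control chart", "spc", "shewhart"],
}

def detect_context(question: str) -> list[str]:
    q = question.lower()
    return [name for name, kws in CONTEXTS.items() if any(k in q for k in kws)]

def choose_mode(question: str) -> str:
    q = question.lower()
    if q.startswith(("how do i", "where is")):
        return "template"   # quick, structured routing for common patterns
    if detect_context(question):
        return "synthesis"  # retrieval-backed answer with citation context
    return "llm"            # open-ended conversational bridge
```

A question like "run a one-way ANOVA" would route to synthesis with the ANOVA context attached, so retrieval can lean toward methodology and assumption material.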

What makes it source-backed instead of merely fluent

The trust feature is provenance.

Each answer is stronger when it points back to something concrete:

  • a manual page
  • an online help URL
  • a recognizable source label
  • a PDF location when relevant
  • related support material when it adds context

That is what makes this a support surface rather than a fluent model demo. The user has a path to inspect where the answer came from instead of judging the system by tone alone.
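One way to keep provenance first-class is to make it a required part of the answer's data shape. The field names below are assumptions for illustration, not the project's schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SourceCard:
    label: str                       # recognizable source label
    url: Optional[str] = None        # online help URL when available
    pdf_page: Optional[int] = None   # PDF location when relevant

@dataclass
class Answer:
    text: str
    sources: list = field(default_factory=list)

    def provenance(self) -> list:
        # Prefer a clickable URL; fall back to a label plus PDF location.
        return [c.url or f"{c.label}, p. {c.pdf_page}" for c in self.sources]
```

Because `sources` travels with the answer, the UI can always render source cards, and an answer with an empty provenance list is visibly weaker than one that cites a manual page.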

At the time of publishing, the knowledge base includes approximately:

  • 6 JMP 19 PDF manuals
  • 28 online help pages
  • 2,835+ indexed knowledge chunks
  • 4 support-case patterns

Those numbers are most useful as scope markers. They say something about current retrieval coverage, not about the system being "finished."

Why this is more extensible than a one-off demo

The current version is retrieval-heavy, but it is already shaped for extension.

The roadmap is not just "answer more questions." It includes work such as:

  • adding new corpora
  • comparing JMP platforms
  • searching JSL examples
  • guiding DOE setup
  • moving from explanation toward session-to-JSL workflow generation

That last direction is especially interesting. A support answer is useful, but it is rarely the end state. A more durable pattern looks like this:

question -> retrieved explanation -> workflow scaffold -> reviewable JSL path

That progression is more useful than a flashy answer box because it pushes the system toward practical action while preserving reviewability.
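The workflow-scaffold step in that progression could look like a generator that emits a reviewable JSL fragment rather than executing anything. This is a sketch only; the JSL below is illustrative and should be reviewed before running in JMP.

```python
def scaffold_oneway(table: str, y: str, x: str) -> str:
    # Emit a small, editable JSL script: the user reviews each step
    # before anything runs, preserving the reviewability of the path.
    return "\n".join([
        f'dt = Open( "{table}" );',
        f'Oneway( Y( :{y} ), X( :{x} ) );',
    ])

script = scaffold_oneway("Big Class.jmp", "height", "age")
```

The scaffold is plain text on purpose: it is the reviewable artifact at the end of the question-to-JSL path, not a hidden action.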

Honest limits

The current system is not a general intelligence that understands all of JMP from first principles.

What it already does reasonably well:

  • retrieve source-backed material
  • synthesize documentation-backed answers
  • surface provenance clearly
  • provide a structured support interface

What is still emerging:

  • stronger workflow generation
  • richer planner-guided task shaping
  • deeper JSL scaffolding and execution bridges
  • broader coverage and more refined corpora

That distinction matters. The project is strongest when described as a reusable, inspectable support architecture with an expanding action layer, not as a magical, fully solved assistant.

Why this project matters

The interesting part of this build is not only that it answers JMP questions. It is that it treats support knowledge as architecture:

  • sources are explicit
  • tool boundaries are inspectable
  • retrieval is visible
  • trust comes from provenance
  • future action layers can be added without replacing the foundation

That is a healthier pattern for serious AI support work than hiding everything inside a clever prompt.

Explore the live system

Explore the live JMP expert system