End User Guide

Audience: People using a Metis pilot day-to-day — managers, clerks, analysts, anyone who asks the system questions about their organization's documents.

You don't need to be technical to use Metis. This guide explains what the system does, how to ask good questions, and how to read its answers.

What Metis is (in 30 seconds)

You drop your organization's documents into Metis. Then you ask plain-language questions about them. Metis searches across everything, finds the most relevant passages, and writes a grounded answer with citations.

It's not a chatbot. It's a knowledge assistant that knows only your documents and tells you exactly which one each piece of the answer came from.

Getting in

You'll receive an invite email with a one-click link. Click the link, set a password (minimum 8 characters, pick something only you would know), and you're in.

Forgot your password? Click "Forgot your password?" on the sign-in screen. You'll get an email with a reset link valid for 4 hours.

You stay signed in for 7 days by default. Your administrator may have set a different session lifetime depending on your organization's security posture (the other options are an 8-hour hard cap and 30 days).

Asking a question

Type your question in plain English in the chat box at the bottom. Metis works best with:

  • Open-ended questions — "What does our overtime policy say about weekends?" — beats "overtime weekends?"
  • Specific scope — "What was decided in the March 2024 council meeting about the budget?" beats "budget"
  • Multi-part questions — "What does our HR policy require for time-off requests, and how does that interact with the union contract?" Metis breaks compound questions into sub-queries automatically

It's fine to follow up. Metis remembers the conversation context within a session, so you can ask "what about for part-time staff?" after an overtime question and it'll know what you mean.

Reading the answer

Every answer has four parts.

1. The answer itself

Plain language, paragraph form. Cited inline using bracketed numbers like [1], [2]. Each citation maps to one of the source cards below.

2. Source cards

Below the answer, you'll see one card per cited document. Each card shows:

  • Document name — the original filename
  • Section title — which heading in the document the snippet came from
  • Snippet — the actual passage Metis used
  • Document age tag — current, legacy, or draft. Legacy docs are deprioritized but still surfaced when relevant
  • Authority tag (when hierarchical mode is on) — binding, guidance, internal, or informational. Binding docs from a higher jurisdiction (e.g., state law for a borough corpus) are grouped at the top under a "Binding higher authority" header
  • Override note — if an admin has manually tagged this document, the note appears here

Click any source card to see the full snippet in context.

3. Confidence badges

Each answer carries two confidence values, reported separately:

  • Retrieval confidence — how good the matching passages were. high means the corpus probably has a clear answer. low means the question might be on a topic the corpus doesn't cover well — useful signal for your administrator that documentation is needed.
  • Answer confidence — how confident the system is that its written answer is supported by the retrieved passages. low means the answer may be speculative; double-check the citations.

When both are low, treat the answer as a starting point, not a verdict.

4. Conflict + supersession flags

If two documents disagree on the topic, you'll see a ⚠ Conflict detected banner with a brief summary of the disagreement. This is a feature, not a bug — Metis surfaces real contradictions between your own documents so you can resolve them.

If a newer document supersedes an older one on the same topic, you'll see a superseded by link. Older versions are still findable but deprioritized.

If a binding higher-authority document modifies or contradicts a local procedure (only when hierarchical mode is on), you'll see an elevated Hierarchical conflict badge — the most serious class of disagreement.

Sample questions

When you first land in your pilot, the welcome screen shows a few starter questions tuned to your actual corpus. These are auto-generated after every successful corpus update — they reflect what the system thinks your documents can answer well.

If they don't match what you actually need, just type your own question. The samples are a hint, not a constraint.

What the system can't do

  • It can't tell you what's NOT in your documents. If a topic isn't covered, you'll get a low-retrieval-confidence answer and a "gaps" suggestion in the response. That's the right behavior — it would rather tell you it doesn't know than make something up.
  • It doesn't have access to anything outside your corpus. No web search. No public LLM training data leaking through. The answer comes from your documents or it doesn't come at all.
  • It doesn't change documents on its own. Manual edits, new uploads, and connector syncs are the only ways the corpus changes. If a SharePoint connector is set up, it picks up changes automatically on whatever cadence your administrator chose.
  • Document age, jurisdiction, and authority tags are heuristics. They're usually right but sometimes wrong. If a document looks miscategorized, ask your administrator to add a manual override — overrides apply instantly, are deterministic, and outrank the automated tags.

Privacy and access

  • Your sessions are isolated per-organization. People in other organizations can't see your queries or your data.
  • Within your organization, your role determines what you can do:
      • Viewer: ask questions and read answers
      • Editor: same as viewer, plus mark documents outdated, dismiss conflicts, accept supersession suggestions
      • Owner: same as editor, plus invite/remove users, configure connectors, change branding
  • Conversations are not stored permanently. Recent queries are retained for telemetry (so your administrator can see common low-confidence questions and improve the corpus), but they're not used to train any model.

Getting help

  • Stuck on a question? Try rephrasing. The system handles "What's our policy on X?" better than just "X".
  • Answer looks wrong? Click the source cards and read the snippet. If the snippet doesn't actually support the answer, that's a bug worth surfacing to your administrator.
  • Document seems out of date? Tell your administrator. They can mark it deprecated in seconds; you'll see the change reflected on the next query.
  • Need access changes (new user, role change, password reset)? Ask your organization's owner. They can do all of that from inside the pilot.
  • Locked out entirely? Your owner can invite you again. If they're locked out too, Base2ML staff can issue a 1-hour support session to help — every such session is logged in your audit trail with a required reason.