AI-assisted intake (Phase 4)¶
What¶
The new-engagement form has an "✨ AI-assisted intake" panel at the top. The partner types a free-form paragraph describing the engagement (industry, frameworks, concerns); a single LLM call extracts structured intake fields and pre-fills the form below.
The partner reviews and edits the extracted fields, then submits as usual. The form falls through to manual entry if the partner skips the AI panel or the extraction fails.
Why¶
The form's seven fields (domain, audit purpose, frameworks, focus areas, known concerns, archetype, budget) are tedious to fill out from scratch — especially when the partner already has the context in their head. A 60-second paragraph plus 30 seconds of edits beats 5 minutes of structured data entry.
How¶
`POST /auditforge/intake/extract` takes:
```json
{
  "description": "Mid-market defense contractor preparing for a CMMC L2 pre-assessment. Worried that some subcontract flow-down language may be incomplete and the QAP might not exist...",
  "client_name": "Northstar Defense Inc.",
  "firm_id": "firm-xxx"
}
```
The endpoint runs a single Sonnet-class call with a tight system prompt asking for strict JSON output:
```json
{
  "domain": "Mid-market defense contractor preparing for CMMC L2 pre-assessment",
  "audit_purpose": "Identify compliance gaps prior to formal CMMC L2 assessment",
  "frameworks": ["NIST SP 800-171", "DFARS 252.204-7012", "CMMC L2"],
  "focus_areas": ["subcontractor compliance", "cybersecurity"],
  "known_concerns": ["subcontract flow-down may be incomplete", "QAP may be missing"],
  "suggested_archetype": "remediation_pipeline",
  "suggested_client_name": "",
  "cost_cents": 4.2
}
```
The frontend writes the extracted fields into the form's controlled state; the partner edits as needed and submits.
Cost governance¶
The endpoint instantiates its own LLMClient with a hard $0.20 budget cap. This isolates intake-extract spend from any in-progress engagement budget: a failure here cannot bleed into a running audit's spend.
Typical actual cost: $0.03–0.08 per extract.
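The isolation pattern amounts to giving each request its own spend-tracking client. A minimal sketch, assuming an LLMClient-like interface (the class and method names here are illustrative, not the real API):

```python
# Sketch of per-endpoint budget isolation. The underlying call is modeled
# as a function prompt -> (text, cost_cents); all names are hypothetical.
class BudgetExceeded(Exception):
    pass

class CappedLLMClient:
    def __init__(self, call_fn, cap_cents: float):
        self._call_fn = call_fn      # underlying LLM call
        self.cap_cents = cap_cents
        self.spent_cents = 0.0

    def complete(self, prompt: str) -> str:
        # Refuse any call once cumulative spend hits the cap.
        if self.spent_cents >= self.cap_cents:
            raise BudgetExceeded(f"cap of {self.cap_cents}c reached")
        text, cost = self._call_fn(prompt)
        self.spent_cents += cost
        return text
```

Because `/intake/extract` constructs a fresh client per request with a 20-cent cap, a retry loop here can never draw down an engagement's budget.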
Failure handling¶
| Failure | Treatment |
|---|---|
| Description < 30 chars | 422 with "add more context" message |
| LLM call fails (rate limit, network) | 502 with "use the form view to enter manually" — falls back gracefully |
| LLM returns unparseable JSON | 502 — same fallback message |
| LLM returns valid JSON missing some fields | Missing fields default to empty (partner fills manually) |
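The ladder above can be sketched as two small helpers: an input-length gate and a tolerant parser that defaults missing fields. Function names and the exact error plumbing are assumptions; only the thresholds and fallback behavior come from the table.

```python
# Sketch of the failure-handling ladder (helper names hypothetical):
# short descriptions are rejected, unparseable JSON triggers the manual
# fallback, and missing fields default to empty values.
import json

FIELD_DEFAULTS = {
    "domain": "", "audit_purpose": "", "frameworks": [],
    "focus_areas": [], "known_concerns": [],
    "suggested_archetype": "", "suggested_client_name": "",
}

def validate_description(description: str) -> None:
    if len(description.strip()) < 30:
        raise ValueError("add more context")  # surfaced as HTTP 422

def parse_extraction(raw: str) -> dict:
    """Parse the LLM's JSON output; any missing field defaults to empty."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # surfaced as HTTP 502 with the manual-entry fallback message
        raise RuntimeError("use the form view to enter manually")
    if not isinstance(data, dict):
        raise RuntimeError("use the form view to enter manually")
    return {k: data.get(k, default) for k, default in FIELD_DEFAULTS.items()}
```

Note that the last row of the table never errors: a valid-but-partial response still returns, and the partner fills the blanks by hand.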
Code¶
- `app/auditforge_endpoints.py` — `IntakeExtractRequest` + `extract_intake` endpoint
- `frontend/src/components/NewEngagementForm.tsx` — AI-assist panel + `handleAiExtract`
- `frontend/src/api/auditforge.ts` — typed `extractIntake()` call
Future: conversational mode¶
Today's flow is one-shot extraction. A natural extension is back-and-forth dialogue: the LLM responds with both extracted fields AND clarifying questions ("what's the materiality threshold for this engagement?"). The partner answers, the LLM refines, and the loop stops when the form is complete.
That's a higher-effort feature that we deferred — the one-shot flow handles 90% of cases at much lower complexity.