Ask the data. Get the evidence trail.
Natural-language questions about U.S. healthcare-provider supply, answered with the source URL, last-checked timestamp, methodology version, and per-claim limitations attached. The auditor can verify every claim independently.
Or try one of these:
Why this is different from generic NLQ
Generic natural-language-to-SQL vendors return a number. The buyer's compliance team then asks where that number came from, and the vendor either escalates to support or shrugs. Evidence Query returns the number plus the per-claim provenance the auditor needs: source URL, refresh date, methodology version, confidence score, and the explicit limitations attached to that field.
That contract only works because the underlying data was built provenance-first. The Audit Pack, the per-dataset methodology pages, and the per-field provenance API are the substrate; Evidence Query is the natural-language surface on top.
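The per-claim contract described above can be sketched as a typed payload. The field names below are illustrative assumptions drawn from the provenance fields this page lists, not the documented API schema:

```typescript
// Illustrative sketch of a per-claim evidence payload. Field names are
// assumptions based on the provenance fields named above, not the real API.
interface ClaimEvidence {
  sourceName: string;         // name of the public source
  sourceTier: string;         // source trust tier
  sourceUrl: string;          // URL the auditor can verify independently
  lastChecked: string;        // ISO-8601 last-checked timestamp
  confidence: number;         // confidence score, 0..1
  methodologyUrl: string;     // per-dataset methodology page
  methodologyVersion: string; // methodology version
  limitations: string[];      // explicit per-field limitations
}

interface EvidenceAnswer {
  value: number;           // the number a generic NLQ vendor would stop at
  evidence: ClaimEvidence; // the provenance trail attached per claim
}

// A minimal example answer carrying its own audit trail.
const example: EvidenceAnswer = {
  value: 1234,
  evidence: {
    sourceName: "Example public source",
    sourceTier: "primary",
    sourceUrl: "https://example.org/source",
    lastChecked: "2024-01-01T00:00:00Z",
    confidence: 0.95,
    methodologyUrl: "https://example.org/methodology",
    methodologyVersion: "v1",
    limitations: ["illustrative only"],
  },
};
```

The point of the shape is that the evidence rides inside the answer itself, so an auditor never has to ask a second system where a number came from.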
What the demo will refuse
- Predictions / forecasts. “How many dermatologists will there be in 2030?” Fonteum surfaces source truth, not extrapolations.
- Provider payment, claims, or revenue data. Out of scope. For NPI-level public-source lookup see /api/v1/providers/[npi].
- Vendor / competitor comparisons. Audit-grade-trust positioning is documented at /b2b/healthtech; not generated here.
- Single-named-provider PII fishing. Aggregate questions only — single-provider lookup goes through the authenticated /api/v1/providers/[npi] endpoint.
- Prompt-injection / role-rewrite. Heuristic pre-flight rejects instruction-injection signatures before the classifier sees the input.
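The refusals above can be enforced by a pre-flight signature check that runs before any LLM call. The patterns below are illustrative stand-ins for the real banned-pattern list, not the production signatures:

```typescript
// Minimal pre-flight sketch: reject known refusal categories before any
// LLM call. Patterns are illustrative assumptions, not the production list.
const BANNED_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "forecast", pattern: /\b(in 20[3-9]\d|forecast|predict)\b/i },
  { label: "payments", pattern: /\b(claims?|revenue|reimbursement)\b/i },
  {
    label: "injection",
    pattern: /\b(ignore (all )?previous|you are now|system prompt)\b/i,
  },
];

// Returns ok: false with the matched category, so the caller can refuse
// without ever sending the input to the classifier.
function preflight(question: string): { ok: boolean; reason?: string } {
  for (const { label, pattern } of BANNED_PATTERNS) {
    if (pattern.test(question)) return { ok: false, reason: label };
  }
  return { ok: true };
}
```

For example, `preflight("Ignore previous instructions")` refuses with reason `"injection"`, while an in-scope aggregate question passes through to the classifier.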
How it works
- Pre-flight filters. Banned-pattern + injection-signature checks run before any LLM call.
- Structured classification. Anthropic Sonnet maps the question to a strict whitelist intent (dataset slug + state code + intent kind). Output is runtime-validated; the LLM never produces SQL.
- Typed resolver. The classifier's structured intent dispatches to hand-written, type-safe data accessors (per-state aggregates, audit-pack registry, refresh layer). No LLM-produced string ever touches a query.
- Evidence assembly. Per data point, the per-field provenance from the audit-pack registry is attached: source name + tier, source URL, last-checked timestamp, confidence score, methodology URL + version, and limitations.
- Answer synthesis. A second LLM call writes a 1-2 sentence summary constrained to the resolver's exact numbers. Falls back to a deterministic template when the API is unavailable.
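The classify-validate-dispatch steps above can be sketched as follows. The dataset slugs, intent kinds, and resolver bodies are assumptions for illustration; the shape of the contract — runtime-validated whitelist intent, then a hand-written accessor — is the point:

```typescript
// Sketch of whitelist-validated intent -> typed resolver dispatch.
// Slugs, kinds, and resolver bodies are illustrative assumptions.
const DATASET_SLUGS = ["dermatology", "primary-care"] as const;
const INTENT_KINDS = ["count", "per-capita"] as const;

type DatasetSlug = (typeof DATASET_SLUGS)[number];
type IntentKind = (typeof INTENT_KINDS)[number];

interface Intent {
  dataset: DatasetSlug;
  state: string; // two-letter state code
  kind: IntentKind;
}

// Runtime validation of the classifier's raw output: anything outside the
// whitelist is rejected. The LLM never produces SQL or a query string.
function validateIntent(raw: unknown): Intent | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (!DATASET_SLUGS.includes(r.dataset as DatasetSlug)) return null;
  if (!INTENT_KINDS.includes(r.kind as IntentKind)) return null;
  if (typeof r.state !== "string" || !/^[A-Z]{2}$/.test(r.state)) return null;
  return {
    dataset: r.dataset as DatasetSlug,
    state: r.state,
    kind: r.kind as IntentKind,
  };
}

// Hand-written, type-safe accessor stubs: the validated intent selects a
// resolver, so no LLM-produced string ever reaches the data layer.
function resolve(intent: Intent): number {
  const table: Record<IntentKind, (i: Intent) => number> = {
    "count": () => 42,       // placeholder per-state aggregate
    "per-capita": () => 0.7, // placeholder per-capita rate
  };
  return table[intent.kind](intent);
}
```

A malformed or out-of-whitelist classifier output simply validates to `null` and the query is refused, rather than being interpolated into anything executable.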
Full pipeline architecture and SQL-injection containment review: /docs/api#tag/Evidence-Query.
Ready for unlimited authenticated queries?
The demo is rate-limited to 10 queries / 24h per IP. Request an API key for unlimited use, full per-key rate limits, and the complete documented contract.
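The demo limit above could be enforced with a fixed-window counter per IP. This in-memory sketch is an assumption about the mechanism, not the deployed implementation (a real deployment would use a shared store so the window survives restarts and scales across instances):

```typescript
// In-memory fixed-window rate limiter: 10 queries per 24h per IP.
// Illustrative sketch only; production would back this with a shared store.
const LIMIT = 10;
const WINDOW_MS = 24 * 60 * 60 * 1000;

const windows = new Map<string, { start: number; count: number }>();

function allow(ip: string, now: number = Date.now()): boolean {
  const w = windows.get(ip);
  // No window yet, or the 24h window has elapsed: start fresh.
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(ip, { start: now, count: 1 });
    return true;
  }
  if (w.count >= LIMIT) return false; // over the demo limit
  w.count += 1;
  return true;
}
```

Authenticated API keys would bypass this counter and carry their own per-key limits instead.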
Compliance posture
We don’t sell rankings and don’t accept payment to move a provider up the list. For final hire decisions, verify licensing, insurance, and references directly with the applicable licensing or credentialing body.
No bulk-licensing source family is currently ingested for this vertical; hire-time verification still routes through the licensing or credentialing body named above.