No proprietary lock-in. Standard Postgres, standard Node, standard Python.
Fonteum's architecture is intentionally common-stack so an acquirer's infrastructure team can take ownership in days, not months. This page states the actual stack, the disaster-recovery posture, and the export path.
Current stack
Application layer. Next.js (App Router, Node 24 runtime) deployed on Vercel. Edge middleware + serverless functions. No proprietary Vercel-only APIs (Edge Config, KV, etc.) on the critical path; everything portable to any Node host (Fly, Railway, Cloudflare Workers, AWS Lambda + ALB).
Data layer. Supabase managed Postgres 16. Standard Postgres: no Supabase-specific extensions on critical tables. Row-level security policies are plain SQL and portable to any Postgres host (RDS, Cloud SQL, Crunchy Bridge).
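To illustrate the portability claim, here is a minimal sketch of a row-level-security policy written as plain SQL and applied through a standard DB-API driver. The table and column names (`owner_claims`, `owner_id`) and the `app.current_owner_id` setting are hypothetical illustrations, not the real schema:

```python
# Sketch only: an RLS policy in plain, portable SQL. Table, column, and
# setting names are hypothetical, not the production schema.
RLS_POLICY_SQL = """
ALTER TABLE owner_claims ENABLE ROW LEVEL SECURITY;
CREATE POLICY owner_claims_select ON owner_claims
    FOR SELECT
    USING (owner_id = current_setting('app.current_owner_id')::uuid);
"""

def apply_policy(conn) -> None:
    # Works through any standard Postgres driver (psycopg, pg8000);
    # no Supabase-specific API is involved, so the same DDL runs on
    # RDS, Cloud SQL, or Crunchy Bridge unchanged.
    with conn.cursor() as cur:
        cur.execute(RLS_POLICY_SQL)
    conn.commit()
```

Because the policy uses `current_setting` rather than a vendor auth helper, the session variable can be set by whatever application layer fronts the database.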
Ingestion layer. Python + SQL scripts run as scheduled jobs. Each ingestion script is framework-agnostic; reads from public CMS / FL DBPR / OIG endpoints, writes to Postgres via the standard pg driver. No orchestration framework lock-in (no Airflow / Dagster / Prefect dependency).
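The shape of one such framework-agnostic job can be sketched as follows. The endpoint URL, field names, and `providers` table are illustrative placeholders, not the real ingestion code:

```python
import json
import urllib.request

def fetch_records(url: str) -> list[dict]:
    # Pull a public JSON endpoint; CMS / FL DBPR / OIG sources publish
    # over plain HTTP(S), so no vendor SDK is needed.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def normalize(record: dict) -> tuple:
    # Pure transform from a source record to a providers-table row.
    # Field names here are hypothetical.
    return (record["npi"], record["name"].strip().title(), record["state"])

def upsert(conn, rows: list[tuple]) -> None:
    # Standard pg driver, standard SQL; any cron-style scheduler can
    # invoke this, so there is no orchestration-framework dependency.
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO providers (npi, name, state) VALUES (%s, %s, %s) "
            "ON CONFLICT (npi) DO UPDATE SET name = EXCLUDED.name, "
            "state = EXCLUDED.state",
            rows,
        )
    conn.commit()
```

The upsert is idempotent (`ON CONFLICT DO UPDATE`), which is what makes the job safe to re-run on any schedule.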
Static asset layer. Next.js static exports + SVG charts in `public/research/charts/`. CDN-distribution-agnostic.
Recovery objectives
RTO (Recovery Time Objective): <4 hours for full database export + redeploy on a fresh Postgres host. To be demonstrated in quarterly DR drills (results published below on completion).
RPO (Recovery Point Objective): <24 hours for snapshot-cadence data. Public-record source data is reconstructible from CMS / FL DBPR / OIG primary sources; the only persistent state with sub-24h RPO is owner-claim records (Postgres point-in-time recovery, 7-day window).
Backup posture. Supabase managed daily backups (7-day retention) + hourly WAL archiving (24h window). Customer-scoped audit-pack exports are deterministic from the public-source snapshots; no separate backup needed.
Acquirer portability path
Day 0 — review. Public-source data layer (CMS / FL DBPR / OIG) is reconstructible by anyone with public-internet access; no acquirer migration required for that tier.
Day 1-2 — Postgres export. `pg_dump` from Supabase → restore to the acquirer's Postgres host. ~14M rows total (provider records + provenance + snapshots). A single operator over a standard connection completes the export-restore cycle.
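The export-restore step reduces to two standard commands. A sketch that builds them (connection strings and the dump filename are placeholders):

```python
def dump_cmd(source_url: str, dump_file: str = "fonteum.dump") -> list[str]:
    # Custom-format dump: compressed, restorable in parallel, and
    # --no-owner/--no-privileges keep it portable across role setups.
    return ["pg_dump", "--format=custom", "--no-owner", "--no-privileges",
            f"--file={dump_file}", source_url]

def restore_cmd(target_url: str, dump_file: str = "fonteum.dump",
                jobs: int = 4) -> list[str]:
    # Parallel restore; at ~14M rows this fits comfortably inside the
    # <4h RTO on an ordinary connection.
    return ["pg_restore", "--no-owner", "--no-privileges",
            f"--jobs={jobs}", f"--dbname={target_url}", dump_file]
```

In practice each command is run with `subprocess.run(cmd, check=True)`, or pasted directly into a shell; nothing Supabase-specific is involved on either side.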
Day 2-3 — Node redeploy. The application is cloned from GitHub; environment variables are set on the acquirer host; Next.js build + deploy. Cloudflare DNS cutover takes ~5 minutes.
Day 3-5 — ingestion-job migration. Python scripts are deployed to the acquirer's scheduler and smoke-tested against the new host.
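A minimal post-migration smoke test can look like this. The row-count query and health check are injected as callables so the same script runs against any host; the row floor and the idea of a single `providers` count are illustrative assumptions:

```python
from typing import Callable

def smoke_test(
    count_providers: Callable[[], int],
    health_status: Callable[[], int],
    min_rows: int = 10_000_000,  # hypothetical floor below the ~14M expected
) -> list[str]:
    # Returns a list of failure messages; an empty list means the
    # migrated stack looks sane.
    failures = []
    rows = count_providers()
    if rows < min_rows:
        failures.append(f"row count {rows} below floor {min_rows}")
    status = health_status()
    if status != 200:
        failures.append(f"health endpoint returned {status}")
    return failures
```

The callables would typically wrap a `SELECT count(*)` through the pg driver and an HTTP GET against the redeployed app's health endpoint.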
Total: <1 week for a full takeover by an acquirer's IT team. No proprietary code, no vendor handoff, no licensed component to renegotiate.
Disaster-recovery drill schedule
Quarterly DR drill: full export-restore + redeploy + smoke-test. Results published on this page on completion.
Most-recent drill: TBD. The first DR drill is scheduled for Q3 2026, once the Sprint 1 catalogues lock and ingestion frequencies stabilize. Results will be posted here with the measured RTO (target ≤ 4h).
Drills are run against a clean acquirer-style host (a new Supabase project) so the procedure reflects the actual takeover scenario, not a same-environment refresh.
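Measuring a drill against the RTO target can be as simple as timing the named steps. The step functions below are stand-ins for the real export, restore, redeploy, and smoke-test procedures:

```python
import time
from typing import Callable

RTO_LIMIT_S = 4 * 3600  # the <4h target stated in the recovery objectives

def run_drill(steps: list[tuple[str, Callable[[], None]]]) -> dict:
    # Runs each named step in order, recording per-step and total
    # elapsed seconds and whether the drill met the RTO target.
    timings: dict[str, float] = {}
    start = time.monotonic()
    for name, step in steps:
        t0 = time.monotonic()
        step()
        timings[name] = time.monotonic() - t0
    total = time.monotonic() - start
    return {"steps": timings, "total_s": total, "met_rto": total <= RTO_LIMIT_S}
```

Publishing the returned timings per quarter gives the measured-RTO figures this page commits to posting.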
Compliance posture
We don’t sell ranking and don’t accept payment to move a provider up the list. For final hire decisions, verify licensing, insurance, and references directly with the applicable licensing or credentialing body.
No bulk-licensing source family is currently ingested for this vertical. Hire-time verification still routes through the applicable licensing or credentialing body.