Today's snapshot: 113,537 providers tracked
fonteum
DATA · MAY 8, 2026
Methodology

How we know what we know.

“Every number on this site must come from a real source. That's the whole product.”
— Francis Po, Founder
Methodology version: v1.3
Last reviewed: 2026-05-08
Maintained by: Data Lead

Methodology v1.3 · Last revised May 8, 2026 · Changelog · Download (print → save as PDF)

Aligned with W3C PROV-DM (Entity / Activity / Agent decomposition) + the FAIR data principles (Findable · Accessible · Interoperable · Reusable).

This page covers the network-wide methodology. Every research study additionally publishes a study-specific methodology block naming its primary sources, scope window, and editor of record — see the research index for the published catalog.

METHODOLOGY

How a listing becomes a data point.

Every record in our network passes through five stages. No pay-to-play. No sponsored placements. No ghostwritten reviews.

  • 01 · Sourced
  • 02 · Checked
  • 03 · Rated
  • 04 · Tracked
  • 05 · Published
01

Sourced

From Google Business Profile sync, public regulatory datasets (FMCSA, NHTSA, BLS, CDC), and human editorial research.

GBP · FMCSA · NHTSA · BLS · EDITORIAL
02

Checked

Cross-referenced against state licensing boards for credentialed verticals — legal and medical businesses must hold active licensure.

03

Rated

Ratings synced from Google. Minimum review volume thresholds applied before a listing earns a rating tier.

4.79 AVG
04

Tracked

Changes tracked over time. Anomalies flagged. Corrections accepted from the public.

05

Published

Listings publish to category directories. Research studies publish to fonteum.com/research.

01 · Data sourcing

Where the data comes from.

Every published figure starts from a public, reproducible federal source. Our primary inputs are CMS, HRSA, and BLS public datasets — the canonical authorities for U.S. healthcare provider identity, Medicare facility quality, shortage-area exposure, and healthcare workforce context.

Examples: CMS NPPES for the active provider population, CMS PECOS for Medicare enrollment status, CMS Care Compare for facility-level quality, HRSA HPSA for shortage-area context, BLS OEWS for healthcare workforce wages, BEA Regional for healthcare per-capita economic context.

  • CMS NPPES + CMS PECOS (provider identity + Medicare enrollment)
  • CMS Care Compare (facility-level quality across 8 modules)
  • HRSA HPSA (shortage-area exposure)
  • BLS OEWS + BEA Regional (healthcare workforce + economic context)
  • State medical / nursing boards (Sprint 3 ingestion in progress for CA, FL, AZ, LA)
  • Human editorial review (deduplication, specialty / facility-class assignment)
02 · Coverage scope

What we include, and what we don't.

We track healthcare providers and Medicare-certified facilities with a physical U.S. presence across two source families: (a) Direct Provider Data — CMS NPPES, CMS PECOS, CMS Care Compare; (b) Healthcare Workforce & Economic Context — HRSA HPSA, BLS OEWS, BEA Regional. A record enters the catalog when it meets three tests: a stable federal identifier (NPI / CCN / state license), a publicly accessible source record, and at least one cross-source reconciliation.

We explicitly exclude non-healthcare provider records and entities outside the federal-source coverage we ingest. Known gaps — recently issued NPIs not yet present in the public NPPES snapshot, and providers at facilities not yet eligible for Care Compare reporting — are documented in each study.

03 · Update cadence

How often we refresh.

Listings refresh on a rolling 30-day cadence; ratings and review counts refresh weekly for the highly rated segment (rating ≥4.5 with ≥50 reviews). Research-grade snapshots — the numbers we cite in studies and on the brand hub — are dated and versioned so readers can reproduce any cited stat as of its publication date.
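As a sketch, the cadence rule above can be written as a single predicate. The interface and function names here are illustrative, not Fonteum's actual code:

```typescript
// Illustrative sketch of the refresh-cadence rule (names are assumptions).
interface ListingRatingSnapshot {
  rating: number;       // Google-synced average rating
  reviewCount: number;  // total public review count
}

// A listing joins the weekly refresh segment only when it is highly rated:
// rating >= 4.5 AND at least 50 reviews. Everything else stays on the
// rolling 30-day cadence.
function refreshCadenceDays(s: ListingRatingSnapshot): number {
  const isHighlyRated = s.rating >= 4.5 && s.reviewCount >= 50;
  return isHighlyRated ? 7 : 30;
}
```

Note that the rule is conjunctive: a 4.9-rated listing with only 10 reviews still refreshes on the 30-day cadence.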

04 · Credential checks

What "highly rated" means on Fonteum.

For healthcare provider records normalized across federal CMS and healthcare workforce sources, the credential status we surface is what those public sources themselves publish — identity and enrollment records (CMS NPPES, CMS PECOS), quality-and-outcomes records (CMS Care Compare modules), and workforce records (HRSA HPSA, BLS OEWS) — not credentials we independently audit. We do not assert credentials we cannot trace to a federal public source.

We do not claim to independently vet provider quality. We claim to publish what independent federal sources have already measured, and to make those sources auditable.

05 · Corrections policy

How to report an error — and what we do with it.

Business owners, journalists, and readers can report an error to corrections@fonteum.com. We acknowledge every report within two business days and publish the resolution — accepted, rejected with reasoning, or in-review — on a public corrections log.

Substantive research corrections update the affected article and append a dated note at the top. Minor factual corrections to directory listings update the listing and log the change. No correction is silent.

06 · Editorial standards

No pay-to-play.

Rankings are derived from public ratings and credentials; they are not influenced by any commercial relationship. When a business pays to enhance a directory listing, the enhancement is disclosed on the listing page. Research content is never sponsored.

We do not ghostwrite reviews, host sponsored placements disguised as editorial, or adjust rankings in response to claim-status upgrades. A paid listing and a highly rated listing are different things, and we show both, clearly.

07 · Research methodology

Sample sizes, methods, disclosed limits.

Each study declares its sample size, inclusion criteria, temporal coverage, and statistical methods up front. Datasets are published alongside the article as downloadable CSVs so any finding can be reproduced or reinterpreted.

Where a study uses correlations or modeled estimates, the methods section names the estimator, the assumptions, and the magnitude of uncertainty. When we don't know, we say so.

08 · Source-pack ingestion methodology

How a public dataset becomes a wired source.

Each registered source family — CMS NPPES, CMS PECOS, CMS Care Compare, BLS OEWS, HRSA HPSA, U.S. Census Bureau population estimates — passes through the same five-stage ingestion pipeline before any field it produces becomes visible on a public profile or a research snapshot.

Stage 1, manifest authoring: a versioned manifest declares the source's authority, refresh cadence, distribution license, and the explicit set of fields we intend to surface. The manifest is reviewed against the §125 restricted-source list and the §94 displayability rules. Stage 2, dry-run pilot: a small geographic or specialty-bounded run validates field coverage and match rates against a held-out sample. Stage 3, confidence calibration: the per-field match-rate and confidence distribution are reviewed; low-confidence rows are excluded from the displayable set. Stage 4, registry write: the source row is committed to src/lib/brand/sources-registry.ts with its tier, status, refresh cadence, and explicit limitations. Stage 5, public surfacing: the SourceChip and ProvenanceCard render the source per the four-field contract documented at /data-platform/schema.

  • Stage 1 — versioned manifest + license + field list
  • Stage 2 — bounded dry-run + match-rate validation
  • Stage 3 — confidence calibration + low-confidence exclusion
  • Stage 4 — registry write to src/lib/brand/sources-registry.ts
  • Stage 5 — public surfacing via SourceChip + ProvenanceCard
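The Stage-1 manifest can be sketched as a type plus its review gate. These field names are assumptions for illustration, not the actual types in sources-registry.ts:

```typescript
// Hypothetical shape of a Stage-1 source manifest (field names assumed).
type SourceTier = "tier-1-research-only" | "tier-2-profile-enrichment";
type SourceStatus =
  | "live" | "research-only" | "pending-records-request" | "deferred";

interface SourceManifest {
  slug: string;             // e.g. "cms-nppes"
  authority: string;        // publishing agency
  refreshCadence: "daily" | "weekly" | "monthly" | "quarterly" | "annual";
  license: string;          // distribution license
  fields: string[];         // explicit allow-list of fields to surface
  tier: SourceTier;
  status: SourceStatus;
}

// Stage-1 gate sketch: a manifest must declare a license and at least one
// field before the dry-run pilot can start.
function manifestAccepted(m: SourceManifest): boolean {
  return m.license.length > 0 && m.fields.length > 0;
}
```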
09 · Entity matching

How we connect a profile to a source-pack record.

Entity matching is a deterministic pipeline today, with probabilistic scoring layered on top for the ambiguous tail. The deterministic stage matches on a stable identifier when the source provides one — NPI for NPPES/PECOS, license number for state contractor boards, CMS Certification Number for CMS Care Compare. Where no stable identifier exists, the probabilistic stage scores candidate matches on (name, address, locality, taxonomy or specialty) using normalized string distance plus geocode proximity.

The §95 confidence tier is assigned at this stage. High-confidence matches require a stable identifier match plus address agreement at the locality level. Medium-confidence matches require multi-field agreement without an identifier. Low-confidence matches do not ship to the public surface; they remain visible only in operator tooling and are flagged for manual review.

  • Deterministic match on stable identifier (NPI / license / CCN)
  • Probabilistic fallback on (name, address, taxonomy)
  • §95 confidence tier assigned at match time
  • Low-confidence matches stay internal — never surface publicly
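The two-stage match can be sketched as below. The weights, the similarity measure, and all names are invented for illustration; the real pipeline's scoring is not published:

```typescript
// Illustrative two-stage entity match: deterministic identifier first,
// probabilistic field scoring as the fallback.
interface CandidateRecord {
  npi?: string;     // stable identifier, when the source provides one
  name: string;
  locality: string;
}

// Very rough normalized string similarity: shared-prefix ratio.
function similarity(a: string, b: string): number {
  const x = a.toLowerCase().trim();
  const y = b.toLowerCase().trim();
  if (x === y) return 1;
  let i = 0;
  while (i < Math.min(x.length, y.length) && x[i] === y[i]) i++;
  return i / Math.max(x.length, y.length);
}

function matchScore(profile: CandidateRecord, source: CandidateRecord): number {
  // Deterministic stage: a stable-identifier match wins outright.
  if (profile.npi && source.npi && profile.npi === source.npi) return 1;
  // Probabilistic fallback: weighted agreement on the remaining fields.
  return 0.6 * similarity(profile.name, source.name)
       + 0.4 * similarity(profile.locality, source.locality);
}
```

A production matcher would use a real edit-distance or token-set metric plus geocode proximity, as the text describes; the shape of the decision is the point here.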
10 · Provenance field architecture

Why every field carries four pieces of metadata.

The Fonteum provenance contract is a four-field per-(provider, source, field) row: source name, last-checked ISO date, confidence tier, and display permission. The contract aligns with W3C PROV-DM's separation of Entity / Activity / Agent and with the FAIR data principles' Findable + Accessible + Interoperable + Reusable axes — every published field is findable via the source registry, accessible at a stable URL, interoperable through the JSON-LD Dataset emission on each research study, and reusable under the published attribution license.

The wire shape is documented at /data-platform/schema. The visible UI shape is documented at /data-provenance, where the same contract renders the ProvenanceCard on every public profile. The launch gate enforces that no public field renders without a matching provenance row.

  • Four-field contract: source · last-checked · confidence · display permission
  • W3C PROV-DM: Entity / Activity / Agent decomposition
  • FAIR principles: Findable · Accessible · Interoperable · Reusable
  • Wire shape at /data-platform/schema · UI shape at /data-provenance
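The four-field contract reads naturally as a type. Property names here are illustrative; the authoritative wire shape lives at /data-platform/schema:

```typescript
// Sketch of the per-(provider, source, field) provenance row (names assumed).
type ConfidenceTier = "high" | "medium" | "low";

interface ProvenanceRow {
  source: string;             // e.g. "CMS NPPES"
  lastChecked: string;        // ISO date, e.g. "2026-05-08"
  confidence: ConfidenceTier;
  displayAllowed: boolean;    // display permission
}

// Default-render sketch: only permitted, high-confidence rows render
// without hedging copy (medium renders hedged; low never renders).
function rendersByDefault(r: ProvenanceRow): boolean {
  return r.displayAllowed && r.confidence === "high";
}
```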
11 · Confidence scoring

How a row earns the right to render publicly.

Confidence scoring is the hand-off between matching (which says 'this profile is probably this source row') and display (which decides whether to show it). The §95 tier is high · medium · low. High requires a stable-identifier match plus address agreement. Medium requires multi-field agreement without a stable identifier. Low covers the residual ambiguity tail. Public surfaces render high-confidence rows by default; medium-confidence rows render with explicit hedging copy where applicable; low-confidence rows do not render at all.

The confidence threshold is not a learned model today; it is a deterministic rule documented in the §95 matching plan and reviewed quarterly. A learned-model approach is part of the planned roadmap once the per-source false-positive baselines are calibrated against a labeled gold set. The current state is named explicitly as 'deterministic threshold + manual review of the medium tier' so a citing reader has the actual posture, not a future-state aspiration.

  • Tier high — stable identifier + address agreement
  • Tier medium — multi-field match, no stable identifier
  • Tier low — residual ambiguity; not displayed
  • Threshold is deterministic + manually reviewed today; learned-model approach is on the roadmap
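Under one literal reading of the three tier rules above, the deterministic assignment looks like this (evidence field names are assumptions):

```typescript
// Illustrative §95 tier assignment, read literally from the stated rules.
interface MatchEvidence {
  stableIdentifierMatch: boolean; // NPI / license / CCN agreed
  addressAgreement: boolean;      // address agreement at the locality level
  agreeingFieldCount: number;     // name, address, taxonomy, ...
}

type Tier = "high" | "medium" | "low";

function assignTier(e: MatchEvidence): Tier {
  // High: stable identifier AND address agreement.
  if (e.stableIdentifierMatch && e.addressAgreement) return "high";
  // Medium: multi-field agreement without a stable identifier.
  if (!e.stableIdentifierMatch && e.agreeingFieldCount >= 2) return "medium";
  // Low: the residual ambiguity tail — never displayed publicly.
  return "low";
}
```

The multi-field threshold of 2 is an assumption; the published rules give the tier definitions but not the exact field count.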
12 · Historical change detection

What we version, and what we don't.

We version the value, not the surrounding metadata. A renamed clinic gets a new `name` change record; a license-expiration update gets a new `state_license_expiration` change record. The change shape is documented at /data-platform/schema and includes `provider_id`, `field`, `previous_value`, `new_value`, `source`, `observed_at`, and `snapshot_id`.

Change feeds are emitted per snapshot and tied to the originating source's refresh cadence. CMS NPPES runs daily. CMS PECOS runs weekly. HRSA HPSA runs monthly. CMS Care Compare runs quarterly. BLS OEWS and BEA Regional refresh annually. The observed_at field is the timestamp our pipeline reconciled against the source — not the snapshot date, which can lag.
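The change-record fields listed above translate directly into a type, and the per-snapshot feed is conceptually a field-map diff. This is a minimal sketch, not the production emitter:

```typescript
// Change-record shape per the fields named in the text.
interface ChangeRecord {
  provider_id: string;
  field: string;
  previous_value: string | null;
  new_value: string | null;
  source: string;
  observed_at: string;  // when the pipeline reconciled against the source
  snapshot_id: string;
}

// Minimal sketch: diff two field maps for one provider into change records.
function diffFields(
  providerId: string, source: string, observedAt: string, snapshotId: string,
  prev: Record<string, string>, next: Record<string, string>,
): ChangeRecord[] {
  const changes: ChangeRecord[] = [];
  const fields = Array.from(
    new Set([...Object.keys(prev), ...Object.keys(next)]));
  for (const field of fields) {
    const before = prev[field] ?? null;
    const after = next[field] ?? null;
    if (before !== after) {
      changes.push({
        provider_id: providerId, field, previous_value: before,
        new_value: after, source, observed_at: observedAt,
        snapshot_id: snapshotId,
      });
    }
  }
  return changes;
}
```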

13 · Reproducibility

How a researcher could re-run any published study.

Every published research study under /research ships with the input dataset (CSV at /research/data/<slug>.csv), the temporal-coverage range, the inclusion criteria, the sample-size declaration, and the methodology block. A motivated researcher could pull the same source dataset from its origin (CMS Care Compare's data.cms.gov surface, the CMS NPPES NPI Registry API, the HRSA HPSA download, etc.) and reproduce the per-state aggregates we publish.

Where a study uses our internal indexed dataset (e.g. dermatology access density), the data file we ship is the same file that produced the published numbers; aggregations are simple group-bys documented in the technical appendix on the study page. Where a study aggregates a federal source (e.g. the CMS Care Compare snapshots), our aggregation script lives in scripts/research/ and the data file we ship matches what the script emits.

  • Every study ships its CSV at /research/data/<slug>.csv
  • Methodology block declares sample size + inclusion + temporal coverage
  • Federal-source studies → aggregation script in scripts/research/
  • Indexed-dataset studies → simple group-by documented in technical appendix
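The "simple group-by" that produces a per-state aggregate can be sketched as below (a per-state mean rating is used as the example; the row shape is an assumption):

```typescript
// Sketch of a per-state group-by aggregation of the kind documented in a
// study's technical appendix.
interface FacilityRow {
  state: string;
  rating: number;
}

function meanRatingByState(rows: FacilityRow[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const r of rows) {
    const acc = sums.get(r.state) ?? { total: 0, n: 0 };
    acc.total += r.rating;
    acc.n += 1;
    sums.set(r.state, acc);
  }
  const means = new Map<string, number>();
  sums.forEach(({ total, n }, state) => means.set(state, total / n));
  return means;
}
```

Running the same group-by over the shipped CSV is exactly the reproduction path the section describes.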
14 · Citation guidance

How to cite Fonteum Research.

Each individual research study page carries a copy-paste citation block at /research/<slug>#cite. The press kit at /press surfaces the same block for the top-3 ready-to-cite picks plus an AP-short variant. The minimum acceptable citation includes the study title, the publication month + year, and a link to the canonical study URL.

Datasets are free to cite under attribution. The recommended citation form is: Fonteum Research, "<Study Title>," <Month YYYY>. https://fonteum.com/research/<slug>. Dataset: https://fonteum.com/research/data/<slug>.csv. APA, BibTeX, and the AP-style short form are surfaced on each study page under the §165 citation upgrade.

  • APA: per-study /research/<slug>#cite
  • AP short: (Fonteum Research, <Month YYYY>)
  • BibTeX: per-study /research/<slug>#cite
  • Dataset: /research/data/<slug>.csv
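The recommended citation template above is mechanical enough to render programmatically. The function name is hypothetical; the template and URLs follow the text:

```typescript
// Renders the recommended citation form from the section above
// (function name is an assumption, not a Fonteum API).
function recommendedCitation(
  title: string, month: string, year: number, slug: string,
): string {
  return `Fonteum Research, "${title}," ${month} ${year}. ` +
         `https://fonteum.com/research/${slug}. ` +
         `Dataset: https://fonteum.com/research/data/${slug}.csv.`;
}
```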
15 · What Fonteum does NOT claim

The disclaimers we put in writing.

Counts published on this site describe the Fonteum indexed provider dataset — currently 26,417 healthcare-confirmed providers across 6 federal public-record sources — not a representative sample of the U.S. healthcare provider data landscape. Where a study cites a network-level number, the source is the snapshot file shipped with the codebase. Where we don't have an aggregate, we mark the section as deferred rather than estimate it.

We do not own, manage, or moderate the Google reviews referenced across our studies. Reviews originate on Google Business Profiles and are owned by Google and the original reviewers. Our role is aggregation and presentation under a published methodology — not authorship.

We do not change organic ranking in exchange for payment. Featured slots, where they exist, are clearly labeled as Featured. Claiming a listing transfers control of the published profile to the owner; it does not alter ranking signals.

We do not apply the banned v-word adjective family to listings in user-facing copy unless an explicit credential check from a public licensing source supports it. SOP §10 prohibits the general-purpose adjective network-wide.

  • Indexed dataset, not a national survey
  • We don't own or moderate Google reviews
  • No pay-to-rank in organic placement
  • No unsupported v-word adjective in user-facing copy
Changelog · methodology updates

What’s changed in this methodology, by date.

These are methodology updates, not corrections. They name what the methodology now requires that it didn’t require before. Corrections to specific published figures are logged separately on the corrections log.

  1. 2026-05-03

    SOP §137

    UX / surface

    Data Graph Visual v2 — moat as a one-second picture.

    Homepage and /data-provenance now render a unified `DataGraphV2` schematic — three columns (Sources → Pipeline → Surfaces) at desktop, stacked vertically on mobile. Source nodes are real `<Link>` elements pointing at `/sources/[slug]`, color-coded by status (live / research-only / pending). Counters are derived live from `getNetworkStats()` + `getAllStudies()` + `SOURCES.length` — no hardcoded literals.

Why: Pre-§137, the homepage carried a scroll-narrative pipeline that predated the source-provenanced moat (it referenced GBP / FMCSA / NHTSA / BLS in pre-§120 vocabulary), and /data-provenance's hero schematic was abstract — no source names visible, no clickability. The audit recommended a stronger graph that makes the moat readable in one second.

    See it in practice: Data Graph →

  2. 2026-05-03

    SOP §136

    Doctrine

    Sources Library v2 — status field + restricted-sources doctrine.

    Source-registry types extended with a `status` enum (live / research-only / pending-records-request / deferred) and two new tier values (`pending-manual`, `first-party-research`). Each source page now renders a status chip + ToS-and-usage-notes section + a worked sample-provenance line. /sources adds a 'Sources we do not use' rail with the four restricted/no-go datasets (NMLS, state bars, ABMS / CertiFacts, Google Places / GBP backfill) and the honest reason per item.

Why: §125 v1 conflated tier (display contract) with deployment status. AZ ROC + CSLB were tagged Tier-2 even though no public-records request had been processed; their reality is 'manifest registered, awaiting CSV'. Surfacing pending and explicitly-not-used sources is a stronger trust signal than implying everything is live.

    See it in practice: Source library →

  3. 2026-05-03

    SOP §135

    UX / surface

    /press rebuilt as a journalist + data-user landing page.

    Press kit now carries a 'What we do not claim' doctrine block, a featured-datasets list (6 studies with source / snapshot / row count / Limitations deep-link), a copy-paste citation template, 5 story angles tied to specific studies, and a /data-platform cross-link tile. Counters derive live from `getNetworkStats()` — never hardcoded on the page. Doctrine: no fake press-mention logo strip, no fake customer / partner claims, no headshot placeholder.

    See it in practice: Press kit →

  4. 2026-05-03

    SOP §134

    UX / surface

    /data-platform — B2B / data-product surface.

New /data-platform page surfaces every dataset in a live catalog (sourced from the research registry) plus four explicitly labeled 'concept' B2B export scopes. Includes a 'What we do not provide' guardrail block (no pre-screened provider lists, no restricted-source resale, no patient/customer data, no Google Places backfill, no paid API). Replaces the legacy /data press kit; /datasets aliases redirect here.

Why: Journalists, investors, and B2B prospects landing without a vertical-specific intent had no surface that explained what Fonteum is as a data company. /research answers vertical questions; /data-platform answers the meta question.

    See it in practice: Data platform →

  5. 2026-05-03

    SOP §133

    UX / surface

    Brand chart palette + StatTable typography polish.

    All eight Sprint-1 + CMS Care Compare research-chart SVGs were regenerated with a unified brand palette — bars in `#0F766E` (brand teal), bottom-rated emphasis in `#b91c1c` (warn). StatTable headers carry an explicit `ui-monospace` font-family + tightened weight, and Highest/Lowest emphasis pills now use brand teal + paper tones. No data changed; only chart color tokens + table typography.

    See it in practice: Research →

  6. 2026-05-03

    SOP §132

    UX / surface

    /directories Coverage Atlas — explicit 4-status taxonomy per cell.

    Coverage Atlas grid now resolves every (vertical × source-family) cell into one of four statuses (`live`, `research-snapshot`, `pending-pack`, `not-applicable`) instead of a binary check/no-check. Status legend visible above the grid; each cell carries a status badge + tooltip + (where applicable) a vertical/source link.

    See it in practice: Coverage →

  7. 2026-05-03

    SOP §131

    UX / surface

    Research study template reskinned to brand tokens.

    Study pages (`/research/[slug]`) now render in the brand-token palette (Fraunces hero, paper/cream cards, mist borders, brand-teal accents on charts and CTAs). Citation aside, methodology accordion, FAQs, related-studies block, and chart figures all share a unified visual register. StatTable wrapper gained a horizontal-scroll affordance for narrow viewports.

    See it in practice: Research →

  8. 2026-05-03

    SOP §129

    UX / surface

    /directories repositioned as Coverage / Network Map.

    The /directories page is no longer a flat list of Fonteum-operated directories. Verticals are now grouped by source family (Healthcare graph, Trades graph, Care/Research graph, Indexed coverage). Each card carries a status chip — `live` when source-pack writes are active, `pending` when only the manifest is registered. The grouping is derived live from the §125 sources registry.

    See it in practice: Coverage page →

  9. 2026-05-03

    SOP §128

    Schema

    Single canonical data-snapshot date across the brand hub.

    BrandNav, /press factsheet, /data-provenance, footer, and research aggregates now all read from a single `DATA_SNAPSHOT_DATE_ISO` constant. Pre-§128 the date drifted across four different literals (April 24, April 25, May 1, May 3), which made the brand hub feel un-versioned. Refresh procedure: bump the constant.

    See it in practice: Source →

  10. 2026-05-03

    SOP §126

    Research rule

    Research pages carry an explicit AI-citation summary + Limitations panel.

    Every research study page now renders a `What this dataset covers / does NOT cover` block above the methodology section, plus a Limitations section before the methodology. Dataset JSON-LD only emits when downloadable data exists, with `temporalCoverage` and `spatialCoverage` populated. Doctrine fallback applies when an individual study hasn't authored explicit limitations yet.

    See it in practice: Research newsroom →

  11. 2026-05-03

    SOP §125

    Doctrine

    Public source library at /sources.

    Every public-record source Fonteum cites now has a stable detail page at /sources/[slug]. Each source page documents tier (Tier-1 research-only vs Tier-2 profile-enrichment), refresh cadence, fields used, write-locked fields, and the doctrine sentence the source family carries. The launch gate fails if any entry ships without explicit limitations + a doctrine line.

    See it in practice: Source library →

  12. 2026-05-03

    SOP §122

    Display rule

    Profile provenance reveal cards on listing detail pages.

    Source-backed fields on individual provider profiles now render through a shared `ProvenanceCard` component carrying a SourceChip + `What this means` + `What this does not mean` panels per source family. The non-endorsement sentence renders inline next to every cited value, never tucked away in a footer.

  13. 2026-05-03

    SOP §121

    UX / surface

    /data-provenance upgraded to the public Data Graph page.

    /data-provenance now documents the 7-stage pipeline (Sources → Source pack → Ingestion → Entity match → Field provenance → Display → Research / Verticals), the 4 source-family clusters, and per-field display rules. Carries the source-counters that match the homepage so the numbers can't drift.

    See it in practice: Data Graph →

  14. 2026-05-03

    SOP §120

    UX / surface

    Homepage repositioned as the source-provenanced provider graph.

    The Fonteum homepage now leads with `Local provider data, traceable to its source.` instead of a flat directory pitch. Real-data counters (active businesses, registered sources, provenance field rows) replace any estimated network metrics. Every fact Fonteum displays cites a source, last-checked date, and limitations sentence — the visible moat replaces the link-farm framing.

  15. 2026-05-03

    SOP §117

    Data snapshot

    CMS Care Compare research bundle (home health + hospice).

    Two more Tier-1 research snapshots published from the CMS Care Compare cluster: home-health quality by state and hospice provider availability by state. Source/date/limitations triplet on every cited field; Fonteum does not independently rate, inspect, verify, endorse, or guarantee any agency or hospice.

  16. 2026-05-03

    SOP §116

    Data snapshot

    Dialysis facility research snapshot published.

    First state-level snapshot of CMS Care Compare dialysis facility quality data. Tier-1 research-only — no facility profile writes; data appears in /research aggregates only.

  17. 2026-05-03

    SOP §115

    Data snapshot

    Nursing-home research snapshot published.

    First state-level snapshot of CMS Care Compare Nursing Home Provider Information master dataset. Special Focus Facility status reported at state-aggregate level only — never on individual facility profiles. CMS ratings appear as CMS ratings.

  18. 2026-05-03

    SOP §114

    Research rule

    CMS Care Compare display rules — research vs profile separation.

    Care Compare ratings are cited as CMS-published ratings, with the source URL + last-checked date + limitations sentence. State-level aggregates (mean rating, share-of-stars) NEVER attach to individual facility profiles — they render on /research only. Special Focus Facility status, fines, and abuse-icon flags are write-locked: captured to provenance, never surfaced.

  19. 2026-05-03

    SOP §112

    Display rule

    Florida DBPR contractor-license display rules.

    FL DBPR state-license fields render with classification + status + expiration, alongside a `confirm with the state board` qualifier. Bond, workers-comp, insurance, and disciplinary-history fields are captured to provenance but write-locked pending operator copy review.

  20. 2026-05-03

    SOP §108B

    Display rule

    CMS PECOS Medicare-enrollment indicator display rules.

    PECOS-derived `Medicare-billing-active` indicator renders only on medical-archetype profiles. Display copy frames absence-from-PECOS as a non-negative — providers can be high-quality and not enrolled in Medicare. PAC ID and Enrollment ID are captured to provenance for audit but never rendered.

  21. 2026-05-03

    SOP §95

    Display rule

    NPPES NPI display rules + non-endorsement doctrine.

    Source-backed NPI, taxonomy code, and taxonomy description render on dermatology + chiropractic profiles only when match confidence ≥ 0.75. Each value carries a `Source: CMS NPPES · Last checked YYYY-MM-DD` chip. Provider credential strings from NPPES are captured to provenance but write-locked. The non-endorsement sentence (`Fonteum does not independently rate, inspect, verify, endorse, or guarantee any provider`) renders inline next to every cited value.

  22. 2026-05-03

    SOP §94

    Schema

    Source-provenance schema codified.

    Per-(business, source, field) provenance rows are written to the warehouse with `display_allowed`, `last_checked`, `confidence`, and `source URL`. The display layer reads only through a `provider_field_displayable` view that filters confidence ≥ 0.75, freshness ≤ 180 days, `display_allowed = true`, and source `is_active = true`. Anything failing any of those four filters is captured but not rendered.

    See it in practice: Provenance register →
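The four §94 display filters can be collapsed into one predicate. In the source the filter is a warehouse view (`provider_field_displayable`); this TypeScript translation is a sketch with assumed field names:

```typescript
// Sketch of the §94 displayability filters: confidence >= 0.75,
// freshness <= 180 days, display_allowed = true, source is_active = true.
interface WarehouseProvenanceRow {
  display_allowed: boolean;
  confidence: number;       // 0..1 match confidence
  last_checked: string;     // ISO date
  source_is_active: boolean;
}

const FRESHNESS_LIMIT_DAYS = 180;
const CONFIDENCE_FLOOR = 0.75;
const MS_PER_DAY = 86_400_000;

function displayable(row: WarehouseProvenanceRow, today: Date): boolean {
  const ageDays =
    (today.getTime() - new Date(row.last_checked).getTime()) / MS_PER_DAY;
  return row.display_allowed
      && row.confidence >= CONFIDENCE_FLOOR
      && ageDays <= FRESHNESS_LIMIT_DAYS
      && row.source_is_active;
}
```

A row failing any single filter is captured to the warehouse but never rendered, matching the "captured but not rendered" rule above.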

Every entry above ships at a known SOP wave. We do not invent historical methodology versions; the changelog starts at the point where the source-provenance architecture went live.

Apply this methodology

See the methodology at work in published studies.

Every study below cites the same data source, the same coverage scope, and the same update cadence described above.

  • Directory quality benchmark

    Network coverage, rating, review depth, category breadth — the four signals worth measuring.

  • Business profile usefulness

    What signals make a Google Business Profile decision-useful within the Fonteum indexed dataset.

  • Owner-claim readiness

    Where claiming a directory listing changes the most for a business owner, by category.

  • All studies

    Browse the full research hub — studies, downloadable datasets, and the corrections log.

Methodology in practice

Where the same methodology ranks specific city directories.

Each city directory below uses the public ranking signals described above — review volume, rating thresholds, license references where applicable — applied to a specific (vertical, city) pair.

  • Dermatologists in New York, NY

    Medical/wellness archetype with credential cross-reference.

Found a number that looks wrong?

Tell us. Every correction is logged in public.

corrections@fonteum.com · We acknowledge every report within two business days.

See also
  • /data-provenance → The provider graph — pipeline, source-family clusters, field-level provenance examples, display rules.
  • /sources → Source library — every dataset Fonteum cites, with tier, cadence, fields used, and limitations.
  • /sources/identity-graph → CCN/NPI/TIN/DEA crosswalk coverage matrix + Sprint 2-4 identifier roadmap.
  • /corrections-log → Public log of accepted corrections + a separate methodology-updates index pointing back here.
  • /editorial-policy → Independence, sourcing, conflicts, retractions, and the doctrine block (no rate / inspect / verify / endorse / guarantee).

Compliance posture

We don’t sell ranking and don’t accept payment to move a provider up the list. For final hire decisions, verify licensing, insurance, and references directly with the applicable licensing or credentialing body.

No bulk-licensing source family is currently ingested for this vertical. Hire-time checking still routes through the body named above.

Methodology · Corrections log · Editorial policy

fonteum

Healthcare provider data, traced to source.


RESEARCH

  • Research hub
  • Data platform
  • For health-tech
  • Pricing
  • Press kit

NETWORK

  • Coverage
  • Healthcare graph

ABOUT

  • Mission
  • Methodology
  • Editorial policy
  • Corrections log
  • Security
  • SLA
  • Support
  • Refresh cadence
  • Terms
  • Contact

SUBSCRIBE

The monthly research digest. One email, first of each month. Unsubscribe anytime.


© 2026 FONTEUM RESEARCH · DATA SNAPSHOT MAY 8, 2026 · BUILT WITH CARE

  • X
  • LINKEDIN
  • PRESS