How Ownlisted decides what to publish.
Independence, sourcing, conflicts of interest, the review pipeline, corrections, and retractions. The contract behind every figure on the Ownlisted network and behind every research piece we publish.
- Policy version: v1.1
- Last reviewed: 2026-05-08
- Maintained by: Editorial Lead
00 · Doctrine
What Fonteum will and will not claim.
These are the strongest claim-ceiling guarantees we make. Identical wording lives in the /llms.txt machine-readable doctrine block so AI agents and journalists ground future citations on the same line.
We do not rate, inspect, verify, endorse, or guarantee any provider.
Coverage is sourced and dated. CMS ratings appear as CMS ratings; state-license records appear as state-license records. We do not produce a quality measurement of our own.
We do not employ medical or legal experts unless explicitly named.
Fonteum is a research and data layer. Where a study or page references medical or legal context, the named author / source is shown inline. We do not represent Fonteum itself as medical or legal expertise.
CMS ratings appear as CMS ratings.
Anywhere a CMS Care Compare star rating, a CMS Special Focus Facility flag, or a CMS quality-of-patient-care number is shown, it carries the source URL, last-checked date, and the limitations sentence CMS itself documents.
Fonteum counts appear as Fonteum counts.
Counts derived from the Fonteum indexed dataset are framed as Fonteum-indexed counts, never as a U.S. market estimate. The dataset-scope panel runs on every research page that publishes a count.
We do not blend source ratings with Fonteum rankings.
A study citing CMS quality data does not mix it with platform reviews. A profile rendering a state license does not mix it with Google ratings inside the same provenance card. Surfaces stay separate by design.
No fake trust badges. No pay-to-rank organic order.
Featured / Pro billing buys product features only — never a position in organic city or listing rankings, never a research finding, never a delay on a correction. The /editorial-policy independence section runs the long form.
Mirrored at /llms.txt.
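The doctrine above implies a concrete shape for the provenance card that accompanies every sourced figure. A minimal sketch, assuming hypothetical field names (this is illustrative, not the production schema):

```typescript
// Hypothetical sketch: the fields a provenance card carries before a sourced
// figure can render. Names are illustrative, not the production schema.
type ProvenanceCard = {
  value: string;         // the figure exactly as the source publishes it
  sourceName: string;    // e.g. "CMS Care Compare"
  sourceUrl: string;     // canonical URL of the source record
  lastCheckedOn: string; // ISO date of the most recent fetch
  limitations: string;   // the limitations sentence the source itself documents
};

// A card is renderable only when every required field is present and non-empty.
function isRenderable(card: ProvenanceCard): boolean {
  return [
    card.value,
    card.sourceName,
    card.sourceUrl,
    card.lastCheckedOn,
    card.limitations,
  ].every((field) => field.trim().length > 0);
}
```

The design point: a CMS rating with no limitations sentence or no last-checked date is incomplete provenance, so it does not render at all.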
Editorial decisions are made independently of revenue.
Ownlisted publishes research and operates a network of category directories. Editorial decisions — what to study, how listings are ranked, when corrections are issued — are made independently of advertising, featured-listing fees, claim status, and any other revenue contact a business has with Ownlisted.
Featured-tier and Pro-tier owner billing buys nothing other than the documented product features. It does not buy organic placement on city or listing pages, does not influence research findings, and does not delay corrections.
- No pay-to-rank in organic city or listing placement
- No PPL skewing of editorial rankings
- No advertiser veto on study scope or findings
- Featured / Pro billing is product-only, not editorial
Every figure traces back to a documented source.
Listing data is sourced from public Google Business Profiles, public state licensing-board filings, and owner-managed inputs once a listing is claimed. The source for each field is documented in the public data-provenance register and is identified per-field where it materially affects how a reader should weight the data.
Research figures are sourced from the listing dataset itself, federal and state public records, and study-specific primary sources cited inline in the methodology block of each study. We do not cite competitor aggregator sites as primary sources — they aren't.
We disclose financial interests that could influence coverage.
Editors and authors disclose any financial interest in a business or category before they cover it. The standing disclosures: Ownlisted operates the directory network whose listings the research analyses, and Ownlisted's revenue includes Featured / Pro-tier billing from claimed owners across that network. Both standing disclosures are restated in the methodology block of every study.
Per-piece disclosures (a contributor holds equity in a vertical, a contributor's spouse owns a listed business, a contributor previously consulted for a parent organisation) are stated at the top of the affected piece.
Three checks before publish, one after.
Every study passes (1) a numbers check — the editor of record reproduces the headline figures from the dataset, (2) a sourcing check — every cited primary source is loaded fresh, and (3) a tone check — language, claims, and visualisations are reviewed against this policy and the methodology page.
After publish, the study sits in a 7-day correction window during which any reader-flagged inaccuracy is treated as a P1 issue. Corrections accepted during the window are noted on the study page and entered into the corrections log.
We correct. We don't quietly edit.
When a published figure is wrong, we correct it. The original page is updated, the changed sentence carries a `[corrected on YYYY-MM-DD]` marker, and the substance of the correction is logged in the public corrections log. We do not silently re-publish a piece with revised numbers.
Reader-submitted corrections route through the public corrections form on this site and on every vertical domain. We respond to every submission. We name the cause of each accepted correction (data refresh, source change, editor error, owner-claim update) so readers can weight the failure mode.
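The four accepted-correction causes named above form a closed set, which is how a corrections-log entry could be typed. A sketch under assumed identifier names (not the real schema):

```typescript
// Illustrative only: the four accepted-correction causes named in the policy,
// encoded as a closed union. Identifier names are assumptions.
type CorrectionCause =
  | "data-refresh"
  | "source-change"
  | "editor-error"
  | "owner-claim-update";

type CorrectionLogEntry = {
  studyUrl: string;
  correctedOn: string; // YYYY-MM-DD, mirrored in the inline marker
  cause: CorrectionCause;
  summary: string;     // the substance of what changed
};

// The inline marker the corrected sentence carries on the study page.
function correctionMarker(entry: CorrectionLogEntry): string {
  return `[corrected on ${entry.correctedOn}]`;
}
```

Encoding the cause as a union rather than free text is what lets readers weight failure modes consistently across the log.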
When a study cannot be fixed in place, we retract.
If a study's premise — not just a figure — turns out to be wrong, the study is retracted. The retraction notice replaces the study text, names the editor of record, names the substantive failure, and remains at the original URL so external citations don't 404.
Retractions are listed in the corrections log alongside corrections, with their own status flag. We have not retracted a study to date; the policy exists so the threshold is documented, not tested.
AI helps with drafting and code; humans own publish decisions.
Authors may use AI assistants to draft prose, generate exploratory queries, or scaffold research code. AI does not approve a study for publish; a named editor does. AI-generated text is reviewed line-by-line against the source data; any number that survives review traces back to the dataset, not to a model's prior.
We do not ship AI-generated images of people, AI-fabricated quotes, or AI-summarised business reviews. Visualisations on study pages are produced from the dataset by a named author.
We don't claim to verify what we don't verify.
The word "verified" — and its adjective family — is forbidden in user-facing copy on every Ownlisted-operated page. Listings are listed; data is sourced; figures are checked. No listing is "verified" by Ownlisted unless an explicit verification process exists for it (today, only owner-claim email verification meets that bar, and the language used is "claimed", not "verified"). The policy is enforced in source by the launch-gate scan.
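The kind of scan a launch gate could run over user-facing copy can be sketched in a few lines. The word list and function name here are assumptions, not the actual gate:

```typescript
// Sketch of a launch-gate style scan for the forbidden "verified" word family
// in user-facing copy. The exact word list and API are assumptions.
const V_WORDS = /\bverif(?:y|ies|ied|ying|iable|ication)\b/gi;

function findVWordViolations(copy: string): string[] {
  return copy.match(V_WORDS) ?? [];
}
```

Running this against every user-facing string at build time is one way to enforce the policy "in source" rather than by editorial memory.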
How a public dataset earns the right to enter the graph.
We tier sources by who publishes them, not by who cites them. Tier-2 (profile enrichment) sources are federal or state agencies with a published distribution license that permits attribution-required republication of the fields we surface — currently CMS NPPES, CMS PECOS, FL DBPR, CMS Care Compare. Tier-1 (research-only) sources are federal or state datasets we cite for context but do not republish per-row — currently BLS OEWS, BEA personal income, U.S. Census, HRSA HPSA. Industry-certification bodies (IICRC, ISA, NABCEP, ICF) are factually named in coverage when a business publishes that credential, but the registry itself is not republished.
Sources outside this taxonomy do not enter the graph. Aggregator sites, broker lists, lead-resale databases, and screen-scrape outputs of restricted registries are explicitly excluded — see the §125 RESTRICTED_SOURCES list at /sources for the standing exclusions (state-bar registries, NMLS Consumer Access, ABMS / CertiFacts, Google Places backfill).
- Tier-2 (profile enrichment): federal/state, attribution-required
- Tier-1 (research-only): federal context, not republished per-row
- Industry-cert bodies: cited, never republished
- Restricted-source list at /sources is the explicit exclusion register
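The taxonomy above can be read as a lookup from source to tier, with republication rights following from the tier. A hedged sketch whose entries mirror the sources named in the policy (the shape itself is illustrative):

```typescript
// Hypothetical encoding of the source taxonomy. Entries mirror the sources
// named in the policy; the type and function names are illustrative.
type SourceTier =
  | "tier-2-profile-enrichment"
  | "tier-1-research-only"
  | "industry-cert";

const SOURCE_TAXONOMY: Record<string, SourceTier> = {
  "CMS NPPES": "tier-2-profile-enrichment",
  "CMS PECOS": "tier-2-profile-enrichment",
  "FL DBPR": "tier-2-profile-enrichment",
  "CMS Care Compare": "tier-2-profile-enrichment",
  "BLS OEWS": "tier-1-research-only",
  "BEA personal income": "tier-1-research-only",
  "U.S. Census": "tier-1-research-only",
  "HRSA HPSA": "tier-1-research-only",
  "IICRC": "industry-cert",
};

// Only Tier-2 sources may be republished per-row on profiles. A source absent
// from the taxonomy is excluded by default — it never enters the graph.
function mayRepublishPerRow(source: string): boolean {
  return SOURCE_TAXONOMY[source] === "tier-2-profile-enrichment";
}
```

Note the default: an unknown source returns `false`, matching the rule that sources outside the taxonomy do not enter the graph.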
When two public sources disagree, the more authoritative wins.
Disagreements happen — a state contractor board lists an address that doesn't match the address the same provider publishes on Google. The default rule: when a federal/state-board source disagrees with a Google Business Profile, the federal/state source is the displayed value; the Google value is held internally only. The ProvenanceCard shows the source we displayed.
When two equally-authoritative sources disagree (e.g. an NPPES taxonomy code and a state medical-board specialty designation), the field renders both with each one's source citation, not a synthesized merge. The reader gets to see the disagreement.
Edge case: an owner-claim update introduces a value that disagrees with the public-record source. The owner-supplied value is held internally pending operator review; the public-record value continues to display until the disagreement is resolved.
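The default rule and the owner-claim edge case together amount to a small resolver. A minimal sketch, assuming hypothetical names (not the production resolver):

```typescript
// Sketch of the default disagreement rule: the federal/state value displays;
// lower-authority and pending owner-claim values are held internally.
// Type and function names are assumptions.
type FieldClaim = {
  value: string;
  authority: "federal-state" | "google" | "owner-claim";
};

function resolveDisplayedValue(claims: FieldClaim[]): FieldClaim | undefined {
  const rank = { "federal-state": 0, "google": 1, "owner-claim": 2 } as const;
  // Owner-claim values never display while any public-record claim exists;
  // they stay internal pending operator review.
  const publicClaims = claims.filter((c) => c.authority !== "owner-claim");
  const pool = publicClaims.length > 0 ? publicClaims : claims;
  return [...pool].sort((a, b) => rank[a.authority] - rank[b.authority])[0];
}
```

Equally-authoritative disagreements (two `federal-state` claims) are the one case this sketch does not cover, because the policy answer there is to render both, not to pick one.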
We hold more than we display. The display ceiling is doctrinal.
Several source families produce fields we hold internally for matching, deduplication, or operator review but never expose on a public page — for example, NPPES sole-proprietor flags, state-board complaint histories, owner-claim contact emails. The full list of write-locked-internal fields is documented in src/components/profile/source-family-copy.ts and surfaced in aggregate at /sources/[slug]'s display-policy block.
The display ceiling is a doctrine line, not a coverage gap. The §94 displayability rules are explicit: a field is rendered publicly only when (a) the source license permits it, (b) the field is doctrinally legible (no v-word adjectives, no fabricated quality stamps), and (c) the source's terms allow standalone display without ToS violation. Fields that fail any of those gates are held internally even when we have them.
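The three display gates can be expressed as a single predicate. A sketch with a hypothetical field shape; the gate order mirrors the policy text, not production code:

```typescript
// The three displayability gates as a predicate. The field shape is
// hypothetical; only the gate semantics come from the policy.
type FieldPolicy = {
  licensePermitsDisplay: boolean;      // (a) source license allows republication
  doctrinallyLegible: boolean;         // (b) no v-words, no fabricated stamps
  tosAllowsStandaloneDisplay: boolean; // (c) standalone display is within ToS
};

function isPubliclyDisplayable(p: FieldPolicy): boolean {
  // A field failing any single gate is held internally even when we have it.
  return (
    p.licensePermitsDisplay &&
    p.doctrinallyLegible &&
    p.tosAllowsStandaloneDisplay
  );
}
```

The conjunction is the doctrine line: there is no override path where two passing gates outweigh a failing one.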
What causes us to re-pull a source.
Sources are pulled on their published refresh cadence (weekly for FL DBPR, monthly for CMS NPPES + Google ratings, quarterly for CMS Care Compare) plus on these incident triggers: (1) the source's authority publishes a methodology change that affects fields we render; (2) a reader-flagged correction is traced to a stale cache; (3) the operator team observes a coverage drift > 5% on a held-out sample; (4) a major regulatory event (e.g. a CMS provider-directory accuracy rule update) makes a refresh a public-trust priority.
Re-ingestion runs do not silently overwrite. The previous snapshot is preserved, change records are emitted (per /data-platform/schema), and material shifts surface in the methodology changelog at /methodology#changelog.
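The cadence table and the four incident triggers combine into a refresh decision. A sketch under assumed names — the cadences and trigger list come from the policy text, but the function is illustrative, not the scheduler:

```typescript
// Illustrative refresh decision: published cadence plus the four incident
// triggers. Cadences mirror the policy text; names are assumptions.
const CADENCE_DAYS: Record<string, number> = {
  "FL DBPR": 7,           // weekly
  "CMS NPPES": 30,        // monthly
  "Google ratings": 30,   // monthly
  "CMS Care Compare": 90, // quarterly
};

type IncidentTriggers = {
  methodologyChange: boolean;    // (1) source methodology change affecting rendered fields
  staleCacheCorrection: boolean; // (2) reader correction traced to a stale cache
  coverageDrift: number;         // (3) observed drift fraction on a held-out sample
  regulatoryEvent: boolean;      // (4) major regulatory event
};

function shouldRefresh(
  source: string,
  daysSincePull: number,
  t: IncidentTriggers,
): boolean {
  const cadenceDue = daysSincePull >= (CADENCE_DAYS[source] ?? Infinity);
  const incident =
    t.methodologyChange ||
    t.staleCacheCorrection ||
    t.coverageDrift > 0.05 ||
    t.regulatoryEvent;
  return cadenceDue || incident;
}
```

An incident trigger forces a re-pull regardless of cadence; the snapshot-preserving re-ingestion then handles the rest.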
- Methodology → How figures are sourced, refreshed, and checked.
- Data provenance → Public field-by-field source register.
- Source library → Every dataset Fonteum cites, with tier, cadence, fields, and limitations.
- Corrections log → Every accepted correction, dated.
- Team → The masthead and the editor of record for each study.
- Research hub → Published studies, upcoming research, and citation guidance.
Compliance posture
We don’t sell ranking and don’t accept payment to move a provider up the list. For final hire decisions, verify licensing, insurance, and references directly with the applicable licensing or credentialing body.
No bulk-licensing source family is currently ingested for this vertical. Hire-time checking still routes through the body named above.