Transparency
Scoring methodology
Every Item on suchsignal receives a Signal Strength score — a percentage from 0 to 100 — derived from seven dimensions grouped into two buckets. No score is hidden and no dimension is left unexplained.
Source Quality bucket
Assesses the origin of the source material and the integrity of the file itself.
1. Credibility (0–10)
Who produced this? Government agencies, peer-reviewed journals, and named officials with verifiable records score highest. Anonymous sources, unverified accounts, and entities with documented histories of fabrication score lowest.
- 9–10: Official government document, peer-reviewed research, sworn congressional testimony
- 6–8: Named journalist from established outlet, named former official, corroborated whistleblower
- 3–5: Anonymous source with partial corroboration, unverified but plausible provenance
- 0–2: Unknown origin, anonymous with no corroboration, known disinformation actor
2. Integrity (0–10)
Is this document or file what it claims to be? Assessed via EXIF metadata analysis, file hash verification, chain-of-custody notes, and AI-assisted manipulation detection; a sketch of the hash-verification step follows the rubric below.
- 9–10: Verified unaltered, hash-matched, official chain of custody documented
- 6–8: No detected manipulation, metadata consistent with claimed origin
- 3–5: Minor metadata anomalies, partial chain of custody
- 0–2: Evidence of manipulation detected, metadata inconsistent, no verifiable provenance
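To make the hash-verification step concrete, here is a minimal sketch assuming each file ships with a published SHA-256 reference digest. The function names and inputs are illustrative, not the production pipeline; the EXIF and manipulation-detection steps are separate and not shown.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_matches(path: str, expected_sha256: str) -> bool:
    """True when the local file matches the published reference digest."""
    return sha256_of(path) == expected_sha256.lower()
```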
Claim Quality bucket
Assesses the claims made within the content, independent of who made them.
3. Corroboration (0–10)
How many independent sources make the same claim? Independence is key: sources that cite each other do not count as independent. A sketch of one way to count independence follows the rubric below.
- 9–10: Five or more independent sources
- 6–8: Two to four independent sources
- 3–5: One partial or indirect corroborating source
- 0–2: No corroboration found
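One way to operationalize the independence rule, shown here as an illustrative sketch rather than the production algorithm: treat sources as nodes and citations as edges, then count citation clusters instead of raw sources.

```python
from collections import defaultdict

def independent_source_count(sources: set[str],
                             citations: list[tuple[str, str]]) -> int:
    """Count groups of sources that are independent of one another.

    Sources linked by citations, directly or transitively, collapse
    into a single group and count once.
    """
    graph = defaultdict(set)
    for a, b in citations:
        graph[a].add(b)
        graph[b].add(a)

    seen: set[str] = set()
    groups = 0
    for source in sources:
        if source in seen:
            continue
        groups += 1
        stack = [source]  # flood-fill one citation cluster
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph[node] - seen)
    return groups

# Three outlets, but two cite each other: counts as 2 independent sources.
count = independent_source_count(
    {"outlet_a", "outlet_b", "outlet_c"},
    [("outlet_a", "outlet_b")],
)
```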
4. Internal Consistency (0–10)
Does the document contradict itself? Are dates, names, and technical claims internally coherent?
- 9–10: No contradictions found across the full document
- 6–8: Minor inconsistencies that do not affect core claims
- 3–5: Notable inconsistencies in secondary claims
- 0–2: Core claims are self-contradictory
5. Cross-reference (0–10)
How does this Item relate to the existing suchsignal corpus? Powered by vector similarity search across all published Items. Agreement with existing high-signal content boosts the score; isolated claims with no prior art score lower. A sketch of the similarity check follows the rubric below.
- 9–10: Strongly corroborated by multiple existing high-signal Items
- 6–8: Partially corroborated by existing Items
- 3–5: Isolated claim with no prior art in corpus
- 0–2: Contradicted by existing high-signal Items in corpus
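The core of the cross-reference check can be illustrated with cosine similarity over embedding vectors. This sketch assumes Items are already embedded as non-zero vectors; the embedding model and the index used in production are not specified here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (assumed non-zero)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_items(query: np.ndarray,
                  corpus: dict[str, np.ndarray],
                  k: int = 5) -> list[tuple[str, float]]:
    """Return the k corpus Items most similar to the query embedding."""
    scored = [(item_id, cosine_similarity(query, vec))
              for item_id, vec in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```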
6. Recency (0–10)
Is this new information, or a rehash of previously known claims presented as new? Older documents can still score highly if they are primary sources being disclosed for the first time.
- 9–10: New primary source, previously undisclosed information
- 6–8: Recent synthesis of new and existing information with original analysis
- 3–5: Largely rehashes previously known claims
- 0–2: Entirely repackaged old claims with no new information
7. Specificity (0–10)
Are the claims falsifiable and specific? Vague assertions that cannot be tested or disproved score lowest. Named locations, dates, personnel, and technical parameters score highest.
- 9–10: Highly specific: named individuals, exact dates, verifiable technical claims
- 6–8: Specific in key claims, some vague supporting detail
- 3–5: Mixed specificity — some falsifiable claims among vague assertions
- 0–2: Entirely vague, non-falsifiable assertions
Headline Signal Strength
The headline percentage is a weighted average of all seven dimension scores, normalized to 0–100%. Source Quality dimensions (Credibility, Integrity) together carry 30% of the weight. Claim Quality dimensions (Corroboration, Consistency, Cross-reference, Recency, Specificity) together carry 70% of the weight.
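A worked sketch of that computation follows. The 30/70 bucket totals are as published above; the equal split within each bucket is an assumption for illustration, since per-dimension weights are not specified.

```python
def signal_strength(scores: dict[str, float]) -> float:
    """Headline Signal Strength as a 0-100 percentage.

    Bucket totals (30% Source Quality, 70% Claim Quality) follow the
    published methodology; the equal split *within* each bucket is an
    illustrative assumption.
    """
    source_quality = ["credibility", "integrity"]
    claim_quality = ["corroboration", "consistency", "cross_reference",
                     "recency", "specificity"]

    source_avg = sum(scores[d] for d in source_quality) / len(source_quality)
    claim_avg = sum(scores[d] for d in claim_quality) / len(claim_quality)

    # Each dimension is scored 0-10, so divide by 10 to normalize.
    return 100 * (0.30 * source_avg + 0.70 * claim_avg) / 10

# Example: strong source, mixed claims.
strength = signal_strength({
    "credibility": 9, "integrity": 8,
    "corroboration": 6, "consistency": 8, "cross_reference": 5,
    "recency": 7, "specificity": 6,
})  # 0.30 * 8.5 + 0.70 * 6.4 = 7.03  ->  70.3%
```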
Community Trust Score
Separate from the AI-generated Signal Strength, the Community Trust Score reflects how public readers vote on an Item's trustworthiness. Votes are collected per-dimension and overall. The community score is displayed alongside the AI score — they are never merged or averaged. Divergence between the two scores is itself informative.
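As a sketch of the resulting data shape (field names hypothetical): the two scores sit side by side, and the only derived quantity is their gap.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ItemScores:
    """Illustrative shape only: the two scores are stored side by side
    and never combined into a single number."""
    signal_strength: float          # AI-generated, 0-100
    community_trust: float | None   # reader votes, 0-100; None until votes exist

    @property
    def divergence(self) -> float | None:
        """Gap between the two scores; a large gap is itself a signal."""
        if self.community_trust is None:
            return None
        return abs(self.signal_strength - self.community_trust)
```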
Editorial overrides
Admin editors may attach override notes to any dimension if they believe the AI assessment is incomplete or requires context. Override notes are displayed publicly. Numeric scores are never altered after publication — the AI assessment is immutable.
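In code terms, the contract might look like this sketch (names hypothetical): the AI assessment is a frozen value, while override notes accumulate beside it.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AIAssessment:
    """Frozen at publication: reassigning a field raises FrozenInstanceError."""
    dimension_scores: tuple[tuple[str, int], ...]  # (dimension, 0-10 score)

@dataclass
class PublishedItem:
    assessment: AIAssessment
    override_notes: list[str] = field(default_factory=list)  # append-only, public

item = PublishedItem(AIAssessment((("credibility", 9),)))
item.override_notes.append("Editor note: corroboration predates corpus coverage.")
# item.assessment.dimension_scores = ...  # would raise FrozenInstanceError
```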
Rubric versioning
The scoring rubric is versioned. Every Item records which rubric version produced its score. When the rubric is updated, historical scores remain unchanged and the version is noted.
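A minimal sketch of what pinning the rubric version might look like; the identifiers and version string are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScoreRecord:
    """Immutable record: the rubric version that produced the score travels with it."""
    item_id: str
    rubric_version: str   # pinned at scoring time; never rewritten on rubric updates
    signal_strength: float

record = ScoreRecord(item_id="item-123", rubric_version="2.1", signal_strength=70.3)
```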