High level of correspondence across different news domain quality rating sets

Hause Lin*, Jana Lasser, Stephan Lewandowsky, Rocky Cole, Andrew Gully, David G. Rand, Gordon Pennycook

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

One widely used approach for quantifying misinformation consumption and sharing is to evaluate the quality of the news domains that a user interacts with. However, different media organizations and fact-checkers have produced different sets of news domain quality ratings, raising questions about the reliability of these ratings. In this study, we compared six sets of expert ratings and found that they generally correlated highly with one another. We then created a comprehensive set of domain ratings for use by the research community (github.com/hauselin/domain-quality-ratings), leveraging an ensemble “wisdom of experts” approach. To do so, we performed imputation together with principal component analysis to generate a set of aggregate ratings. The resulting rating set comprises 11,520 domains, the most extensive coverage to date, and correlates well with other rating sets that have more limited coverage. Together, these results suggest that experts generally agree on the relative quality of news domains, and the aggregate ratings that we generate offer a powerful research tool for evaluating the quality of news consumed or shared and the efficacy of misinformation interventions.
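The abstract describes the aggregation pipeline only at a high level (impute missing ratings across expert sources, then apply principal component analysis). The following is a minimal illustrative sketch of that general approach, not the authors' actual code: the rating-set names, values, and specific choices (IterativeImputer, standardization, using the first principal component as the aggregate score) are assumptions made here for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: rows are news domains, columns are expert rating sets.
# Names and values are invented; the paper's actual six rating sets differ.
ratings = pd.DataFrame(
    {
        "rater_a": [0.9, 0.2, np.nan, 0.7],
        "rater_b": [0.8, np.nan, 0.3, 0.6],
        "rater_c": [np.nan, 0.1, 0.4, 0.8],
    },
    index=["domain1.com", "domain2.net", "domain3.org", "domain4.com"],
)

# 1. Impute missing ratings so every domain has a value from every source.
imputed = IterativeImputer(random_state=0).fit_transform(ratings)

# 2. Standardize, then take the first principal component as the
#    aggregate "wisdom of experts" quality score. The sign of a principal
#    component is arbitrary, so in practice it would be oriented so that
#    higher scores mean higher quality (e.g., by checking its correlation
#    with one of the original rating sets).
scaled = StandardScaler().fit_transform(imputed)
pc1 = PCA(n_components=1).fit_transform(scaled).ravel()

aggregate = pd.Series(pc1, index=ratings.index, name="aggregate_quality")
print(aggregate.sort_values(ascending=False))
```

The published aggregate ratings themselves are available at github.com/hauselin/domain-quality-ratings.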

Original language: English
Article number: pgad286
Journal: PNAS Nexus
Volume: 2
Issue number: 9
DOIs
Publication status: Published - 1 Sept 2023

Keywords

  • fact-checking
  • journalism standards
  • misinformation
  • news quality

ASJC Scopus subject areas

  • General
