Meta Ends Third-Party Fact-Checking Program, Expands Free Expression Policies

Meta announced sweeping changes to its content moderation policies, including the end of its third-party fact-checking program in the United States. The company will transition to a Community Notes model, aiming to reduce censorship while maintaining transparency. These changes are part of a broader effort to prioritize free expression on its platforms, which include Facebook, Instagram, and Threads.

Meta’s Transition to Community Notes

The third-party fact-checking program, launched in 2016, faced criticism for perceived bias and overreach. Meta acknowledged that the program often led to the unintended censorship of legitimate political discourse.

The new Community Notes system, modeled after a similar initiative on X (formerly Twitter), will allow users to contribute context to posts deemed potentially misleading. These notes will be collaboratively written and rated by contributors from diverse perspectives. Meta stated it would not write or select the notes displayed on its platforms.
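The "diverse perspectives" requirement can be sketched as a simple gating rule. This is a hypothetical illustration of the idea, not Meta's or X's actual algorithm; the function name, grouping of contributors, and thresholds are all assumptions for clarity.

```python
# Hypothetical sketch of a "diverse perspectives" note-rating rule,
# loosely inspired by the bridging idea behind Community Notes on X.
# Names, grouping, and thresholds are illustrative assumptions only.

def note_is_shown(ratings, min_per_group=2):
    """Show a note only if contributors from more than one viewpoint
    group rated it helpful, with at least `min_per_group` helpful
    ratings from each group represented.

    ratings: list of (group_label, rated_helpful: bool) tuples
    """
    helpful_by_group = {}
    groups = set()
    for group, helpful in ratings:
        groups.add(group)
        if helpful:
            helpful_by_group[group] = helpful_by_group.get(group, 0) + 1
    if len(groups) < 2:  # ratings from a single perspective are not enough
        return False
    return all(helpful_by_group.get(g, 0) >= min_per_group for g in groups)
```

The point of such a rule is that a note cannot surface on the strength of one faction's votes alone; agreement must cross viewpoint lines.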

“Once the program is up and running, Meta won’t write Community Notes or decide which ones show up,” said Joel Kaplan, Meta’s Chief Global Affairs Officer. The company plans to phase in the program over the coming months, starting in the U.S.

Lifting Restrictions on Speech

Meta is also removing restrictions on several topics, such as immigration and gender identity, which it views as central to political discourse. The company acknowledged that its content moderation systems have been overly restrictive, leading to the wrongful removal of content and user frustration.

In December 2024, Meta removed millions of pieces of content each day, and the company estimates that 10-20% of those actions may have been errors. To address this, Meta will focus automated systems on high-severity violations, including terrorism and fraud, while relying on user reports for less severe issues.

“We are in the process of getting rid of most [content] demotions and requiring greater confidence that the content violates [policies],” Kaplan noted.

Revisions to Enforcement and Appeals

Meta is revising its enforcement mechanisms to reduce errors. Changes include requiring multiple reviewers to agree before content is taken down and using large language models (LLMs) to provide second opinions on enforcement decisions.

To improve the account recovery process, Meta is testing facial recognition technology and expanding its support teams to handle appeals more efficiently.

A Personalized Approach to Political Content

Meta plans to reintroduce more political and civic content to user feeds but with a personalized approach. The company’s previous efforts to reduce such content based on user feedback were deemed too broad.

Meta will now rank political content from followed accounts using explicit signals, such as likes, and implicit signals, like time spent viewing posts. Users will have expanded options to control how much political content appears in their feeds.
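A ranking rule of this kind can be sketched as a weighted blend of the two signal types, scaled by the user's own setting. The weights, normalizers, and function below are illustrative assumptions, not Meta's actual ranker.

```python
# Illustrative sketch (not Meta's actual system): score a political post
# from a followed account by mixing an explicit signal (likes) with an
# implicit one (time spent viewing), scaled by a per-user content setting.

def political_score(likes, avg_view_seconds, user_pref=1.0,
                    w_explicit=0.7, w_implicit=0.3):
    # Crude normalization; the caps and weights are assumptions.
    explicit = min(likes / 100.0, 1.0)            # saturate at 100 likes
    implicit = min(avg_view_seconds / 30.0, 1.0)  # saturate at 30 seconds
    return user_pref * (w_explicit * explicit + w_implicit * implicit)
```

Here `user_pref` stands in for the expanded user controls the article describes: setting it below 1.0 demotes political content in that user's feed, and setting it to 0 removes it from ranking entirely.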
