A co-chair of Meta’s Oversight Board has sharply criticized the company’s decision to end its third-party fact-checking program, accusing the tech giant of “buckling to political pressure” in a move that critics say signals a retreat from its commitment to curbing misinformation. The decision, announced earlier this week, has drawn backlash from misinformation researchers and political analysts alike, raising questions about the company’s role in moderating content on its platforms, including Facebook and Instagram.
Under the initiative, established in 2016, Meta partnered with independent third-party organizations to identify and label false or misleading information shared by users. The program was widely seen as a cornerstone of the company’s efforts to combat the spread of misinformation, particularly around high-stakes moments such as elections, the COVID-19 pandemic, and debates over climate change.
However, in an internal memo obtained by The New York Times, Meta’s leadership stated that it would be “re-evaluating” the program, citing growing concerns about its “effectiveness” and the “burdensome” nature of fact-checking content at scale. The company also pointed to rising tensions between government regulators and tech platforms over content moderation as a factor in the decision. Meta will instead shift its focus toward artificial intelligence and automated systems to flag and remove harmful content, a strategy that critics argue could be less transparent and less accurate.
‘Political Pressure’ Allegations
One of the strongest reactions came from Jamal Greene, co-chair of Meta’s Oversight Board, which was established to provide independent review of the company’s content moderation decisions. In a statement issued shortly after the announcement, Greene condemned the move, saying it appeared to be a direct response to mounting political and regulatory pressure.
“Meta’s decision to dismantle its fact-checking program is troubling, and it looks like the company is buckling to political pressure,” Greene said. “For years, this program was a vital part of the company’s efforts to address the spread of misinformation. To end it now, especially as we approach another election cycle, sends the wrong message — one that undermines public trust in the platform and raises serious concerns about its accountability.”
Greene also expressed concern that Meta’s shift to AI-based content moderation would be less transparent and more prone to errors, particularly when it comes to nuanced issues like political speech and disinformation. “Automated systems are not perfect, and they do not have the human judgment necessary to make the tough calls on complex issues like misinformation,” he added. “This decision could leave millions of users vulnerable to harmful content while further eroding accountability for the platform.”
A Growing Controversy
Meta’s decision to end the fact-checking program comes at a time of increased scrutiny from both lawmakers and the public regarding the role of social media platforms in shaping political discourse and the spread of false information. Critics argue that, by removing one of the few systems designed to objectively assess misinformation, Meta is stepping back from its responsibility to protect users from harmful content, particularly in the lead-up to the 2024 presidential election.
While the company has stressed that it will continue to rely on AI and other machine learning systems to detect and flag misinformation, many experts are skeptical that these tools will be as effective as human fact-checkers in addressing complex claims. AI systems, they note, can struggle to contextualize information and often fail to account for cultural, political, or social nuances that might determine whether a claim is misleading or false.
“There’s no question that AI is important for content moderation, but it’s simply not a substitute for the nuanced judgment that human fact-checkers bring to the table,” said Dr. Alice Miller, a professor of media studies at Stanford University. “By eliminating the fact-checking program, Meta is potentially opening the floodgates for more misinformation to spread unchecked, which is especially concerning in the context of political campaigns.”
Meta’s Response
In response to the criticism, Meta has defended its decision, arguing that its evolving content moderation strategy is aligned with broader industry trends and that AI technologies have improved significantly in recent years. The company also emphasized that it will continue to work with independent fact-checking organizations in a more limited capacity, though the specifics of this new approach have not yet been fully disclosed.
“We remain committed to combating misinformation on our platform,” said Andy Stone, a Meta spokesperson. “While we are scaling back some aspects of our fact-checking program, we are doubling down on advanced AI tools and partnerships with trusted organizations to ensure our users have access to accurate information.”
Stone also pointed to Meta’s expanded efforts to give users more educational content and context around potentially misleading claims, including fact-checking labels and links to authoritative sources. Critics counter, however, that labels alone do far less to curb the spread of misinformation than thorough review by independent fact-checkers.
The Political Angle
The timing of Meta’s decision has added to the controversy, as it follows a series of congressional hearings in which lawmakers from both parties have criticized tech companies for their role in moderating content. Some Republican lawmakers have accused companies like Meta of stifling conservative viewpoints, while some Democrats have argued that tech giants are not doing enough to prevent the spread of disinformation, especially in political contexts.
Senator Amy Klobuchar (D-MN), a vocal advocate for stronger tech regulations, expressed concern over the company’s decision, calling it “a step backward” in the fight against harmful content. “Meta’s actions are troubling and send a message that it is willing to cave to external pressures rather than doing what is right for the American people,” she said in a statement.
Meanwhile, conservative groups have cheered Meta’s move, arguing that the fact-checking program was biased and disproportionately targeted right-wing content. The decision has sparked a renewed debate over the extent to which tech platforms should be involved in content moderation and whether government regulation is needed to address the issue.
What’s Next?
Meta’s decision to end its fact-checking program is likely to have significant implications for the company and its relationship with regulators, lawmakers, and users. As the 2024 U.S. elections approach, the company’s content moderation practices will be under intense scrutiny, particularly in light of the growing influence of social media on political campaigns.
For now, the future of misinformation on Meta’s platforms is uncertain, and it remains to be seen whether the company can effectively use AI to moderate content at such vast scale. With the Oversight Board and outside experts voicing serious concerns, the pressure on Meta to answer for its role in the digital information ecosystem is only intensifying.