NBC News

Zuckerberg’s fact-checking rollback ushers in chaotic online era

First, it was Elon Musk. Now, it’s Mark Zuckerberg. 

Meta’s announcement that it would be ending its fact-checking program and shifting its content moderation policies — a move that Zuckerberg said was inspired by Musk’s X — represents a new high-water mark for a worldview, pushed in large part by conservatives, that frames centralized efforts to control mis- and disinformation as censorship rather than public service.

Zuckerberg, who once touted the importance of the company’s moderation efforts, echoed that worldview Tuesday when he said that times had changed and the new shift would reduce “censorship” and “restore free expression” — a message that was quickly embraced by some Republican pundits and politicians. 

To researchers who have studied both moderation efforts and the platforms themselves, it is the latest move toward a more freewheeling and unbridled social media environment, one in which the line between what is real and what isn’t will only blur further.

“The fact-checking program was never going to save Facebook, but it was the last bulwark to complete chaos on the platform,” said Nina Jankowicz, former head of a disinformation board within the Department of Homeland Security, who now helms a nonprofit organization focused on countering attacks on disinformation researchers. “And now Mark Zuckerberg is choosing chaos.” 

Along with getting rid of fact-checkers, Meta will remove restrictions around topics that have fueled recent political culture wars, including immigration and issues involving trans people and gender. Announcing the policies on Fox News, Meta’s chief global affairs officer, Joel Kaplan, said the company had been “too restrictive.”

A close reading of Meta’s updated policy guidelines reveals that the company now explicitly allows users to call one another mentally ill based on their gender identity or sexual orientation, among other changes.

President-elect Donald Trump addressed the changes at a news conference. Asked by a reporter whether Zuckerberg was “directly responding to the threats that you have made to him in the past,” Trump replied, “Probably.” 

Researchers and advocacy groups framed Zuckerberg’s announcement as a political capitulation to the incoming president — the most recent in a series of changes at Meta that they say reflect a willing submission before Trump takes office. 

Jankowicz characterized the choice as a response to the potential threat of regulation and investigation. 

“This was after two years of Zuckerberg being hounded by the weaponization committee,” Jankowicz said, referring to House Judiciary Chairman Jim Jordan’s Select Subcommittee on the Weaponization of the Federal Government — one of the latest tools used by Republicans to institutionalize their complaints around social media and political bias.

Facebook launched its fact-checking program in 2016, following criticism that it had facilitated the spread of so-called fake news leading up to the presidential election. Facebook, like many other tech platforms, had grown quickly over the preceding decade and come under increasing scrutiny for what it did — and notably did not do — to regulate the posting and recommendation of content in its increasingly influential News Feed. Zuckerberg and other tech executives reacted quickly to public pressure from politicians, journalists and advocacy organizations, instituting a variety of new roles and moderation processes meant to crack down on issues like harassment and misinformation.

“We take misinformation seriously,” Zuckerberg wrote in a 2016 post.

In 2020, Facebook expanded the program and rolled out new features to address widespread misinformation around Covid and the 2020 election. Zuckerberg repeatedly touted the company’s fact-checking efforts and partners, including at a 2021 House Energy and Commerce Committee hearing, where he seemingly called out Trump for inciting the Jan. 6 attack on the Capitol, saying, “I believe that the former president should be responsible for his words.”

At the same time, academics and researchers worked to understand how these platforms operated and their impact on users.

What was initially a relatively uncontroversial field would quickly become the subject of partisan attacks, with Republicans claiming that tech companies were biased and their moderation efforts were unfairly targeting conservatives. Published research has pushed back against claims that conservatives are disproportionately moderated because of biased systems and tech employees, with data that suggests conservatives are more likely to share misinformation — sometimes putting them in the crosshairs of platforms’ policies.

“If there’s a sportsball game and one team fouls four times as much, it’s not ‘biased’ for the ref to call four times as many fouls against that team,” Kate Starbird, a professor at the University of Washington and a co-founder of its Center for an Informed Public, posted on Bluesky after the Meta announcement.

Despite the data, Republican complaints of censorship took hold as a political narrative, including in Silicon Valley. The most obvious answer to the negative attention such research brought was to shut down access. In 2021, Facebook quietly disbanded the team behind CrowdTangle, a tool that had provided data to researchers and journalists for five years under Facebook’s ownership, slow-marching the transparency tool to its eventual death last year. In 2022, Elon Musk acquired Twitter and, along with renaming the company X, decimated the structures charged with content moderation: slashing whole teams, reversing policies, cutting off researchers’ access and feeding internal communications to right-wing journalists. Published on the platform as the Twitter Files, that series of posts alleged collusion among disinformation researchers, Twitter employees and government agencies to censor conservatives, and it became the backbone of Rep. Jordan’s congressional committee.

Republican politicians also began to move beyond critiques to bring political pressure to bear on both tech platforms and independent researchers. 

Rep. Jordan launched his investigation into the “weaponization” of the federal government in 2023. Jordan’s committee subpoenaed Zuckerberg and other tech executives for documents aimed at proving a conspiracy among the government, disinformation researchers and Big Tech. In July, Jordan threatened and then paused a vote to hold Zuckerberg in contempt of Congress for failing to fully comply with the request for documents. In its final report, a 17,000-page document released in December, the committee praised its “real effect,” including the shuttering of disinformation research and Zuckerberg’s statement, in a letter to Jordan, that Facebook had been pressured by Biden administration officials to “censor certain COVID-19 content,” an assertion disputed by a June Supreme Court decision.

Meta’s announcement comes on the heels of other moves that seemingly align the company with the new administration: Trump ally and Ultimate Fighting Championship CEO Dana White joined Meta’s board of directors this week; the company donated $1 million to Trump’s inauguration; last week it promoted Joel Kaplan, a former adviser to George W. Bush with deep Republican ties, from vice president to head of global policy; and last month Zuckerberg dined with Trump at Mar-a-Lago, a meeting that Stephen Miller, the incoming White House deputy chief of staff for policy, characterized as reinforcing Zuckerberg as “a supporter of and a participant” in the change Trump intends to effect on the country.

In his announcement, Zuckerberg laid blame on the legacy media, fact-checkers and Meta’s own employees, calling them politically biased. To remedy the perception of that bias, Zuckerberg said he was moving Meta’s trust and safety and content moderation teams from California to Texas.

In an emailed statement, Starbird, the University of Washington professor, who testified about her work studying disinformation before Jordan’s committee in 2023, said Meta’s move to end the fact-checking program “will reduce users’ ability to identify trustworthy content on Meta’s products and beyond.”

Zuckerberg said the fact-checking program would be replaced by a system like X’s “community notes,” a sort of crowdsourced content moderation program rolled out before Elon Musk’s acquisition of the company in 2022.

Meta provided no further details on how it plans to fill the gap left by fact-checkers. Concerns about the speed of notes, the quality of contributors and how notes will affect users’ feeds on Facebook, Instagram and Threads should be addressed transparently, said Sol Messing, a research associate professor at New York University’s Center for Social Media and Politics.

“The people who you get to participate will be incredibly important,” said Messing. “I haven’t seen exactly how they’re going to recruit people to write community notes and how they’re going to ensure that it’s not just a bunch of partisan activists who are participating.”

A system like community notes works best as a complement to fact-checking, not a replacement, said Renee DiResta, a research professor at Georgetown University who was also targeted for her work studying disinformation while at Stanford’s Internet Observatory.

“Yes, moderation is imperfect and yes, users across the political spectrum distrust it, but platform moderation policies reflect platform values,” DiResta said. “If the platform sees itself as having an obligation to ensure that its users have the best possible information, cutting fact-checking programs that it’s previously touted, and appears to be continuing to support elsewhere, undermines that. This is a capitulation to political winds — and it reinforces that working the refs works.”


