Meta, the company behind popular social media platforms like Facebook, Instagram, and WhatsApp, has announced a major policy change that has drawn both support and concern. The company will no longer rely on independent fact-checkers to flag false information. Instead, it will introduce a system called “Community Notes,” in which users themselves add context to posts. This move is similar to what Elon Musk did on X (formerly Twitter) after he took over the platform in 2022.

Mark Zuckerberg, Meta’s CEO, explained why the company is making this change. He said, “It’s time to get back to our roots around free expression.” According to Zuckerberg, Meta’s previous system made too many mistakes, leading to what he called “too much censorship.” He admitted that the change means more “bad stuff” could appear online, but said it will also reduce the number of innocent posts that are mistakenly removed.


Meta’s History with Fact-Checking

Meta’s fact-checking program began in 2016 after concerns grew about false information spreading during the U.S. presidential election. At that time, false news stories were widely shared on social media, and critics blamed platforms like Facebook for failing to stop them. To address these issues, Meta worked with third-party fact-checkers, including trusted news organizations like The Associated Press and ABC News. Posts flagged as false by these fact-checkers were either removed or labeled with warnings to let people know they might be untrue.

Despite Meta’s efforts, the company received criticism from conservatives who claimed that the fact-checking program was biased against right-leaning opinions. These complaints grew louder when Facebook and other platforms banned Donald Trump after the January 6, 2021, attack on the U.S. Capitol. Although Trump’s accounts were later reinstated, many of his supporters continued to accuse Meta of unfair treatment.


A Political Shift and New Partnerships

Meta’s decision to end the fact-checking program comes shortly after Donald Trump was elected president again. Observers believe that Meta is trying to improve its relationship with the incoming administration. Reports suggest that Meta executives, including Zuckerberg, met with Trump’s team before the announcement. Additionally, Joel Kaplan, a senior Meta executive with strong ties to the Republican Party, was promoted to lead Meta’s global policy team. Dana White, president of the Ultimate Fighting Championship and a known Trump supporter, was also appointed to Meta’s board of directors.

Zuckerberg himself has met with Trump multiple times, including a private dinner at Trump’s Mar-a-Lago estate. Meta also donated $1 million to Trump’s inauguration fund. Many see these moves as signs that Meta wants to build a closer relationship with the Trump administration.



Concerns Over Misinformation and Hate Speech

While some people support Meta’s decision, saying it promotes free speech, many are worried that it will lead to more misinformation and harmful content online. Nicole Gill, a director at a digital watchdog group, warned that the change could open the door to “the exact same surge of hate, disinformation, and conspiracy theories” that caused the January 6 Capitol riot. Other experts fear that false information could spread more easily without fact-checkers reviewing posts.

Fact-checkers who previously worked with Meta were disappointed by the company’s decision. Bill Adair, co-founder of the International Fact-Checking Network, said, “It was particularly troubling to see him echo claims of bias against the fact checkers because he knows that the ones that participated in his program were signatories of a code of principles that requires that they be transparent and nonpartisan”.

Angie Drobnic Holan, director of the International Fact-Checking Network, added that Meta’s move could hurt smaller fact-checking organizations, many of which depend on the company for funding. She said, “We’ll see fewer fact-checking reports published and fewer fact checkers working”.


Meta’s New Approach

Instead of relying on professionals to check facts, Meta will now depend on users to help monitor content through the new Community Notes program. This system, which X (formerly Twitter) expanded under Elon Musk, allows users to add context or clarification to posts they believe are false or misleading. The goal is for people with different viewpoints to reach agreement on the accuracy of posts: a note is meant to become visible only when users who usually disagree both find it helpful.
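
For readers who know a little programming, this “bridging” rule can be sketched in a few lines of Python. The sketch below is only a toy illustration of the principle, not Meta’s or X’s actual system; the real systems infer viewpoints from each rater’s rating history, while the group labels and the 60 percent threshold here are invented for the example.

```python
# Toy sketch of the "bridging" idea behind Community Notes.
# NOT Meta's or X's actual algorithm: the group labels and the
# 60% threshold are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_group: str  # hypothetical viewpoint label, e.g. "A" or "B"
    helpful: bool     # did this rater find the note helpful?

def note_is_shown(ratings: list[Rating], threshold: float = 0.6) -> bool:
    """Show a note only if raters from at least two different
    viewpoints each rate it helpful at or above the threshold."""
    groups: dict[str, list[bool]] = {}
    for r in ratings:
        groups.setdefault(r.rater_group, []).append(r.helpful)
    if len(groups) < 2:  # agreement within one viewpoint is not enough
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in groups.values())

# A note rated helpful by both viewpoints is shown...
print(note_is_shown([Rating("A", True), Rating("A", True),
                     Rating("B", True), Rating("B", True)]))   # True

# ...but a note that only one viewpoint likes stays hidden.
print(note_is_shown([Rating("A", True), Rating("A", True),
                     Rating("B", False), Rating("B", False)])) # False
```

The key design choice is that popularity alone is not enough: a note must win support across the divide before readers see it.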

Meta also announced that it will move its U.S. content moderation team from California to Texas. Zuckerberg hopes that relocating to Texas, a state known for its conservative values, will help reduce concerns about bias in the company’s moderation efforts. “We’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse,” Zuckerberg said.

However, critics worry that moving moderation operations could weaken Meta’s ability to handle sensitive topics like hate speech and disinformation. Katie Harbath, a former Meta executive, said that this change might negatively affect women and LGBTQ communities, who often rely on strict moderation to protect them from harmful content.

This article is based on the following articles:

https://www.npr.org/2025/01/07/nx-s1-5251151/meta-fact-checking-mark-zuckerberg-trump

https://www.bbc.com/news/articles/cly74mpy8klo

Background Information

What is Meta?

Meta Platforms, Inc. is a major technology company that owns some of the world’s most popular social media platforms, including Facebook, Instagram, and WhatsApp. Originally called Facebook, the company changed its name to Meta in 2021 to reflect its focus on building the “metaverse”—a virtual world where people can interact using digital avatars. Meta plays a huge role in how people share information online and communicate with each other across the globe.


Why Did Meta Start Using Fact-Checkers?

In 2016, during the U.S. presidential election, concerns grew about the spread of false information on social media. This was especially serious because some of the false stories were seen by millions of people and may have influenced voters. After the election, many experts and the public criticized platforms like Facebook for allowing false information to go unchecked. In response, Meta created a fact-checking program to reduce the spread of misleading content.

The fact-checking system involved third-party organizations—independent groups that reviewed posts to check whether they were accurate. If a post was found to be false, it could be flagged, removed, or labeled so users would know not to trust it. Over time, Meta worked with dozens of fact-checking partners around the world, making it one of the most extensive programs of its kind.


What is Content Moderation?

Content moderation is the process of monitoring what users post on social media platforms. The goal is to prevent harmful content, such as hate speech, misinformation, or violent material, from spreading. Companies like Meta employ both people and computer programs (artificial intelligence) to help with this task. If something violates the platform’s rules, it may be removed or hidden.

While moderation helps make social media safer, it also raises debates about free speech. Some people argue that removing certain posts limits their right to express their opinions. Others believe that strict moderation is necessary to protect users from harmful or false information.


Why is Free Speech Important Online?

Free speech means the ability to express one’s opinions without fear of being punished or silenced. In the United States, the First Amendment protects free speech. However, this right isn’t unlimited—speech that incites violence, spreads harmful false information, or encourages illegal activities is often restricted. On social media, balancing free speech with the need to prevent harm is difficult. Companies like Meta have tried to create rules to manage this balance, but these rules are often criticized for being either too strict or too lenient.


Donald Trump and Social Media Controversies

Donald Trump, the former and now re-elected U.S. president, has a complicated history with social media companies. During his first term as president, Trump frequently used platforms like Facebook and Twitter to communicate directly with the public. However, his posts often caused controversy, and some were accused of spreading false information or inciting violence.

After the attack on the U.S. Capitol on January 6, 2021, social media companies, including Facebook, banned Trump’s accounts, saying that his posts had encouraged violence. This decision led to major debates about whether social media companies were censoring political speech.


Who is Elon Musk, and How Did He Influence Meta’s New Policy?

Elon Musk is the head of several major companies, including Tesla and SpaceX, and he bought X (formerly Twitter) in 2022. After taking over, he made big changes to the platform’s rules about speech and moderation. He expanded the “Community Notes” system, where users can add information to posts they believe are false or misleading. Musk’s approach focuses on allowing more speech while letting users decide what’s true or false.

Meta’s new policy, which includes adopting a similar “Community Notes” system, seems to be inspired by Musk’s changes. Both Meta and X now rely more on users to monitor content instead of professional fact-checkers.


What is Political Bias?

Political bias means favoring one political viewpoint over another. On social media, political bias can happen when certain opinions or groups feel they are being unfairly treated by platforms. For example, if a platform removes posts mostly from one political side, people on that side may accuse the platform of being biased.

Meta has been accused of bias by conservatives, who believe the company’s fact-checking program unfairly targeted right-wing opinions. On the other hand, many researchers and activists argue that Meta’s policies were necessary to prevent harmful false information from spreading.


How Does the Government Regulate Social Media?

In many countries, including the U.S., governments are working on ways to regulate social media companies. This means creating laws that require platforms to take more responsibility for the content users post. Some regulations aim to protect free speech, while others focus on making platforms safer by controlling harmful content.

For example, in the European Union (EU), there are strict rules that require companies like Meta to take down illegal content quickly. If they don’t, they can be fined. In the U.S., however, the debate is more focused on whether platforms are being fair to all political viewpoints.


What is the Role of Fact-Checkers?

Fact-checkers are professionals who investigate claims to determine whether they are true or false. They use evidence, such as official reports or expert opinions, to reach a conclusion. In Meta’s old system, fact-checkers played an important role by helping to stop the spread of false information. Their work was trusted because they followed strict rules to remain independent and nonpartisan.

Fact-checkers often work for news organizations or independent groups that specialize in verifying information. When Meta ended its fact-checking program, many experts worried that it would be harder to stop the spread of false information without these professionals.


Debate/Essay Questions

  1. Is it better for users to decide what is true or false on social media platforms rather than relying on professional fact-checkers?
  2. Is it fair to say that Meta’s decision was influenced by political pressure from Donald Trump and his allies?

