Days before Slovakia's parliamentary elections late last year, the political scene was shaken by two viral audio clips in which Michal Šimečka, leader of a liberal, pro-Western party, appeared to voice controversial statements. In one clip he seemed to propose doubling beer prices, a move likely to be unpopular in a country that cherishes its lagers and pilsners. In the other he appeared to discuss plans to commit election fraud, statements damaging to any politician, especially one known for advocating liberal democracy.
However, these clips were not genuine; they were created using advanced artificial intelligence technology known as deepfakes. This technology allows for the production of fake audio, images, or videos that seem very real. The International Press Institute highlighted this incident as the first significant use of AI deepfakes in a national election. Although it’s not certain that these fake clips directly influenced the election outcome, Šimečka’s party did not win, and a pro-Kremlin populist leader took charge in Slovakia.
This rise of misinformation (false information spread without the intent to mislead) and disinformation (false information spread deliberately to deceive) has been identified as a major global risk. A January report by the World Economic Forum rated misinformation and disinformation as the top global risks over the next two years, ranking them above threats such as war and economic crises.
Economists and scholars have increasingly focused on combating misinformation. A study titled “Toward an Understanding of the Economics of Misinformation: Evidence from a Demand Side Field Experiment on Critical Thinking” by economists John A. List, Lina M. Ramírez, Julia Seither, Jaime Unda, and Beatriz Vallejo examined whether simple, cost-effective strategies could help people identify and dismiss false information. They noted that while most research has looked at the sources of misinformation, like social media or malicious entities, less attention has been given to helping individuals improve their ability to critically assess and reject such information.
Their field experiment, conducted around the 2022 presidential election in Colombia, a country also dealing with deep political division, involved over 2,000 participants. The participants were divided into groups and exposed to different interventions aimed at enhancing critical thinking. One group watched a video illustrating how automatic thinking and stereotypes can distort judgment, depicting a positive interaction between individuals from opposing political camps. Another group took a personality test designed to reveal their cognitive traits and susceptibility to biases. A third group did both, and a control group did neither.
Detailed Insights from the Study
The researchers organized their study to assess how different interventions could enhance participants’ ability to recognize and reject misinformation. They carefully measured the outcomes to understand the impact of each intervention on the participants’ responses to a variety of news headlines and social media posts.
- Impact of the Educational Video:
- The group that watched the video, which depicted people from politically antagonistic backgrounds coming together, showed a marked improvement in their ability to discern misinformation. Specifically, participants in this group were over 30 percent less likely to believe fake news compared to the control group. This indicates a significant enhancement in their skepticism and critical thinking skills.
- However, while the video was effective in reducing the belief in misinformation, it did not significantly encourage participants to actively report misinformation on social media platforms. This suggests that while educational content can improve critical thinking, additional steps may be necessary to motivate proactive behavior in managing misinformation.
- Effectiveness of the Personality Test:
- Contrary to expectations, the personality test, which was designed to make participants aware of their own cognitive biases, had no noticeable impact on their ability to reject fake news. This finding is intriguing because it challenges the assumption that awareness of one's biases directly translates into better scrutiny of information.
- The lack of impact from the personality test might imply that simply understanding one’s biases is not sufficient to change behavior or enhance critical evaluation of information.
- Combined Impact of Video and Test:
- Participants who both watched the video and took the personality test exhibited a unique response: they became excessively skeptical. This group was about 31 percent less likely to believe accurate news headlines, indicating that they may have developed a heightened distrust of all news sources, regardless of their veracity.
- This over-skepticism is a critical finding as it underscores the potential risk of making individuals too distrustful, which can lead to a general rejection of both true and false information. This phenomenon complicates efforts to educate the public on misinformation, suggesting a delicate balance is needed in such interventions.
The success of the video in reducing susceptibility to misinformation suggests that encouraging empathy and critical thinking, rather than just debunking false claims, may be effective. Centrist calls for political unity and understanding across divides could also enhance our collective ability to navigate misinformation.
The ongoing battle against misinformation has also seen advancements in technology, such as the development of AI tools to identify AI-generated content. For instance, OpenAI has introduced watermarking for AI-generated images to help users identify them as artificial. Similarly, government initiatives are encouraging the creation of technologies to differentiate between genuine human speech and AI-generated deepfakes.
As AI and misinformation techniques evolve, the combination of technological solutions and initiatives to enhance critical thinking among the public will be crucial. These approaches, often described as forms of “psychological inoculation,” aim to pre-emptively expose individuals to misinformation tactics to better equip them to recognize and reject such content when encountered in the real world.
Background Information
Understanding the following key concepts and contexts will help readers appreciate the complexities of the issues discussed in the article and the significance of the research findings in combating misinformation.
1. Misinformation and Disinformation
Misinformation refers to false or inaccurate information that is spread without the intent to mislead. Examples include rumors, errors in news reports, or misunderstandings of facts. Disinformation, on the other hand, is false information that is deliberately spread with the intent to deceive people. This can involve organized campaigns to influence public opinion or manipulate political outcomes.
2. Artificial Intelligence and Deepfakes
Artificial intelligence (AI) refers to the capability of a machine to imitate intelligent human behavior. Deepfakes are a type of AI-generated media that take real images, audio clips, or videos and alter them to create something that appears real but is actually fake. This technology uses machine learning and artificial neural networks to replace faces, manipulate voices, and synthesize human attributes. The term “deepfake” comes from “deep learning,” a form of machine learning, and “fake,” indicating falsehood.
3. Political Context in Slovakia and Colombia
Slovakia: Slovakia is a central European country with a rich history. In recent years it has experienced significant political shifts, with parties representing ideologies ranging from pro-Western liberalism to pro-Kremlin populism. Understanding Slovakia's political leanings and controversies helps contextualize how deepfakes could sway public opinion during elections.
Colombia: Colombia, located in South America, has faced its own set of political challenges, notably political polarization. This means the society is divided sharply into opposing political factions, often leading to intense and sometimes violent disagreements. The political environment in Colombia makes it a fertile ground for misinformation to spread, as people may be more susceptible to believing and sharing information that supports their political views.
4. Critical Thinking Skills
Critical thinking involves the ability to think clearly and rationally, understanding the logical connection between ideas. It enables individuals to engage in reflective and independent thinking, necessary for making reasoned judgments. In the context of misinformation and disinformation, critical thinking skills help individuals to:
- Evaluate the credibility of sources.
- Distinguish between facts and opinions.
- Recognize bias and propaganda.
- Analyze and interpret data and arguments.
5. Importance of Educating Against Misinformation
Educational interventions, like the ones mentioned in the article, aim to equip people with the skills to critically assess the information they encounter. This is increasingly important in a digital age where AI and technology can create realistic but false content. Teaching people, especially young students, to question the validity of information and to think critically about the sources and content they consume is crucial in combating misinformation.
Debate/Essay Questions
- Is the use of AI to create deepfakes a threat to democracy?
- Should governments regulate the creation and distribution of deepfakes?
Please subscribe to Insight Fortnight, our biweekly newsletter!