Potemkin Realities: How Social Media Misinformation Is Rewiring Young Minds
The Rising Threat of Misinformation
History frequently pays for its lessons in blood and chaos, and when we forget them, they tend to recur. While I wasn’t a big reader during my time at the academy, there was one book that left a lasting impression on me.
It introduced me to the concept of Potemkin villages, an early form of deliberate deception used to create a false appearance of prosperity in front of important visitors.
The book—focused on psychological warfare, manipulation, and the art of disinformation—revealed how lies, when dressed up convincingly, can mislead even the sharpest minds.
Grigory Potemkin’s strategy of building fake villages to impress Empress Catherine II wasn’t just a historical anecdote—it was an early warning of how powerful illusion can be in shaping perception.
Today, we’re witnessing a digital version of these villages on social media, where the boundaries between truth and fiction are blurring and curated illusions are spreading faster than facts.
Platforms like TikTok, Instagram, and YouTube are not only places of entertainment for young people but also their primary sources of news and information.
And just like Potemkin’s villages, the polished content often hides a more concerning reality: the exponential rise of misinformation and disinformation.
From Scrolling to Shaping Minds: Social Media’s New Role as a Newsroom
Social media’s transformation from a social space to a news platform has brought many benefits, including instant access to information, greater democratization of news, and real-time updates.
However, this shift has also exposed users, particularly young people, to significant risks.
According to a 2023 report by the Pew Research Center, over 50% of U.S. adults now rely on social media for news.
Among young people aged 18-29, the number is even higher, with platforms like TikTok, Instagram, and YouTube often surpassing traditional media outlets in their reach.
The same study found that nearly 70% of TikTok users said they regularly consume news on the platform, a stark contrast to the 30% of Facebook users in the same age group who do.
The problem with this shift is that these platforms prioritize engagement over accuracy, leading to the spread of sensationalized content, fake news, and misleading headlines.
For instance, TikTok’s endless stream of personalized short-form videos is designed to keep users engaged, and its algorithmic curation can push harmful content to a wide audience before it is identified and removed.
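To make the point concrete, here is a toy sketch (not any platform's actual algorithm, with made-up posts and numbers) of how a feed ranked purely by predicted engagement surfaces sensational content first, since accuracy never enters the score:

```python
# Hypothetical posts: engagement counts and accuracy flags are invented.
posts = [
    {"title": "City council passes annual budget", "engagement": 120, "accurate": True},
    {"title": "SHOCKING: miracle cure discovered!", "engagement": 9800, "accurate": False},
    {"title": "New study examines sleep habits", "engagement": 450, "accurate": True},
]

# Rank by engagement alone -- the "accurate" field is never consulted.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

for post in feed:
    print(post["engagement"], post["title"])
```

In this sketch the sensational false post tops the feed simply because it drew the most engagement, which is the dynamic critics describe when they say platforms prioritize engagement over accuracy.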
Misinformation vs. Disinformation: What’s the Difference and Why It Matters
Misinformation refers to false or inaccurate information that is spread without the intent to deceive, while disinformation is deliberately fabricated and spread to mislead others.
Both are on the rise, and the consequences are significant.
The World Economic Forum’s (WEF) 2025 Global Risks Report called misinformation and disinformation “one of the greatest risks facing countries, businesses, and individuals” over the next two years.
The problem is particularly severe because misinformation spreads faster than fact-checking measures can be implemented.
As a result, the public’s trust in media, government, and institutions is eroding, and social divisions are deepening.
A key factor in this rise is the decline of traditional journalism, where professional fact-checking was once the standard.
Many social media users are unaware that much of what they consume online is not fact-checked or verified, and the speed at which news spreads often leaves little room for verification.
One study by MIT found that false news stories are 70% more likely to be shared than true stories. This amplifies the power of fake news and disinformation, which thrives on the virality of sensational, emotionally charged content.
Young people, who are particularly vulnerable to this kind of content, frequently encounter misinformation without the skills necessary to separate fact from fiction.
Enter the Machines: How AI and Automation Supercharge the Spread
While human error and bias have long been factors in the spread of misinformation, the rise of automation and AI tools has introduced new complexities into the mix.
Automation tools, such as bots and algorithmic manipulation, are now frequently used to propagate false information on social media platforms.
These tools can generate vast quantities of fake accounts and content in a short period, overwhelming any attempts to identify and contain them.
AI-driven bots can mimic human interaction, generating fake likes, shares, and comments that give the illusion of credibility to false claims.
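A rough sketch (with entirely hypothetical numbers, not real platform data) shows why this illusion works: viewers see only the total engagement count and cannot tell bot activity from genuine activity, so a bot-backed false claim can appear more popular than a true one.

```python
# Toy model of bot amplification. All numbers are invented for illustration.
def perceived_popularity(real_likes: int, bot_likes: int) -> int:
    # A viewer sees only the sum; bot likes are indistinguishable from real ones.
    return real_likes + bot_likes

true_post = perceived_popularity(real_likes=300, bot_likes=0)
false_post = perceived_popularity(real_likes=40, bot_likes=2000)

print(f"true post: {true_post} likes, false post: {false_post} likes")
# The false claim, with far fewer genuine supporters, looks far more popular.
```

The asymmetry is the point: generating fake likes is cheap for an automated network, while earning genuine ones is not, so manufactured credibility scales faster than the real thing.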
Additionally, the advent of AI technologies like deepfakes has amplified the potential for creating convincing, though entirely fabricated, media.
Deepfakes are AI-generated videos or audio clips manipulated to make it appear as though someone said or did something they never did. These tools have become so advanced that even experts find it difficult to distinguish real content from fake.
For example, a deepfake of a politician giving a controversial speech could easily go viral, causing public unrest or influencing elections before the video is debunked.
The use of AI in the spread of misinformation has created a “credibility crisis,” where it becomes increasingly difficult for the public to trust the veracity of any media they consume.
This problem is compounded by the fact that social media companies have been slow to implement effective solutions for detecting and mitigating the spread of deepfakes, disinformation, and misinformation.
The Human Cost: How Digital Lies Leave Real-World Scars
Misinformation is not just a digital issue—it has real-world consequences.
On an individual level, young people may find themselves making decisions based on false or misleading information.
Whether it’s misinformation about health, politics, or social issues, the consequences can be significant.
For example, during the COVID-19 pandemic, false claims about the virus and about vaccines spread rapidly on social media, fueling panic, confusion, and vaccine hesitancy.
In extreme cases, misinformation can even lead to violence.
A 2023 study published in the journal Science Advances (“Subscriptions and external links help drive resentful users to alternative and extremist YouTube channels”) found that misinformation spread via social media played a role in the spread of extremist views and political violence, especially among younger users, who are often more impressionable and susceptible to manipulation.
The mental health impact of misinformation cannot be ignored.
Constant exposure to sensationalized, often false, content can create anxiety, distrust, and confusion, especially among younger people who are still developing critical thinking skills.
Taking Back Control: What We Can Do About It
Combating misinformation is not a simple task, but as individuals, we can play a crucial role in addressing the problem.
Here are some steps we can take to combat misinformation and protect ourselves and others from falling victim to it:
Promote digital literacy
Start by becoming more educated about how misinformation spreads.
By understanding the tactics used by bad actors, such as clickbait, sensational headlines, and emotional manipulation, we can become better equipped to spot misinformation when we encounter it.
Check the source
Before sharing content, always check the source. Is it a reputable outlet? Is the information corroborated by other trusted sources? If you’re unsure, take the time to fact-check.
Be skeptical of deepfakes
Given the rise of deepfakes, always remain cautious about videos or audio clips that seem too sensational or out of character.
If a video seems too perfect or unbelievable, it’s worth questioning its authenticity.
Limit your social media consumption
Social media algorithms prioritize engagement over accuracy.
To reduce your exposure to misinformation, consider limiting your time on these platforms or curating your feed to follow trusted, fact-based sources.
Engage in conversations
When discussing current events or news with others, be willing to engage in respectful, fact-based conversations.
Encourage others to seek out reliable sources, and if someone is spreading misinformation, respond calmly with credible, well-sourced facts rather than confrontation.
Conclusion: Fighting for Truth in the Age of Illusion
As social media continues to eclipse traditional media outlets as the primary news source for young people, the threat of misinformation grows.
Automation tools, AI-generated deepfakes, and the rapid spread of sensationalized content have created an environment where misinformation can thrive, and trust in media is eroding.
The responsibility to combat misinformation does not lie solely with the platforms; it is a shared responsibility that requires action from governments, tech companies, and individuals alike.
As consumers of news, especially younger generations, we must also take an active role in verifying the information we encounter and developing the skills needed to navigate an increasingly complex digital world.
Ultimately, the fight against misinformation is not just about protecting the integrity of news—it’s about protecting the trust and unity of societies around the world.
With concerted efforts from all sectors and a proactive approach by individuals, we can begin to rebuild the trust that has been eroded and ensure that the digital age doesn’t become one dominated by falsehoods.