Mike Bodnar contemplates detecting authenticity in a digital world of AI slop...
The foundation of a functional democracy and an informed society rests upon a shared understanding of factual reality. Yet, in the modern digital landscape, this foundation is rapidly eroding, to our detriment.
The rise of sophisticated generative Artificial Intelligence (AI) has introduced an unprecedented challenge: the swift, scalable, and increasingly seamless creation of fabricated news stories, images, and videos. Unlike the crude hoaxes of the past, AI-generated content can now mimic authentic journalism with such precision that distinguishing real news from engineered disinformation has become a profound cognitive and technical hurdle.
The current crisis stems from the democratisation of content-generation tools. Large Language Models (LLMs) can produce text that mirrors the tone, style, and structure of professional journalism, often faster than a human can type (certainly faster than me!). This capability has fuelled the rapid proliferation of "news" websites that are entirely AI-generated, churning out massive volumes of stories, often with politically motivated or financially exploitative agendas. In short, AI slop. And for most of us, slop conjures up pictures of bland food served with total disinterest, or shoddiness in workmanship. Both are appropriate analogies in this case.
The speed and volume are the key differentiators from previous eras of misinformation. A single actor can now deploy thousands of highly convincing fake articles across numerous platforms in minutes. This speed of spread, coupled with the algorithmic amplification inherent in social media platforms, means a falsehood can achieve global saturation long before human fact-checkers can even begin to debunk it.
The difficulty in recognising such material lies not only in the text's fluency, but also in its capacity to incorporate seemingly legitimate, though often hallucinated, sources and data, creating a façade of authenticity that is deeply persuasive.
The Visual and Auditory Assault: Deepfakes and the Disintegration of Trust
While deceptive text is problematic, the ability of AI to generate hyper-realistic images and videos — known as deepfakes — constitutes an even more fundamental assault on shared reality, effectively destroying the maxim that "seeing is believing."
Deepfakes are synthesised or manipulated media that replace one person's likeness with another's in existing footage, or which create entirely new, non-existent scenes and events. The old demand, 'pics or it didn't happen', doesn't work any more.
But these fabrications are so convincing that they have already had tangible real-world consequences, profoundly impacting financial markets and political stability.
In one notable 2023 incident, a deepfake image depicting an explosion near the Pentagon in Washington D.C. briefly circulated online, causing a swift dip in the U.S. stock market before the image could be officially debunked as generated by AI.
Another pervasive example involves the use of deepfake audio and video to impersonate figures of authority for financial fraud, such as the case of a finance worker being tricked into approving a multi-million-dollar transaction after participating in a video call with convincing AI clones of his company's Chief Financial Officer and other executives.
In the political arena, deepfakes have shown their power to sow chaos and distrust. During conflicts, deepfake videos have surfaced showing high-profile political figures seemingly issuing false surrender orders or making inflammatory statements, deliberately designed to undermine national morale and military resolve.
Perhaps one of the most viral, non-malicious-but-illustrative, deepfakes was an image of Pope Francis wearing a ridiculously stylish white puffer coat, an image so perfectly rendered and aesthetically humorous that it fooled millions, demonstrating the sheer power of AI to create believable, if slightly absurd, reality. Don't be surprised to see Jesus walking on water sometime soon.
The inherent problem with deepfakes is not just that they exist, but that they can be created and deployed so easily, ensuring that every piece of video evidence must now be treated with an initial, and essential, layer of scepticism.
A History of Human Deception: Roots of the Current Crisis
While the technology is new, the human impulse to fabricate narratives for influence or gain is ancient. Understanding this history is crucial, as it illustrates that the threat is not AI itself, but rather our own tendency to believe what is sensational or what confirms existing biases.
In ancient Rome, Emperor Octavian waged a sophisticated propaganda campaign against his rival, Mark Antony, using pamphlets and slogans etched onto coins to smear Antony’s reputation, portraying him as a corrupted puppet of Cleopatra. This early political misinformation played a critical role in Octavian’s ascent to power.
Scoot forward 1400 years or so, and the invention of the Gutenberg printing press dramatically lowered the barrier to mass communication, leading to the rise of printed, sensationalised "canards" (baseless rumours) and politically motivated pamphlets.
In the realm of visual hoaxes, the infamous “Great Moon Hoax” of 1835 serves as a stark precedent. The New York Sun newspaper published a series of six articles, falsely attributed to the famous astronomer Sir John Herschel, claiming that life — including bat-winged humanoids and unicorns — had been discovered on the Moon.
This sensational (and fabricated) journalism temporarily made the Sun shine as one of the most widely read newspapers in the United States, proving that narrative excitement often trumps factual accuracy.
In art, deception has a long history, too, with countless forged works throughout the centuries, such as the Venus de Brizet, an 18th-century statue buried and then "discovered" by an artist to boost his fame, fooling experts who declared it an ancient Roman artefact.
Perhaps the most famous example of mass media-driven panic was the 1938 radio broadcast of War of the Worlds by Orson Welles. Presented as a series of realistic breaking news bulletins, the dramatisation of an alien invasion caused widespread panic among listeners who missed the disclaimers, demonstrating the explosive potential of content engineered to mimic authentic reporting. These historical events confirm that the vulnerability to deception is a persistent feature of human psychology; AI has simply upgraded the tools of the deceivers.
And don't get me started on clickbait. Oh okay then...
Clickbait
People don't believe how easy it is to fall for clickbait, and everyone is saying the same thing about headlines...
Clickbait is the insidious gateway drug to online content, a psychological tool designed not to inform, but to guarantee engagement. Its primary function is to exploit the "curiosity gap" — the cognitive space between what a reader knows and what they desperately want to know. By using hyper-emotional language, superlatives, and vague, tantalising promises ("You Won't Believe What Happens Next!"), clickbait triggers an intense, often subconscious, need for resolution.
This technique bypasses critical thinking entirely, substituting logic with raw emotional manipulation. Headlines frequently lean on feelings of outrage, astonishment, or urgency, ensuring an immediate, visceral reaction (which is why tabloids love 'em). For content publishers, clickbait is a highly effective monetisation model; more clicks translate directly into higher ad revenue, regardless of the story's actual quality or factual basis.
The profound consequence is twofold: it devalues legitimate journalism, forcing quality content to compete with sensationalism, and it actively conditions us to associate engagement with emotional arousal, rather than reliable information. It’s a self-perpetuating cycle that prioritises traffic over truth.
A Guide to Information Self-Defence in the Age of AI
So what can we do?
The fight against AI-generated misinformation cannot be won by technology alone; it requires a renewed commitment to critical thinking and verification by every individual. Since AI detection tools are often unreliable and easily bypassed, we must rely on our own analytical skills and a combination of technical checks.
1. Scrutinise the Source and Context
The first and most important step is to question the origin of the information. Ask: Is this story coming from a reputable news organisation with a history of fact-checking? Or is it from an unfamiliar blog, an unverified social media account, or a website that mimics a known publication? Reverse-search the article's core claim to see if it is reported by multiple, diverse, and credible sources. If a story is sensational but only appears on one unknown site, treat it with extreme caution.
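For the technically curious, that cross-check can even be automated. Here's a minimal Python sketch of the idea using a news aggregation service (NewsAPI.org in this case; the API key is a placeholder you'd register for, and the search phrase is just an example):

```python
# A minimal sketch of "reverse-searching a claim": query a news
# aggregation API and count how many distinct outlets carry the story.
# Assumes: pip install requests, plus a NewsAPI.org key (placeholder below).
import requests

API_KEY = "YOUR_NEWSAPI_KEY"  # placeholder: obtain a free key at newsapi.org
claim = "explosion near the Pentagon"  # example search phrase

resp = requests.get(
    "https://newsapi.org/v2/everything",
    params={"q": claim, "apiKey": API_KEY},
    timeout=10,
)
articles = resp.json().get("articles", [])
outlets = {article["source"]["name"] for article in articles}

print(f"{len(articles)} articles from {len(outlets)} distinct outlets")
# A sensational claim carried by many unrelated, credible outlets is far
# more trustworthy than one appearing on a single unknown site.
```

This is only a rough triage tool, of course; a falsehood can be widely reported too, so the quality of the outlets matters as much as the count.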
2. Look for Technical "Tells" in Visual Media
While deepfakes are improving, current AI still struggles with certain details, which can serve as vital clues:
- Hands and Fingers: In AI-generated images, look for anomalies in hands — too many or too few fingers, unnatural angles, or objects being gripped incorrectly.
- Faces and Symmetry: Unnaturally symmetrical or overly smooth skin texture, or distorted or misaligned ears, glasses, and jewellery can be giveaways.
- Text and Backgrounds: Text within AI images is often jumbled, misspelled, or nonsensical. Similarly, backgrounds may exhibit "warping" effects or impossible physics, such as objects blending into one another.
- Video Inconsistencies: In deepfake videos, look for unnatural eye-blinking (too little or too much), poor synchronisation between lip movements and audio, and inconsistent shadows or lighting across a scene.
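One classic photo-forensics "tell" check you can try yourself is error level analysis (ELA): re-save a JPEG at a known quality and look at where it differs from the re-saved copy, since pasted-in or retouched regions often recompress differently. It flags edits to real photographs rather than fully AI-generated images, so treat it as one clue among many. A minimal sketch, assuming the Pillow library and a placeholder file name:

```python
# Rough sketch of Error Level Analysis (ELA) with Pillow.
# Assumes: pip install Pillow; "suspect.jpg" is a placeholder file name.
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("suspect.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)        # recompress at a known quality
resaved = Image.open("resaved.jpg")

diff = ImageChops.difference(original, resaved)  # per-pixel error levels
extrema = diff.getextrema()                      # (min, max) per colour channel
max_diff = max(channel_max for _, channel_max in extrema) or 1
scale = 255.0 / max_diff                         # amplify the faint differences
ela = ImageEnhance.Brightness(diff).enhance(scale)
ela.save("ela_result.png")  # unusually bright patches merit a closer look
```

Interpreting the result takes practice (high-contrast edges always glow a little), but a cleanly pasted-in region often stands out as a bright block against an otherwise uniform image.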
3. Analyse the Tone and Language
AI-generated text can often be spotted by its lack of genuine human voice, original analysis, or specific, idiosyncratic details. Look for:
- Repetitive or Formal Language: A dry, overly matter-of-fact tone, excessive use of buzzwords, or repetitive sentence structures can indicate AI authorship.
- Lack of Context: If the writing makes sweeping claims but lacks appropriate contextual depth, or if it cites sources that are vaguely referenced or appear fake upon a quick search, it is highly suspect.
4. Employ Verification Tools
Reverse image search services such as TinEye or Google Lens let you check whether an image has appeared online before, where, and in what context; a sensational "new" photo that actually surfaced years ago on an unrelated site is an immediate red flag.

[Image: TinEye reverse image search]
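Under the hood, tools like these rest on fingerprinting pictures so that near-duplicates can be matched even after resizing or recompression. Here's a toy illustration of that idea using perceptual hashing; the Python imagehash library is real, but the file names are placeholders and the distance threshold is a rough rule of thumb, not TinEye's actual algorithm:

```python
# Toy illustration of image fingerprinting via perceptual hashing.
# Assumes: pip install Pillow ImageHash; both file names are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("known_original.jpg"))  # reference photo
suspect = imagehash.phash(Image.open("suspect_copy.jpg"))     # image under scrutiny

# Hamming distance between the 64-bit hashes: 0 means identical fingerprints,
# small values suggest the same picture, large values suggest unrelated images.
distance = original - suspect
print(f"Hash distance: {distance}")
if distance <= 8:  # rough, illustrative threshold
    print("Likely the same image (possibly resized or recompressed).")
else:
    print("Probably a different image.")
```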
So where does that leave us?
The age of generative AI has created a sophisticated new challenge to information authenticity.
By combining historical awareness of our own gullibility with modern vigilance toward technical imperfections, and by prioritising critical thinking over sensationalism, we can, admittedly with some effort, navigate the treacherous currents of the modern information ecosystem.
The battle for factual reality is not about censorship; it is about media literacy, and cultivating a healthy, informed scepticism towards the content that floods our digital lives.
So, now you know. From here on, whenever you see a tantalising headline, or an image or video that seems almost too perfect (or in which, sloppily, a person has six fingers!), question it, research it, evaluate it. Maintain your hold on reality!
A final note...
Oh, and one last thing: apart from about half a dozen sentences, and a few of my own interjections (as well as me altering American spelling to proper English), this article was written by Microsoft Copilot. Images were generated by Google Gemini in most cases.
It seemed only appropriate to get AI to tell us how to recognise AI...





