Introduction: A War Fought on Multiple Fronts
The Israel-Hamas conflict is a highly sensitive and complex issue that reverberates on the international stage. While the physical battle takes place in the Middle East, another equally intense war is being waged online, particularly on social media platforms like X (formerly known as Twitter). This digital war centers on controlling the narrative through an unprecedented deluge of disinformation, fake videos, and altered photos.
In this article, we’ll dissect the mechanisms that have allowed this disinformation to proliferate and the role that social media algorithms play in exacerbating the problem.
The Scale of Disinformation: An Unprecedented Phenomenon
It’s not novel for major world events to be accompanied by waves of disinformation. However, the Israel-Hamas conflict presents a unique case due to the sheer scale and speed at which disinformation is being disseminated. Open Source Intelligence (OSINT) researchers like Justin Peden, known online as the Intel Crab, have noted the increasing difficulty in discerning credible sources. In the past, primary sources and verified content were more accessible. Now, such sources have become a rare commodity on platforms like X.
The Algorithmic Dilemma
To make matters worse, X’s algorithm promotes users who opt for an $8-a-month premium subscription, pushing posts from verified accounts to the top of news feeds. This system inadvertently drowns out voices from the ground, particularly non-English speakers, who are often the most reliable sources in a conflict zone. The algorithmic design not only stifles the diversity of perspectives but also becomes an unwitting tool in the hands of propagandists.
Misleading Visuals: From Video Games to Fireworks
The disinformation isn’t limited to textual content. Users have been bombarded with misleading visuals, including video game footage misrepresented as actual events and images of firework celebrations in Algeria portrayed as Israeli airstrikes. Such false information doesn’t just skew perceptions; it distracts OSINT researchers who could be spending their time verifying genuine footage from the conflict.
The Role of High-Profile Individuals
When public figures like Elon Musk recommend disinformation-spreading accounts to their massive followings, it throws a wrench into the already complicated machinery of truth verification. These endorsements can instantly amplify falsehoods, muddying the waters for those seeking accurate information.
The Impact of Organizational Changes: Firing the Gatekeepers
Experts suggest that many of the problems can be traced back to organizational changes at X. After Elon Musk took over, most of the team responsible for tackling disinformation was fired. Emerson Brooking, a researcher at the Atlantic Council's Digital Forensic Research Lab, argues that these changes have created an environment conducive to the spread of falsehoods and extremist propaganda.
Financial Incentives and the Dark Side of Engagement Metrics
The algorithm’s focus on engagement over accuracy creates perverse incentives for bad actors to share disinformation. Dramatic, hard-hitting images and videos generate high engagement, making them more likely to be promoted by the algorithm. This has a spiraling effect, where false narratives gain momentum, further entrenching divisive viewpoints.
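The feedback loop described above can be sketched as a toy simulation. To be clear, this is purely illustrative: the post labels, engagement rates, and impression formula are invented for the sketch, and this is not X's actual ranking system. It only shows why sorting by accumulated engagement lets a sensational false post climb past a sober accurate one:

```python
# Toy model of an engagement-first ranking loop (illustrative only;
# not X's actual algorithm -- all numbers here are made up).

posts = [
    {"label": "verified eyewitness report", "engagement_rate": 0.02, "score": 0.0},
    {"label": "dramatic but false clip",    "engagement_rate": 0.08, "score": 0.0},
]

def rank(posts):
    # The feed sorts purely by accumulated engagement, not accuracy.
    return sorted(posts, key=lambda p: p["score"], reverse=True)

for _ in range(10):
    ranked = rank(posts)
    for position, post in enumerate(ranked):
        # Higher placement -> more impressions -> more engagement,
        # which in turn earns higher placement next round.
        impressions = 1000 / (position + 1)
        post["score"] += impressions * post["engagement_rate"]

print(rank(posts)[0]["label"])  # -> "dramatic but false clip"
```

Because the more provocative post converts impressions to engagement at a higher rate, it wins the top slot early and the loop then feeds it the most impressions, which is the "spiraling effect" the paragraph describes.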
The Telegram-to-X Pipeline: An Unchecked Source of Information
Many primary videos and images of the conflict first appear on messaging platforms like Telegram. While this can afford a degree of anonymity to individuals on the ground, it also means the information is often not vetted before being disseminated on platforms like X, where it can be taken out of context to fit a particular narrative.
The Struggle for Ground Truth
In this environment, even seasoned OSINT researchers find it difficult to ascertain what’s real and what’s not. False claims, like the rumor about Israeli Prime Minister Benjamin Netanyahu being hospitalized, demonstrate how easily even experts can be misled.