Information Breakdown During Wartime
The Israel-Hamas war tested the new Twitter. The platform failed and confirmed what we already knew — the space is utterly cynical, toxic, and unreliable.
The Israel-Hamas war hadn’t even started before users on Twitter (forever Twitter, never X) began spreading mis/disinformation. Hamas had launched its terror attack, and news began to spread, but what in the past might have been unconfirmed radio or television reports were now widely shared images, videos, claims, theories, and speculation. By the time Israel responded, and the war officially began, the information space on Twitter was utterly unreliable and toxic — more unreliable and toxic than usual, that is.
Misinformation and disinformation are different things. The former is the spreading of untruths, the latter is the deliberate spreading of untruths, usually for some strategic or tactical end. If someone spreads disinformation to, say, make money or sow discord, and you share it, thinking it to be true — and not bothering to check its accuracy — you are spreading misinformation. It’s easy and common for the latter to become the former. Purveyors of disinformation count on it.
As the war unfolded, Twitter was criticized for encouraging and spreading disinformation, particularly through “verified” blue check accounts — that is, accounts that are paid for and may participate in ad revenue sharing driven by engagement. The old verification system, flawed as it was, was free, administered by Twitter, and focused on confirming that notable users were indeed who they said they were. Now, verification is a for-profit undertaking. It’s not about reliable information. It’s about cash.
As the Guardian reports, there was a surge of disinformation from verified accounts (and non-verified accounts) as the war began. According to the BBC’s Shayan Sardarizadeh, the flood of bad information was so severe that fact checkers and the platform’s community notes function, which is meant to give context or clarification to misleading or false posts, were overwhelmed.
As Sardarizadeh tweeted:
I’ve been factchecking on Twitter for years, and there’s always plenty of misinformation during major events. But the deluge of false posts in the last two days, many boosted via Twitter Blue [now X Premium], is something else. Neither factcheckers nor Community Notes can keep up with this.
For a time, Twitter was the go-to place for breaking news and conversation about domestic and world events. It was a space replete with journalists, experts, and reliable on-the-ground sources. It’s always been toxic to some degree, unreliable to some degree, but in recent months, since its purchase by Elon Musk, it’s become orders of magnitude worse — more toxic, less reliable. Truly wretched. And the platform, to the extent it even tries, can’t keep up with abusive, lying accounts.
These accounts are sometimes called “inauthentic.” Bots, for instance: automated accounts that exist to serve some function or another, often nefarious, almost always annoying. These accounts can be run by individuals or states — anyone looking to further some goal or to game the system, whether by setting the agenda, disrupting conversations, encouraging abuse, and so forth. They’re a problem. Always have been. Worse now, but not new. However, on top of these disruptive accounts, verified accounts themselves have become a serious problem.
In July, Twitter launched its ad revenue sharing program. Users who subscribe to the platform’s paid-for verified status can now receive a cut of ad revenue based on the impressions their posts generate. In the context of day-to-day news, this program creates an incentive for users to share click-bait nonsense, including innocuous engagement traps. “How tall do you think this giraffe is? Quote tweet with your answers!” During a terror attack and war, it incentivizes the worst people on earth to share doctored images or videos, lies, or gruesome content for clicks — and dollars. It also encourages them to produce takes that deliberately outrage and inflame, driving a cycle of attacks and abuse.
Twitter has created a system with few checks and balances that encourages cynical cash-chasing grifters to exploit tragedy in real-time, muddying the information waters and causing further harm in the process. I’ve long said that if you wanted to design a space custom-made to be hostile and counterproductive to good discussion, debate, deliberation, and decision-making, it would look a lot like Twitter (or Facebook). Now it’s become even worse. Indeed, Musk himself recommended Twitter users follow an account to understand what was happening in the war — an account that, it turns out, was writing antisemitic posts.
As Twitter has become more toxic and useless for following breaking news, high-quality sources of information — organizations and individuals — have become less inclined to use the platform, creating a death cycle. The good accounts go silent, and the bad accounts scream even louder, filling the space, boosted by Twitter’s algorithm and policies, which prioritize paid-for verified accounts over non-verified accounts.
The Israel-Hamas war is the first major test of the new Twitter. It has confirmed what we knew — the new space is more toxic and hostile, less reliable, and, in short, a threat to the flow of essential public information at home and around the world. And now, more than ever, the worst people have the greatest incentive to indulge their wretched and cynical impulses and to make it harder for everyone else to sort out what’s going on around the world.
Twitter and a subset of verified users are exploiting suffering, death, and chaos to make money. Other users are exploiting the moment for their own ends. And Twitter isn’t doing enough to slow or stop them. These accounts are profiting from an international crisis. That crisis may spiral into something even worse than it already is, and these ghouls are set to profit every step of the way.