Twitter to limit politicians’ premature claims of victory, remove calls for violence

It's the latest effort by platforms to deal with a flood of misinformation and disinformation ahead of the election.

With less than a month until Election Day in the U.S., Twitter said it would limit politicians’ ability to claim premature electoral victories, and remove calls for violence or interference in election results.

Tweets falsely claiming victory will be labeled, and users will be directed to credible information about the election, the company announced Friday. Any tweet intended to incite interference with the election, whether in the presidential or congressional races, will be removed.

The policy change comes amid a contentious election in which President Donald Trump has repeatedly questioned the integrity of the vote and made unfounded claims about fraud. Twitter has been labeling Trump’s tweets about mail-in voting and directing users to factual information, but critics have called on the platform to do more.

In the unrest following the killing of George Floyd, an unarmed Black man, in May, Trump tweeted, “when the looting starts, the shooting starts,” a message that Twitter said violated its policy against glorifying violence.


Now, Twitter is going further in clamping down on misinformation and incitement. The platform said it would put additional warnings on misleading tweets from U.S. politicians and campaigns or accounts with more than 100,000 followers, and keep users from retweeting or replying to them.

“We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these tweets,” Twitter’s Vijaya Gadde and Kayvon Beykpour wrote in a blog post.

Renee DiResta, an expert on misinformation ecosystems at the Stanford Internet Observatory, said that the new Twitter policy was carefully aimed at keeping misleading content from spreading.

The announcement Friday is the latest effort by social media platforms to deal with a flood of domestic and foreign sources of disinformation. Last month, Facebook removed more than 200 phony accounts tied to Russian operatives, including the troll farm that interfered in the 2016 election. Earlier this week, Facebook banned QAnon conspiracy theorists, who have gained a rabid following in the U.S.


Twitter last month began rolling out new features to secure the accounts of political campaigns and major news outlets, whose compromise could impact voter perceptions.

The security measures are inspired by the bruising experience that social media platforms had during the 2016 election. Then, according to U.S. intelligence agencies, Russian bots and trolls spread disinformation on Twitter and Facebook in a bid to damage Hillary Clinton’s campaign and boost Donald Trump. This year, U.S. intelligence agencies have warned that Russia is again trying to denigrate the Democratic presidential nominee, former vice president Joe Biden.

Written by Sean Lyngaas

Sean Lyngaas is CyberScoop’s Senior Reporter covering the Department of Homeland Security and Congress. He was previously a freelance journalist in West Africa, where he covered everything from a presidential election in Ghana to military mutinies in Ivory Coast for The New York Times. Lyngaas’ reporting also has appeared in The Washington Post, The Economist and the BBC, among other outlets. His investigation of cybersecurity issues in the nuclear sector, backed by a grant from the Pulitzer Center on Crisis Reporting, won plaudits from industrial security experts. He was previously a reporter with Federal Computer Week and, before that, with Smart Grid Today. Sean earned a B.A. in public policy from Duke University and an M.A. in International Relations from The Fletcher School of Law and Diplomacy at Tufts University.