‘Spambots’ are a fact of life on Twitter – fake accounts built to spread everything from infected links to misinformation. Until now, users have had to rely on their instincts to spot them, but a new tool, “Bot or Not”, helps to uncover fake accounts instantly.
The tool, available free online, instantly offers a verdict based on several telling features of an account, and scores 0.95 on a standard measure of statistical accuracy, where 1.0 is perfect, the researchers say in their paper.
Bot or Not can be used freely here. It requires a Twitter login, but reports a wide range of statistics about an account in graphical form almost instantly, revealing whether it is run by a human or by a piece of software.
Developed by computer scientists at Indiana University, Bot or Not offers graphs and charts of a Twitter account’s friendship network, the content it has posted, and how people have reacted to it. Bot or Not analyzes more than 1,000 features of a Twitter account’s behavior, including its full friendship network, to offer a way to spot fakes, according to Network World.
The research is part of Truthy, a broader Indiana University project designed to analyze how large networks of fake accounts, or ‘bots’, can be used to spread political misinformation. The project’s current focus is tweets about politics, social movements and news.
Fake Twitter followers – and fake news – can also be used to spread malware, or to direct users to spammy pages. When a hacker group briefly seized control of E! News’ Twitter account, a Tweet claiming Justin Bieber was gay was retweeted 1,200 times. A fake news story posted on an AP News account, describing a bomb attack on the White House, wiped 143 points off the Dow Jones. Reports on the damage done by these attacks can be found on We Live Security here.
Bot or Not analyzes the sentiment of Tweets, their timing, and linguistic cues to work out if accounts are real or fake – and whether “news” spreading from these accounts is real, or malicious misinformation.
“We have applied a statistical learning framework to analyze Twitter data, but the ‘secret sauce’ is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks,” said Alessandro Flammini, an associate professor of informatics and principal investigator on the project.
“The demo that we’ve made available illustrates some of these features and how they contribute to the overall ‘bot or not’ score of a Twitter account.”
By training the software on accounts known to be bots, the researchers report that BotOrNot scores 0.95 on a standard statistical measure of accuracy, with 1.0 being perfect.
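The passages above describe a supervised approach: a model is trained on accounts labeled as bots or humans, using features drawn from content, timing and network structure, and then emits a score between 0 and 1. The sketch below is a heavily simplified, hypothetical illustration of that idea – the three hand-picked features, their thresholds and their weights are inventions for the example, not the 1,000-plus learned features the actual BotOrNot system uses.

```python
# Minimal, hypothetical sketch of feature-based bot scoring.
# The real BotOrNot trains a statistical model over 1,000+ features;
# the features, thresholds and weights below are illustrative only.
import math

def bot_score(account):
    """Return a 0.0-1.0 likelihood that the account is automated."""
    # Feature 1: follower/friend ratio (bots often follow many, are followed by few)
    ratio = account["followers"] / max(account["friends"], 1)
    f_ratio = 1.0 if ratio < 0.1 else 0.0

    # Feature 2: timing regularity (bots tweet at machine-like intervals)
    intervals = account["tweet_intervals_sec"]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    f_timing = 1.0 if math.sqrt(var) < 60 else 0.0  # std dev under a minute

    # Feature 3: content diversity (bots repeat the same text and links)
    texts = account["tweets"]
    f_content = 1.0 if len(set(texts)) / len(texts) < 0.5 else 0.0

    # Illustrative fixed weights; a real system learns these from labeled data
    return 0.4 * f_ratio + 0.3 * f_timing + 0.3 * f_content

# A repetitive, metronomic account scores high; a varied one scores low.
spammy = {"followers": 5, "friends": 2000,
          "tweet_intervals_sec": [300] * 10,
          "tweets": ["Buy now! http://spam.example"] * 20}
varied = {"followers": 400, "friends": 350,
          "tweet_intervals_sec": [120, 5000, 86400, 30, 7200],
          "tweets": ["a", "b", "c", "d"]}
```

In a real classifier the weights and thresholds would be fitted against the labeled training accounts, which is how the researchers arrive at their 0.95 accuracy figure rather than asserting weights by hand.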
“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” said Fil Menczer, professor of informatics and computer science. “Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”
Flammini and Menczer believe that social bots of this kind could be dangerous for democracy, cause panic during an emergency, affect the stock market, facilitate cybercrime and hinder the advancement of public policy. Their goal is to support human efforts to counter misinformation with truthful information.
Menczer has been interviewed by The New York Times on the use of social bots to sway elections – by cyclically posting the same fake news story until it “goes viral”.
Previous attempts to identify “fake” Twitter followers have used less sophisticated methods – flagging accounts simply for low numbers of followers and Tweets – but the number of fakes out there is enormous, according to Yahoo News.
Millions of accounts on the network are thought to be fakes – ‘silent’ accounts that rarely if ever Tweet, often created to ‘bulk out’ the following of celebrities, companies, or site users.
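Those earlier, simpler detectors amounted to little more than a threshold check on account statistics. A hypothetical sketch of such a rule-of-thumb might look like this (the function name and cut-off values are illustrative, not taken from any specific tool):

```python
def looks_fake_simple(followers, tweet_count):
    # Crude heuristic in the spirit of earlier fake-follower checkers:
    # flag 'silent' accounts with almost no followers and almost no tweets.
    # The thresholds (10 followers, 5 tweets) are illustrative guesses.
    return followers < 10 and tweet_count < 5
```

A rule like this catches only the laziest bulk-created accounts; bots that post regularly and buy their own followers sail straight past it, which is why BotOrNot’s richer feature set marks a step forward.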
The business of selling such fake Twitter followers is now worth between $44 million and $400 million a year.
The figure comes from analysis by two Italian security researchers, Andrea Stroppa and Carlo De Micheli, who spent months investigating the ‘grey market’ where Twitter followers are sold – and found dozens of firms selling followers, and even selling ‘retweets’ to make people appear interesting.
Agencies based in London and abroad will deliver thousands of such ‘followers’ for less than $20 – and boast that their client lists include celebrities, politicians and musicians.
Author Rob Waugh, We Live Security