Fake News and Weaponized Bots: How Algorithms Inflate Profiles, Spread Disinfo and Disrupt Democracy

Photo source: Mike Corbett | CC BY 2.0

Algorithms are getting so sophisticated that it is becoming increasingly difficult to tell which online comments are real and which are generated by “bots”; which sites are genuinely popular and which are generating fake hits. In my new book Real Fake News (Red Pill Press), I argue that fake news can be traced back to ancient Babylon (at least) and that today’s hi-tech fakery is merely a continuation of policies designed to reinforce elite domination, be the given elite “right-wing” or “left-wing.”

DO BOTS AFFECT PERCEPTIONS?

Online fake news has become a phenomenon. By the time President Trump came to power, few Americans had heard of the “alt-right,” the ideological grouping partly responsible for Trump’s electoral success. Trump lost the popular vote by 2.6 million but won the Electoral College; in other words, “alt-right” voters were concentrated enough in key states to tip the Electoral College even without a popular-vote plurality. How do we explain this discrepancy: online fake news is a phenomenon, yet its main champions remain obscure to most Americans?

It turns out that bots push fake news stories toward “virality” by sharing them among networks of fake accounts (“sock puppets”) on social media. In 2011, a team at Texas A&M University created gibberish-spewing Twitter accounts. Their nonsense could not possibly have interested anyone, yet soon they had thousands of followers. The team found that these “followers” were, in fact, bots.

In 2017, under a Pentagon grant, Shao et al. analysed 14 million tweets spreading 4,000 political messages during the 2016 US presidential campaign. They found that “[a]ccounts that actively spread misinformation are significantly more likely to be bots.” Fake news, they say, includes “hoaxes, rumors, conspiracy theories, fabricated reports, click-bait headlines, and even satire.” Incentives include sending “traffic to fake news sites [which] is easily monetized through ads, but political motives can be equally or more powerful.” They also found that, during the 2016 campaign, the popularity profiles of fake news were indistinguishable from those of fact-checking articles.

The authors note that, “for the most viral claims, much of the spreading activity originates from a small portion of accounts.” These so-called super-spreaders of fake news are likely to be “social bots that automatically post links to articles, retweet other accounts, or perform more sophisticated autonomous tasks.” Regional vote shares for Trump did not match the geographical locations of (likely) bot accounts. Though unconfirmed, it is likely “that states most actively targeted by misinformation-spreading bots tended to have more surprising election results.”
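To make the “super-spreader” finding concrete, here is a minimal Python sketch. The data and account names are invented for illustration, and this is not the authors’ code or dataset; it simply measures what fraction of a story’s shares comes from the few most active accounts.

```python
from collections import Counter

# Toy data: one account ID per share of a single story. In a real study this
# would come from millions of tweets; the account names here are invented.
shares = (["bot_account_1"] * 400 +
          ["bot_account_2"] * 300 +
          ["bot_account_3"] * 200 +
          [f"human_{i}" for i in range(100)])

counts = Counter(shares)
top_accounts = [account for account, _ in counts.most_common(3)]
top_share_count = sum(counts[account] for account in top_accounts)

print(f"Top 3 of {len(counts)} accounts produced "
      f"{top_share_count / len(shares):.0%} of all shares")
# With this toy data, three accounts generate 90% of the activity: the kind
# of skew Shao et al. attribute to automated "super-spreaders".
```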

Ratkiewicz et al. argue that Twitter has a structural bias for fake news due to its “140-character sound bytes [which] are ready-made headline fodder for the 24-hour news cycle.” Ferrara et al. write that bots can “engage in … complex types of interactions, such as entertaining conversation with other people, commenting on their posts, and answering their questions.” The authors go on to note that bots “can search the Web for information and media to fill their profiles, and post collected material at predetermined times, emulating the human temporal signature of content production and consumption,” including the time of day when bot activity spikes.
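To illustrate what “emulating the human temporal signature” of posting might look like, here is a minimal, purely hypothetical Python sketch, not drawn from Ferrara et al. or any real bot: instead of posting at fixed, machine-like intervals, it samples posting times from a rough day/night activity curve.

```python
import random

# Made-up hourly activity weights: low overnight, higher through the day,
# peaking in the evening, loosely mimicking when humans tend to post.
HOURLY_WEIGHTS = [1, 1, 1, 1, 2, 3, 5, 8, 10, 10, 9, 9,
                  10, 10, 9, 9, 10, 12, 14, 15, 13, 10, 6, 3]

def schedule_posts(num_posts, seed=42):
    """Return hypothetical posting times weighted toward human waking hours."""
    rng = random.Random(seed)
    hours = rng.choices(range(24), weights=HOURLY_WEIGHTS, k=num_posts)
    return [f"{hour:02d}:{rng.randrange(60):02d}" for hour in sorted(hours)]

# A naive bot posting every 90 minutes would leave a flat, machine-like trace;
# sampling from the weighted curve clusters posts into plausible waking hours.
print(schedule_posts(10))
```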

Not surprisingly, the military is in on it, too. In addition to the Pentagon funding mentioned above, in 2014 the Guardian revealed that the UK Ministry of Defence was spending over £60,000 of taxpayers’ money on a project called Full Spectrum Targeting. The project was conducted with Detica (a subsidiary of BAE Systems), the Change Institute and Montvieux. “Emphasis is put on identifying and co-opting influential individuals, controlling channels of information and destroying targets based on morale rather than military necessity.” The Cognitive and Behaviour Concepts of Cyber Activities project cost over £310,000 and involved Baines Associates, i to i Research and several universities, including Northumbria, Kent and University College London.

BOTS AS PSYCHOLOGICAL WEAPONS

There were serious underlying structural problems that led to Donald J. Trump becoming President of the USA, but fake news and the “alt-right” acted as a trigger, exploiting those underlying problems. Social media and bots helped Trump’s cause. Scientists have argued that the sheer volume of social media users means that even the comparatively small influence of psychological targeting can translate into significant numbers of affected users.

In 2014, scientists working for the Center for Tobacco Control Research and Education at the University of California, San Francisco exploited nearly 700,000 Facebook users by making them participate in an experiment without their knowledge or consent. “The experiment manipulated the extent to which people … were exposed to emotional expressions in their News Feed,” says the research paper.

The experiment “tested whether exposure to emotions led people to change their own posting behaviors.” The two parallel experiments involved (1) reducing users’ exposure to friends’ positive content and (2) reducing their exposure to friends’ negative content:

“[F]or a person for whom 10% of posts containing positive content were omitted, an appropriate control would withhold 10% of 46.8% (i.e., 4.68%) of posts at random, compared with omitting only 2.24% of the News Feed in the negatively-reduced control … As a secondary measure, we tested for cross-emotional contagion in which the opposite emotion should be inversely affected: People in the positivity reduced condition should express increased negativity, whereas people in the negativity reduced condition should express increased positivity.”

The results concerning emotional contagion were statistically minuscule: 0.001. But, as the authors point out, given “the massive scale of social networks such as Facebook, even small effects can have large aggregated consequences.” This, they theorize, equates “to hundreds of thousands of emotion expressions in status updates per day.”
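To see why the authors argue that such a tiny effect still matters at scale, here is a back-of-the-envelope calculation in Python. The 0.001 figure is the effect size quoted above (read loosely, for arithmetic’s sake, as a 0.1% per-post shift); the daily post volume is an illustrative assumption, not a number from the study.

```python
# Effect size quoted above, read loosely as a 0.1% shift per status update.
effect_size = 0.001

# Illustrative assumption: hundreds of millions of status updates per day on a
# Facebook-scale platform (not a figure taken from the paper).
daily_status_updates = 500_000_000

affected_per_day = effect_size * daily_status_updates
print(f"{affected_per_day:,.0f} altered emotion expressions per day")
# -> 500,000 per day, i.e. "hundreds of thousands", despite the tiny per-post effect.
```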

This is relevant to fake news because it shows how bot-driven amplification can cause emotional contagion among large numbers of potential voters.

FAKE NEWS, BOTS & THE MAKING OF A PRESIDENT?

The New York Daily News reports that Robert Mercer, the billionaire hedge-fund manager and Trump mega-donor, worked for IBM on technology used to develop its Watson supercomputer (“Brown clustering”), as well as technology used in Apple’s Siri. There is no evidence directly connecting Mercer to pro-Trump bots, yet the kinds of technologies and services in which Mercer-related companies are involved include influencing elections:

Trump has 30 million Twitter “followers,” only about half of whom are real; the rest are bots. The newspaper also spoke to Simon Crosby of Bromium, who explained that some of the Watson technology, allegedly developed by Mercer, “can quickly build, test and deploy bots or virtual agents across mobile devices or messaging platforms to create natural conversations between apps and users.” Crosby goes on to say that “arbitrary and ridiculous information [is] spread very quickly, and now to targeted user[s],” who are “more susceptible to believing it and spreading it.”

One of Trump’s first Twitter “supporters” was a bot called PatrioticPepe, a reference to Pepe the Frog, the unfortunate creature appropriated by the “alt-right” for its bigoted agenda. A full fifth of the Twitter accounts tweeting about the 2016 election were bots. Hilarity ensues when the organization reporting on this, the Washington Post, also notes that its data on fake, Trump-supporting accounts come from Twitter Audit, and Twitter Audit estimates that more than 35% of the Washington Post’s own Twitter followers are also bots. The reporter overcomes this hypocrisy by writing that Trump’s percentage of fake followers is higher than that of his own organization, so that’s okay, then.

As I (in The Great Brexit Swindle) and others (e.g., Cadwalladr) have documented, Brexit was in part a psychological operation aimed at the public by mega-wealthy hedge-fund managers who want out of Europe and its financial-control directives. The Guardian’s Carole Cadwalladr spoke with Andy Wigmore, communications director of the Leave.EU campaign. Wigmore was behind the famous Trump-Nigel Farage meeting and photo op (Farage being the hard-right, pro-“free market” former leader of the UK Independence Party).

Recall the contagion effect measured above. Referring to Brexit, Wigmore explained (in Cadwalladr’s paraphrase) that “Facebook was the key to the entire campaign.” Wigmore is quoted as saying: “using artificial intelligence, as we did, tells you all sorts of things about that individual and how to convince them with what sort of advert,” i.e., spread contagion about the things that matter to voters, such as immigration.
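As a purely hypothetical illustration of the kind of targeting Wigmore describes (the issue names, scores and advert texts below are invented, and nothing here is taken from the Leave.EU system, whose details are not public), a targeting rule can be as simple as mapping a user’s strongest inferred concern to the advert most likely to resonate:

```python
# Hypothetical sketch of attribute-based advert targeting; the issue names,
# scores and advert texts are invented for illustration only.
ADS = {
    "immigration": "Advert stressing border control",
    "economy": "Advert stressing the cost of EU membership",
    "sovereignty": "Advert stressing 'take back control'",
}

def pick_advert(issue_scores):
    """Return the advert matching the user's strongest inferred concern."""
    top_issue = max(issue_scores, key=issue_scores.get)
    return ADS.get(top_issue, "Generic campaign advert")

# A (fictional) profile inferred from someone's social media activity.
voter = {"immigration": 0.8, "economy": 0.3, "sovereignty": 0.5}
print(pick_advert(voter))  # -> Advert stressing border control
```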

CONCLUSION

Psychological warfare emanating from billionaires like Mercer, under the guise of online grassroots (in reality astroturf) organizations, as well as from the military in as-yet-undisclosed forms, cannot dictate politics in a vacuum. Rather, it provides a subtle background to, and trigger for, complicated underlying factors, the main one being widespread discontent with current political systems. Fake news lights a fuse, igniting the powder keg of discontent. But we should keep in mind, too, that monarchs, despots, big business, and advertisers have, throughout history, used the latest technologies to manipulate, dazzle and even terrify those over whom they exercise power.


T. J. Coles is director of the Plymouth Institute for Peace Research and the author of several books, including Voices for Peace (with Noam Chomsky and others) and Fire and Fury: How the US Isolates North Korea, Encircles China and Risks Nuclear War in Asia (both Clairview Books).