The Origin, Rise, Definition and Societal Background of Online Influence Operations

The internet and social media have supercharged the effectiveness of Russia’s influence operations. Many other countries and political parties have developed similar capacities to distort the news, discredit opponents and manipulate politically relevant debates. Efforts to control information have been aided by the diminished capacity of traditional media outlets, which have lost their advertising revenues to social media.

Origins and Rise

The history of deception is, of course, as old as history itself. It was a forged letter that Sultan Baybars turned to in 1271 to fool his crusader enemies into surrendering the castle of Krak des Chevaliers. Another forged letter, apparently from Grigory Zinoviev, appeared four days before the British general election of 1924, falsely showing the Labour Party to be in cahoots with the Communists. The British SIS conducted a major covert campaign in the US across 1939 and 1940 to influence it to join the war [Footnote 1].

During the Cold War, both sides developed doctrines for ‘active measures’. As conflict was largely pushed out of the sphere of direct military confrontation, states made systematic attempts to influence watching publics on both sides of the Iron Curtain. Fake companies, front organisations, leaked letters, bogus journalism, planted conspiracy theories and manufactured protests were all used to wield influence. Statecraft and stagecraft were two sides of the same coin during the Cold War, as were the control of truth and the use of force.

For the practitioners of illicit influence, the arrival of the internet and social media created a potent new landscape where these doctrines could be applied. Social media companies had built enormous global forums far more open than newspapers and television. In the interest of growth, they were made as frictionless as possible: very easy to join and very easy to post in, with security checks and identity challenges treated as friction that stood in the way of growth and use. They were curated and shaped by algorithms and features that could be reverse engineered, gamed and manipulated. They also became increasingly personalised, serving up the information they thought users wanted and, in doing so, sometimes creating knots of hyper-partisanship: small online groups that could each be contacted and exploited. And they were spaces that were anonymous or pseudonymous, where users could appear to be anyone they wanted to be. Across social media platforms, the same logics held: it was easier to lie about who someone was than it was to detect the lie. It was easier to put content up than to take it down. Muddying the waters was extremely easy; provable attribution, almost impossible. Suddenly it was far easier and cheaper for militaries to reach foreign populations.

Since 2010, political parties and governments have spent more than half a billion dollars on the research, development and implementation of psychological operations and public opinion manipulation over social media [Footnote 2]. Russia’s campaigns have been the most heavily documented, especially in the wake of the 2016 US elections. Russia mounted ambitious online influence campaigns stretching across the Baltic States, Scandinavia, Central and Eastern Europe, and the West. Russian information warfare relied on a changing tapestry of newly influential voices online, including ‘independent’ experts, famous conspiracy theorists, front groups, real groups, activists, united fronts, real documents, forged documents, academics, journalists, and people posing as either or both, all of whom came together online around certain stories and narratives. There were automated accounts, semi-automated accounts, real accounts that looked automated, and automated accounts that looked real. There was targeted advertising. There were groups that ‘bombed’ search engine results and trending algorithms to make their content climb higher, and others that gamed reporting or flagging systems to clear away opposing content. There were those that packaged ideas as memes and virals, and experimented to see which spread most quickly. And underneath all of this were hackers: perhaps Russian intelligence masquerading as activists, criminals masquerading as Russian intelligence, or a strange combination of all of them.

It was, of course, not just Russia. Information operations were waged not only by states, but also by political parties and candidates, often through consultants and firms kept at arm’s length to preserve deniability for the candidates themselves. In Mexico, tens of thousands of automated accounts, known locally as Peñabots after President Enrique Peña Nieto, emerged to attack the opposition [Footnote 3]. In the Philippines there were salaried social media commentators mounting “a fanatic defense of Duterte, who’s portrayed as the father of the nation deserving the support of all Filipinos” [Footnote 4].

In Turkey, there are reports of ‘white trolls’ mobilised in support of the ruling Justice and Development Party. Some 6,000 people have allegedly been enlisted by the party to manipulate discussions, drive particular agendas, and counter government opponents on social media [Footnote 5]. In 2017, Oxford researchers found some form of illicit online influence happening in 28 countries. In 2018, it was 48: Angola, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahrain, Brazil, Cambodia, China, Colombia, Cuba, Czech Republic, Ecuador, Egypt, Germany, Hungary, India, Iran, Israel, Italy, Kenya, Kyrgyzstan, Malaysia, Mexico, Myanmar, Netherlands, Nigeria, North Korea, Pakistan, Philippines, Poland, Russia, Saudi Arabia, Serbia, South Africa, South Korea, Syria, Taiwan, Thailand, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States, Venezuela, Vietnam and Zimbabwe [Footnote 6].

Definition

There are many words currently used to describe illicit online influence: ‘information warfare’, ‘information manoeuvre’, ‘information dominance’, ‘influence operations’, ‘fake news’, ‘disinformation’, ‘misinformation’, ‘coordinated inauthentic behaviour’ or, in the Russian formulation, ‘moral-psychological-cognitive-informational struggle’. The phenomenon is far easier to conduct than it is to define.

Information warfare is stubbornly resistant to definition because it is, by its nature, a transgression of boundaries and definitions. It involves the creation of fake people amplifying truths, and real people sharing falsehoods. It can be both the hounding and abuse of politicians, journalists and critical voices to drive them off platforms, and the creation of adoring crowds and cheerleaders for whoever is paying for it. It ropes in famous conspiracy theorists, front groups, real groups, activists, united fronts, real documents, forged documents, academics, journalists, and people posing as either or both. It blends the very visible and the very secret, the attributed and the non-attributed, merging actors that are centrally controlled with those that are completely self-directing. It is nothing less than the creation of synthetic ideational realities. It is a world that everybody sees but few know exists, intent on manipulating what is seen, what is believed, our cultures and values, in ways that are hidden.

It blurs the distinction between war and peace, militaries and civilians, machines and humans, states and media, weapons and information. It has no formal legal definition under international law; it is never declared, and never formally ceases. If it is warfare, it is waged by a vast array of different groups. If it is not, it is nonetheless an activity of great interest to militaries. Definitions, legal and otherwise, are needed to develop responses to it, to deter it, and even to know it is happening.

Wikipedia

Illicit influence is likely happening across a much broader array of channels, sources and platforms than is often supposed. In 2019, the BBC investigated possible manipulation of Wikipedia, the openly editable online encyclopedia, by the Chinese state.

The BBC’s investigation found 1,511 tendentious edits across 22 politically sensitive articles. It could not verify the identity of the users who made these edits, nor whether they reflected a more widespread practice. However, the edits consistently changed Wikipedia’s content on these articles to align more closely with the Chinese government’s stated position. The Hong Kong protests page had seen 65 changes in the space of a single day, largely over questions of language: were they protestors, or rioters? The English entry for the Senkaku Islands said they were ‘islands in East Asia’, but earlier that year the Mandarin equivalent had been changed to add that they are ‘China’s inherent territory’. The Mandarin entry on the 1989 Tiananmen Square protests was changed to describe them as ‘the June 4th incident’, put down to ‘quell the counter-revolutionary riots’. On the English version, the Dalai Lama is a Tibetan refugee; in Mandarin, he is a Chinese exile [Footnote 7].
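
The revision histories these findings draw on are public. As a rough sketch, and not a reconstruction of the BBC’s actual methodology, the short Python script below uses the standard MediaWiki API to count how many times an article was edited within a given time window; the article title and timestamps shown are placeholder assumptions for illustration only.

```python
# Illustrative only: counting revisions to a Wikipedia article within a time
# window via the public MediaWiki API. This is NOT the BBC's methodology; the
# article title and timestamps below are placeholders.
import requests

API = "https://en.wikipedia.org/w/api.php"

def count_revisions(title: str, newest: str, oldest: str) -> int:
    """Count revisions to `title` between two ISO-8601 timestamps."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "redirects": "1",
        "rvprop": "timestamp|user|comment",
        "rvstart": newest,   # enumeration runs newest-to-oldest by default
        "rvend": oldest,
        "rvlimit": "max",
        "format": "json",
        "formatversion": "2",
    }
    total = 0
    while True:
        data = requests.get(API, params=params, timeout=30).json()
        page = data["query"]["pages"][0]
        total += len(page.get("revisions", []))
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow API pagination
    return total

if __name__ == "__main__":
    # Hypothetical example: one day's edits to a politically sensitive page.
    n = count_revisions("2019–2020 Hong Kong protests",
                        "2019-09-02T00:00:00Z", "2019-09-01T00:00:00Z")
    print(f"Revisions in that 24-hour window: {n}")
```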

The investigation also contained testimony from Taiwanese Wikipedians who reported that they had been subject to hostility, abuse and threats from pro-Chinese users. It also identified documents that called on the Chinese government to adopt a concerted strategy to change Wikipedia’s content. One paper, ‘Opportunities and Challenges of China’s Foreign Communication in the Wikipedia’, was published in the Journal of Social Sciences in 2019 [Footnote 8]. In it, the academics Li-hao Gan and Bin-Ting Weng argue that “due to the influence by foreign media, Wikipedia entries have a large number of prejudiced words against the Chinese government”. They continue: “we must develop a targeted external communication strategy, which includes not only rebuilding a set of external communication discourse systems, but also cultivating influential editors on the wiki platform”. They end with a call to action: “China urgently needs to encourage and train Chinese netizens to become Wikipedia platform opinion leaders and administrators… [who] can adhere to socialist values and form some core editorial teams”.

Societal Background

Finally, a note on the wider societal background against which the practice of illicit online influence occurs. The rise of digital platforms has not only created new spaces and opportunities for influence operations; these platforms and services are also themselves agents of social change that, through a number of broader macro-trends, may have decreased society’s resilience to illicit influence.

The first trend is the commercial decline of professional journalism. US newspaper advertising revenue fell from $65.8 billion in 2000 to $23.6 billion in 2014; British newspaper ad spend went from $4.7 billion to $2.6 billion over roughly the same period [Footnote 9]. By 2013, it was estimated that more than 3,000 Australian journalists had lost their jobs in the previous five years. The number of journalists in the UK shrank by up to one third between 2001 and 2010, and US newsrooms declined by a similar amount between 2006 and 2013 [Footnote 10]. Newspaper and most other media companies around the world are reporting falling revenues [Footnote 11].

As journalism declines, especially at the local level, ‘news deserts’ and ‘ghost papers’ have emerged: places either completely unserved by local news, or served only by outlets with insufficient resources to provide original reporting on local issues.

First, it is possible that illicit influence activities, especially through hyper-partisan news outlets, can move into these underserved markets. Second, mainstream news journalists are under greater pressure to publish more stories, and a growing trend is for the themes, tropes or claims of illicit influence to be repeated and amplified by mainstream news sources, possibly simply because an overburdened journalist did not have time to verify their origin. Third, there are fewer resources available for the time-consuming and expensive investigative journalism needed to report on illicit influence itself, and to attempt to reveal who is behind it and what their tactics and interests are.

The other major trend is that a substantial transformation in how and where crime occurs has caused a crisis for law enforcement. The 2016 Crime Survey for England and Wales, the first to include questions on cyber-crime, estimated that 3.9 million cyber-crimes had been committed over the previous year. Over half of fraud cases (1.9 million incidents) were cyber-related, and there had been a further 2 million computer misuse offences. Overall, it estimated that more than 40 per cent of the crimes people living in the UK actually experienced were committed through the internet [Footnote 12].

This presents a fundamental, cross-cutting problem for models of justice, investigation and jurisprudence around the world. Crime on the internet challenges the very basis on which the police are organised: geography. Victims, suspects and the evidence that links them together are scattered all over the world, and police forces simply cannot reach beyond their borders to support the victims, catch the suspects and gather the evidence that they need.

For someone operating from an uncooperative jurisdiction abroad, the risk of committing cybercrime is very low. It is therefore possible for the practitioners of illicit influence to operate from jurisdictions that will not cooperate with the jurisdictions they target, making a law enforcement response or criminal sanction unlikely. There are also multi-use cyber-criminal infrastructures (such as compromised password lists or whitelisted IPs) that are leveraged to conduct social media manipulation.

Conclusion

Overall, then, illicit influence is one of the most subtle yet potentially most disruptive threats commonly faced by liberal democracies around the world. As a practice, it is inherently easier for autocracies to conduct than for polities with enshrined distinctions between the state and the media. It can also intervene within liberal democracies during elections, emergencies and other important events. It is hard to define, and while suggesting responses to illicit influence is outside the scope of this paper, little evidence yet exists for the efficacy of many of those being considered. In its many forms, illicit influence does not only have a long history, but likely a stubbornly long future too.