Chapter 4 - Foreign influence efforts and the evolution of election tampering

After the successes of popular protest in the Arab Spring and around the Russian elections of 2011-12, the Kremlin increased its use of information operations and kompromat. Many techniques are employed to make disinformation appear genuine, from selecting television interviewees who will give a pro-Moscow interpretation of events on state-controlled channels, to exploiting both human and automated dissemination networks to push faked stories to audiences willing to mount dissent within foreign political systems.

Central to understanding Russian information-influence operations beyond the country's borders is the concept of the 'protest potential of the population'. The term appears in Russian military doctrine (Footnote 19) as one of the main features of modern (not just Russian) conflict, alongside military activities, political, economic and informational tools, and special forces. It was introduced into the doctrine after the Arab uprisings of 2011 and the widespread protests against vote-rigging in Russia in 2011 and 2012, which, according to Russian official statements, were staged by Western powers to topple pro-Russian regimes.

The Kremlin's initial reaction was to target Russians themselves, to prevent any recurrence of democratic enthusiasm. Initiatives such as the 'foreign agents' law, used to crack down on pro-transparency NGOs, stem from this period. Simultaneously, a troll factory (Russians paid to make political posts online) was established in St. Petersburg to flood Russian opposition communities with pro-government posts. Russia served as a test-bed for these methods; the government's first goal, as so often, was to ensure its own survival. Subsequently, and especially after the annexation of Crimea in 2014, the same weapons were turned on international targets, first Ukraine, then the West.

Approach

Russia's approach to information-influence operations in democratic states can be summarised as 'vilify and amplify'. Some parts of the Kremlin's system generate or gather material designed to undermine the target; other parts amplify that material, while preserving a degree of plausible deniability. The method dates back to Soviet times and the concept of kompromat (from 'compromising material'). In the 1980s, the Soviets planted a fake claim in an Indian newspaper that the CIA had created AIDS, then amplified it worldwide. The advent of deniable web sites and social media has made such techniques far easier to deploy.

One simple technique is to give a platform to commentators in the target country who validate the Kremlin’s narrative. For example, in 2014 and 2015, RT interviewed a disproportionately high number of members of the European Parliament from Britain's anti-EU UK Independence Party (UKIP); in the first half of 2017, Sputnik France devoted disproportionate coverage to politicians who attacked Emmanuel Macron. During the US election, RT and Sputnik repeatedly interviewed an academic who claimed that Google was rigging its auto-complete search suggestions to favour Clinton.

In such cases, what is left out matters as much as what is included. The interviewees can be, and usually are, sincere in their beliefs; the propaganda technique consists of amplifying and validating those beliefs without presenting the other side of the story. RT has repeatedly been found in breach of UK broadcasting rules by the national regulator in this regard.

Close analysis of the 'experts' themselves is also important. For example, in the build-up to the Catalan referendum of 1 October 2017, Sputnik's Spanish service headlined tweets from WikiLeaks founder Julian Assange more than those of any other commentator, including the Catalan president and the Spanish prime minister. Assange had never tweeted about Catalonia before 9 September 2017, and he is not known to have any particular expertise in Spanish constitutional affairs. Sputnik's decision to amplify his tweets, which attacked the Spanish government, therefore appears to have been based on his message rather than on any expertise.

Fake experts: Partisan commentators

A separate technique is to plant comments from Kremlin-aligned speakers without mentioning their affiliation. For example, after the shooting down of Malaysia Airlines flight MH17 over Ukraine, investigative journalists with the Bellingcat group gathered open-source evidence demonstrating that the plane had been shot down with a Buk-M1 missile which had entered Ukraine from Russia.

In response, a group of initially anonymous and 'independent' bloggers calling themselves 'anti-Bellingcat' published a lengthy report rebutting Bellingcat's findings. The anti-Bellingcat report was widely covered, in multiple languages, by Kremlin outlets.

It later emerged that, far from being independent, one of the two lead authors worked at the state-owned company which produces the Buk missile; the other was spokesman for a Kremlin-founded think tank linked to Russian intelligence.

Kremlin bodies have also created a number of 'independent' sites which mask their ties to the Russian government. NewsFront.info, for example, produces pro-Kremlin and anti-Western content in a number of languages; according to a whistleblower interviewed by Die Zeit, it is funded by Russian intelligence. Baltnews, a family of web sites in the Baltic states, claims to be independent but has been traced back to Sputnik's parent company. In October 2017, a highly active and influential far-right US Twitter account, @TEN_GOP, was exposed as being run from the troll factory. The account was extraordinarily successful: it was quoted in the mainstream media, retweeted by key Trump aides, and amplified disinformation which was eventually repeated by Trump himself.

The same month, a group known as AgitPolk ('agitation regiment') was outed as being tied to the troll factory. The group posed as online activists and repeatedly launched pro-Kremlin or anti-Western hashtag campaigns, including attacking US actor Morgan Freeman and wishing Russian President Vladimir Putin a happy birthday.

On one occasion, unknown actors created a complete mirror of The Guardian's web site to post a story claiming that the former head of MI6 had admitted that the UK and US had tried to break up Russia in the early 2000s. The fake was quickly exposed, but this did not stop Russian state TV from running lengthy reports on the story, validating the Kremlin's narrative of a Russia under siege.

The most damaging technique is hacking the emails of target politicians, and leaking them online. This is especially harmful because:

  • there is an implicit assumption that any leak must be damaging;
  • it is easy to insert faked documents amidst the real ones;
  • leaks can be held back until the most damaging moment; and
  • in an unsuspecting environment, real media are likely to amplify the leaks.

The hacking of emails from the campaign of US Democratic candidate Hillary Clinton, and their leaking online, fits squarely into this kompromat pattern. The leaks were used particularly aggressively, with a selection published daily in the month before voting day. The intent appears to have been two-fold: to undermine Clinton personally, and to attack the legitimacy of the election process in general, in the hope of galvanising the 'protest potential of the population' in the event of a Clinton victory. It is one of the ironies of 2016 that Clinton lost, and that Russia's interference ended up undermining the legitimacy of the very president it had boosted.

Another divisive technique, which is still being exposed, is the practice of buying partisan advertisements for placement on social media. Combined with the use of anonymous and aggressive social-media accounts, this technique appears designed to pit multiple groups with protest potential against one another.

Developments

Given the widespread exposure of recent techniques, we can expect them to evolve rapidly. Adaptations are likely to aim at masking attribution more effectively and at blurring the distinction between human and automated operators. We have already seen efforts to reduce the danger of leaks from the troll factory through a heightened insistence on patriotism among staff (Footnote 20). It is also noteworthy that, while the Clinton campaign emails were leaked via WikiLeaks, emails hacked from Macron's campaign were dumped anonymously on the web forum 4chan and amplified by the far right in the US, suggesting a desire to vary the delivery platform.

Social-media accounts are becoming increasingly sophisticated in their combination of human-authored and automated posts. Such 'cyborg' accounts typically post at high rates, often hundreds of times per day, but intersperse the automated output with human-authored posts, making them less obvious to bot-detection algorithms and harder to counter. This trend is likely to accelerate.
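As a rough illustration of why this works, the Python sketch below implements a naive automation-scoring heuristic of the kind alluded to above. It is a minimal sketch with entirely hypothetical thresholds, weights and data, not a description of any real platform's detection system: it scores an account on posting rate and textual repetition, and shows how diluting an automated stream with varied, human-authored posts can pull the score under a naive cut-off.

    # Minimal, illustrative sketch only: a naive automation-scoring heuristic with
    # hypothetical thresholds and data, not any real platform's detection system.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Post:
        text: str

    def automation_score(posts: List[Post], daily_rate: float) -> float:
        """Crude score in [0, 1] combining posting rate and textual repetition."""
        # Rate signal: saturates at 144 posts per day (one every ten minutes).
        rate_signal = min(daily_rate / 144.0, 1.0)
        # Repetition signal: pure bots tend to recycle identical template text.
        unique_ratio = len({p.text for p in posts}) / max(len(posts), 1)
        return 0.5 * rate_signal + 0.5 * (1.0 - unique_ratio)

    FLAG_THRESHOLD = 0.9  # hypothetical cut-off for flagging an account

    # A pure bot: 300 identical posts per day.
    bot_posts = [Post("The election is rigged!") for _ in range(300)]

    # A 'cyborg': the same automated stream diluted with varied, human-authored posts.
    cyborg_posts = bot_posts[:200] + [Post(f"My own comment on story {i}") for i in range(100)]

    for label, posts in (("pure bot", bot_posts), ("cyborg", cyborg_posts)):
        score = automation_score(posts, daily_rate=300)
        print(f"{label}: score={score:.2f}, flagged={score >= FLAG_THRESHOLD}")
    # pure bot: score=1.00, flagged=True
    # cyborg:   score=0.83, flagged=False

Real detection systems weigh many more signals, but the same logic holds: each human-authored post added to the stream erodes the simple statistical regularities that automated behaviour leaves behind.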

Hacking attempts can be expected to grow, especially from deniable actors whose links to the Kremlin are masked. The experience of 2016 showed that hacking and leaking can be a devastating weapon, but also that it can backfire if the hacks are attributed. It is likely that the leaks attacking Emmanuel Macron were published anonymously on 4chan, and spread by the far right in the US, precisely to make attribution still more difficult. As overtly Kremlin-owned outlets such as RT and Sputnik come under increasing scrutiny, a move away from them may also materialise, with greater emphasis on front outlets such as NewsFront and the Baltnews family.

Countermeasures: Building resilience

A number of disinformation countermeasures have already been trialled. The simplest has been to block the accreditation of pseudo-journalism outlets such as RT and Sputnik, as seen in the Baltic states and France. This approach sends a powerful signal, but it also sets a precedent which is open to abuse; such moves should be used only as a last resort.

Registration of state-controlled media is also an avenue worth pursuing; at the time of writing, RT and Sputnik are reportedly facing demands to register as foreign agents in the US. Again, such approaches must be measured: the key is to label the outlet without giving the impression of silencing it.

Regulation of journalistic standards can also play a part. In the UK, the communications regulator, Ofcom, has found RT in breach of broadcasting standards in a number of programmes. The sanctions have been symbolic, but the reputational damage has been considerable. Such regulatory findings, based on the detail of individual programmes and pegged to transparently defined standards of due accuracy and impartiality, are a valuable tool in efforts against all disinformation, from all sources.

Detailed fact-checking also has a part to play in debunking false stories and narratives. Given the emotional nature of most fake stories, fact-checking is not well suited to countering any specific story; over time, however, a regular pulse of fact-checking can help to expose the key sources of fakes. Exposing influence attempts is also important. In the best cases, such as the recent false allegations of rape against NATO soldiers in the Baltic states, rapid official engagement with the mainstream media to expose the attempt materially contributed to those stories' failure to gain traction (Footnote 21).

However, for such exposure to succeed, there must be a degree of understanding in the media and in society that influence operations are dangerous, should be taken seriously, and should be addressed promptly. Brushing the issue aside can have consequences. The US Director of National Intelligence warned on 7 October 2016 that Russia was attempting to interfere in the election; the warning was quickly drowned out by the release of the Access Hollywood tape, in which Trump boasted of grabbing women's genitals, and only gained nationwide traction after the election.

The importance of education and engagement with the population cannot be overstated. Disinformation spreads best in groups which are unsuspecting or which are biased in favour of the fake. Online literacy skills, such as how to identify a fake social-media account, a stolen photo or a tendentious article, should be taught far more widely; governments might also invest more in identifying, engaging with and listening to particular segments of their societies, to understand how and why fake stories spread among them.

There is no single answer to the complex and multi-faceted nature of disinformation. Regulation, fact-checking, exposure and education all have a role to play; a response which highlights just one, while ignoring the others, can be expected to fail. The solution is to boost resilience on as broad a front as possible.
