Disinformation is changing the way we view the world

August 10, 2017
  •  Disinformation is proliferating and intensifying threats, seen and unseen.
  • False narratives propagated by state and non-state actors using disinformation can ruin corporate and personal reputations, threaten business continuity, cause or intensify conflict, and deepen ideological fault lines.
  • A recent study of computer-generated propaganda and disinformation found that disinformation is having a strong impact on public opinion in some countries.
  • A lack of public understanding of the depth and sophistication of disinformation operations is leaving governments, organizations, and individuals vulnerable to cybercrime and false narratives that undermine sound decision-making.
  • Tools, information, and counter-measures are emerging, but education is the most effective tool we have against disinformation.

Back in January, not long after the new administration took office, Starbucks pledged to hire 10,000 refugees. The move quickly became a political issue, with people on one side applauding the effort and people on the other angered by it.

What drove some of the anger, however, was at best misinformation (unintentionally misleading information) and possibly disinformation (information that intentionally misleads in order to manipulate public opinion or perceptions), spread via several viral memes. One of the most visible memes said: “Hey Starbucks, instead of hiring 10,000 refugees, why don’t you hire 10,000 veterans?” But here’s the thing: Starbucks has had a goal of hiring 10,000 veterans through its very successful military partners program since 2013. And while other factors were at play too, the disinformation amplified the issue, which quickly lodged itself in a partisan fault line. Starbucks handled the issue, in part, by drawing attention to its long-running veterans program, and veterans who work with Starbucks came to its defense. But the story illustrates a growing problem: fake news and disinformation travel at the speed of a text message or a Twitter post, and their impact can damage reputations – and worse – catalyze security incidents and major geopolitical events.

As we’ve highlighted in a previous article, disinformation is NOT new. But it is an increasingly potent force for driving public opinion and catalyzing crises, amplified by its combination with other mediums such as social media, malware, and increasingly popular alternative media outlets. Technology, speed, and open access to low-cost tools for disseminating information instantly and globally allow anyone – state and non-state actors, politicians, companies, and ordinary people – to spread and amplify disinformation with the right combination of tools.

Disinformation is shaping public opinion

A new study from Oxford University discusses how technology and social media contribute to the spread of disinformation and propaganda. The Computational Propaganda Research Project found that propaganda and disinformation spread via social media are being used in multiple countries around the world to successfully shape public opinion on political issues, especially referendums and elections. Moreover, it found that highly automated social media accounts (also known as “bots”) were responsible for as much as 45% of all Twitter traffic in some countries. The study examined recent events in Canada, Russia, Brazil, China, the US, and several other countries to understand the impact of social media – especially automated social media – on public opinion. Particularly interesting findings from the paper included:

  • In Brazil, “bot networks and other forms of computational propaganda were active in the 2014 presidential election, the constitutional crisis, and the impeachment process.” These included “highly automated accounts supporting and attacking political figures, debate issues such as corruption, and encouraging protest movements.”

  • In the US, “Twitter bots … reached highly influential network positions within retweet networks during the 2016 US election. The botnet associated with Trump-related hashtags was 3 times larger than the botnet associated with Clinton-related hashtags.”
  • In Russia, over 45% of all social media activity is generated by botnets. Nearby, “Ukraine is the frontline of experimentation in computational propaganda, with active campaigns of engagement between Russian botnets, Ukrainian nationalist botnets, and botnets from civil society groups.”

In simple terms, bots are being used to pump out information – much of it false and misleading – in alarmingly high volumes in some countries. The use of bots to increase the visibility of disinformation across the internet plays into several cognitive biases, including availability bias and confirmation bias, and has proven highly effective at shaping public opinion. That impact affects everything from referendums and elections, to social and political stability, to consumer brand preferences, to whom we trust online with sensitive information, to our reputation and security.

Disinformation is driving geopolitical events

Disinformation has the potential to escalate diplomatic tensions into much larger geopolitical crises. Particularly in regions where hair-trigger responses to threats are common (Israel/Palestine, North Korea/South Korea, etc.), disinformation at the right tempo and time could touch off a major conflict before there is time to set the record straight. Two recent examples include an article in a Bahraini newspaper – planted disinformation that catalyzed the still-festering Qatar blockade – and a fake article about Israel and Pakistan.

In Bahrain, the offending article may have been a catalyst (and a convenient excuse) for GCC countries to cut off diplomatic relations with Qatar, though it was by no means the cause of the dispute. The crisis has created expensive logistical problems for businesses in the region and substantially complicated transnational relationships across the Gulf, US regional relationships, and counter-terrorism efforts there.

In another recent incident, a Pakistani minister tweeted a nuclear threat at Israel after reading a fake news piece about Israel’s response to reports that Pakistan was getting involved in Syria.

Disinformation is driving real world security events

Disinformation, misinformation, and outright fake news are causing real-world security problems. Most people will now be familiar with the fabricated news story that circulated in the final days of the election claiming that a certain candidate was running nefarious activities out of the basement of Comet Ping Pong, a pizza restaurant in Washington, DC. The fake story, which originated on Twitter and quickly spread to Reddit, back to Twitter, and on to Facebook, became the basis for a real-world security event when a man named Edgar Welch showed up at the pizzeria with a rifle in hand. Only then did he discover that the restaurant didn’t even HAVE a basement, let alone any politicians involved in human trafficking there.

Long before the election, similar fake stories were generated in an attempt to create more panic over the Ebola crisis and the events in Ferguson, along with a completely made-up story about a chemical plant owned by Columbian Chemicals exploding in Louisiana on 9/11.

The Columbian Chemicals case, in particular, demonstrated how such a hoax can disrupt a company and local emergency services and propagate disinformation through the use of botnets (though this particular attempt was not as successful as the perpetrators intended).

These cases show obvious links between disinformation and actual events, but more can undoubtedly be found by digging into histories of radicalization. Cases exist across the political and ideological spectrum highlighting the selective disinformation and propaganda that individuals consumed in the process of becoming radicalized.

Disinformation can lead to major IT security threats

Malicious links are often found in hyper-partisan click-bait publications that trade in fake news, disinformation, and hyperbolic and polarizing headlines or advertisements promoted by botnets. In a recent case, a Defense Department employee clicked on a link in his Twitter feed about vacation packages tailored to his interests. The link downloaded malware that allowed a server in Russia to take control of his computer and Twitter account. The same malware was sent to at least 10,000 DoD employees.

Like phishing attacks, these campaigns trick users into clicking, exploiting either trust in the friend who shared the post or the targeted nature of the content, which is tailored to their specific social media behavior. According to government sources, Twitter carries the most malicious links embedded in disinformation and other types of posts, due to the large number of botnets active on the site sharing information at high volume. The problem has also been identified, in smaller volume, on Facebook, which has been taking public steps to address disinformation and fake news proliferating on the site.

Disinformation may be driving some of your most important decisions

Distinguishing between what is real and what is fake is getting more difficult. Even more worrying is the gray area in between, where information is intentionally altered, taken out of context, or twisted to support a particular narrative. Over time these misconceptions become embedded in political discourse and become part of our basic assumptions. This drives poor decision-making and, in the extreme, dangerous rhetoric that dehumanizes and encourages violence against people who do not share a particular point of view – whether it is viral memes suggesting that police officers are violent, or media outlets that portray peaceful protests as being carried out by “violent criminals.” In the most extreme scenarios, disinformation and propaganda fed numerous genocides during the 20th century, whether religiously, ethnically, or politically driven.

In a less dramatic – but still worrying – scenario, Russia has targeted multiple demographics in the US, including US military personnel, through fake “patriotic” websites and Facebook groups that plant disinformation and fake news to influence their behavior and loyalties.

In aggregate, continued exposure to disinformation affects consequential decisions, from split-second battlefield calls to major policy initiatives fed by hyper-partisan narratives. But it also feeds everyday decisions, like whom we choose to hire, whether we interpret a person’s actions as a threat based on the way they look or some aspect of their character, and how we respond to that threat.

Falling behind on information warfare

Disinformation – though as old as time – has been harnessed in the information age to amplify social issues, increase distrust among disparate groups of people, and drive wedges between previously unified groups. While some countries have been wise to this for many years, in the US the phenomenon has caught many off guard, leaving us more vulnerable than many realize. The US public had more awareness of disinformation operations during the Cold War, but the fall of the Soviet Union and the shift in security focus to terrorism and threats from non-state actors dulled the public’s understanding of the topic. Meanwhile, disinformation tradecraft (particularly in Russia and Eastern Europe) became more sophisticated, more automated, and easier to exploit. This was most dramatically seen in the recent US election cycle, but it has been occurring throughout the last decade, and particularly since the US and EU levied sanctions against Russia in 2014 following the annexation of Crimea.

Managing Pandora’s Box

As with every problem that vexes our society, there is innovation and ingenuity at work to solve this one. And though there is no way to close Pandora’s Box again, there are ways to lessen the impact of disinformation. Key among these is educating ourselves, and those around us, about how our biases make us prone to falling prey to disinformation. In addition, there is a growing body of literature and study on the topic that can help us understand what is occurring, when it is occurring, and how to address it. We can also familiarize ourselves with some of the tools (from Facebook and Google, among others) that are emerging to address this problem – from initiatives to develop artificial intelligence solutions, to academic research, to the use of trusted data sources to refute questionable claims. There are good reasons to be hopeful about our ability to manage the threat that disinformation creates, but the fight against the use of technology to mislead, misinform, and drive wedges between people is particularly urgent, especially in an age when mass communication happens at the touch of a button.

Meredith Wilson is the Founder and CEO of Emergent Risk International, LLC. Find out more or subscribe to our newsletter here.