Calling all interns!!!


Intelligence Analysis Intern

Emergent Risk International is hiring interns! ERI is a Dallas-based risk and intelligence advisory firm. We help companies use intelligence and analysis to make better business decisions. We focus on three primary activities:

  • Assessment and Analysis: We provide bespoke geopolitical and threat intelligence products to address issues of concern to our clients. These products range from those developed for regular distribution to in-depth new market entry assessments.
  • Training: We train intelligence analysts to provide intelligence analysis in a business environment focusing on tradecraft and tools that drive more efficient and effective analysis. We offer a range of in-house and open trainings to address specific levels of experience and need.
  • Consulting: We help companies develop and improve their intelligence programs, providing end-to-end support, from assessing needs and providing analysis to recruiting and hiring highly qualified candidates.

Intelligence Analysis Interns will be responsible for helping ERI with a range of research, technology, analytic and administrative tasks to better serve its clients. Interns will contribute to research projects, products and services and will have a role in developing new products for the company.

Primary responsibilities:

  • Assist analysts by providing structured country research on issues of importance to ERI clients
  • Assist in building data visualizations and social media posts
  • Assist in developing business leads
  • Develop and maintain an awareness of relevant global issues impacting ERI’s primary client base
  • Administrative tasks as necessary

Remuneration: This role is unpaid and runs from the beginning of October through December (candidates interested in continuing through the spring semester will also be considered). Interns will receive intelligence analysis training, start-up experience and exposure to other critical professional skill sets. This internship can also be taken for course credit with the permission of a candidate's institution. Successful candidates will work out of our offices in downtown Dallas 15-20 hours per week. Some exceptional candidates may be considered for remote work.

Experience: The right candidate will possess most or all of the following qualifications:

  • Excellent writing skills
  • In the final year of an undergraduate program, or holding a completed BA/BS degree
  • Foreign language capability
  • International relations, political economy, economics, development, political science or other related fields of study are preferred
  • Working knowledge of information technology, social media and tech related trends
  • Experience living or studying abroad
  • Strong academic record
  • Self-motivated and able to manage time effectively
  • Strong work ethic and commitment

Application Deadline is September 30. Please send your resume, cover letter, and a recent writing sample to: eriteam@emergentriskinternational.com

The Geopolitics – Cyber Nexus


We live in a world exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.

Carl Sagan

When I talk to students or my interns about careers, one thing I tell them all: they must possess an above-average understanding of cyber and technology. Failing to grasp the meaning and implications of technology for the geopolitical landscape makes it impossible to understand the future trajectory of global security, inter-state relations, economics and politics.

For years, tech-oriented analysts warned of a quickening convergence between cyberthreats and traditional security threats – whether in crime or in the blending of geopolitics, war, diplomacy and cyber warfare. Today, cyber and physical threats are becoming indivisible. Consider the rise of ISIS, which would not have taken shape as quickly or broadly without the internet – and especially social media. The protests of the Arab Spring, aided by social media, rose faster and pushed the Middle East further in a shorter time frame than anyone would have thought possible. And criminality, from human trafficking to drug smuggling to the most basic of crimes: credit card fraud. You are more likely to have someone steal your credit card information virtually (or purchase it on a dark web site) than to be mugged on the street.

Cyberwarfare and technology form a major facet of the strategy of state and non-state actors. "Cyber" is nothing more than another tool used by a range of geopolitical and criminal actors to influence an outcome by force. When we discuss North Korea, for example, most reporting highlights developments in its missile program and the threat of traditional war. Much less is said about the covert cyberwarfare that has purportedly been aimed at North Korea's nuclear program for several years now. Even less is said of North Korea's own cyber capability and the attacks it has reportedly carried out against governments and major multinational companies, or of the country's purported role in the WannaCry virus that surfaced earlier this year, fashioned out of leaked NSA cybertools. None of this is secret, but it is repeatedly left out of geopolitical analysis that examines how scenarios between North Korea and the rest of the world might unfold.

We cannot credibly assess future scenarios without taking the cyber capabilities of the actors – state or non-state – into account. Likewise – it is increasingly hard to do the opposite as well. That is, credible cyberthreat analysis cannot be undertaken without consideration of the geopolitical and security aims of the actor. Is the attack part of a broader strategy to attack our country or organization? Is it being carried out by a state or non-state actor? Is there a motive that stretches beyond money? Who are they connected to? And yet, we continue to separate our analysis into the cyber and the non-cyber – almost guaranteeing that we are not appreciating the whole picture.

Ukraine is the most obvious example of how cyberwarfare has become an integrated part of political and conflict strategy. It's become a virtual blueprint for what modern hybrid warfare looks like. It is the future of conflict, but it remains relatively unappreciated outside a few well-informed circles of the geopolitical and cyber community. The Petya/"NotPetya" attack in late June is one of the most recent examples. While ground warfare continues in and near separatist-controlled Ukraine, a disinformation campaign has persisted for years, and critical infrastructure has been hit repeatedly – not by bombs (though that has also happened), but by cyberattacks. The NotPetya attack, while not spreading as far as WannaCry, impacted trains, airports, banks, electricity and several other types of critical infrastructure simultaneously. To that point, no malicious cyber actor had ever accomplished a simultaneous attack on several types of critical infrastructure. The attack, delivered through a compromised accounting software update and disguised as ransomware, was actually built to wipe hard drives with no possibility of recovery. In addition to taking the intended target's critical infrastructure offline, it spread to global business, bringing supply chains and logistics to a grinding halt. Many affected businesses still have not fully recovered and will suffer significant losses from the recovery expense in Q2 this year.

The probable state actor was identified quickly, given that Ukraine has suffered virtually non-stop cyberattacks for the past four years. Had the attack targeted a country or company with a less clear-cut adversary, making that connection may have taken longer, potentially making mitigation and defensive measures harder to implement.

To improve the way we understand the world, we have to better understand those things that may lie outside our expertise or comfort zone. Facebook's Chief Security Officer, Alex Stamos, made a similar point to the cyber community at the 20th anniversary of Black Hat, an annual conference dedicated to hacking. In Stamos' words, the hacking community has to become more diverse and inclusive – not just for diversity's sake, but to better understand the implications of hacking, who it harms and how it harms them. It is estimated that the world will need around 2 million more cybersecurity experts in the next two years alone. Meanwhile, academics, intelligence professionals, business leaders – and especially governments – need to get much smarter about cyber, rather than dismissing it. Until we do, our analysis will be incomplete.

Disinformation is changing the way we view the world

  • Disinformation is proliferating and intensifying threats, seen and unseen.
  • False narratives propagated by state and non-state actors utilizing disinformation can ruin corporate and personal reputations, threaten business continuity, cause or intensify conflict and drive deep ideological fault lines.
  • A recent study of computer generated propaganda and disinformation revealed that disinformation is having a strong impact on public opinion in some countries.
  • A lack of public understanding of the depth and sophistication of disinformation operations is leaving governments, organizations, and individuals vulnerable to cybercrime and false narratives that undermine sound decision-making.
  • Tools, information, and counter-measures are emerging, but education is the most effective tool we have against disinformation.

Back in January, not long after the new administration took office, Starbucks made a pledge to hire 10,000 refugees. The move quickly became a political issue, with people on one side applauding the effort and those on the other angered by it.

Some of the anger, however, was driven by, at best, misinformation (unintentionally misleading information), and possibly disinformation (intentionally misleading information meant to manipulate public opinion and perceptions), spread via several viral memes. One of the most visible memes said: "Hey Starbucks, instead of hiring 10,000 refugees, why don't you hire 10,000 veterans?" But here's the thing: Starbucks has had a goal of hiring 10,000 veterans through its very successful military partners program since 2013. And while other factors were at play too, the disinformation amplified the issue, which quickly lodged itself in a partisan fault line. Starbucks handled the issue, in part, by drawing attention to its long-running veterans program, and veterans who work with Starbucks came to its defense. But the story is illustrative of a growing problem: fake news and disinformation travel at the speed of a text message or a Twitter post, and their impact can damage reputations – and worse – catalyze security and major geopolitical events.

As we've highlighted in a previous article, disinformation is NOT new. But it is an increasingly potent force in driving public opinion and catalyzing crises, amplified by other mediums like social media, malware and increasingly popular alternative media outlets. Technology, speed and open access to low-cost tools for disseminating information instantly and globally allow anyone – state and non-state actors, politicians, companies and ordinary people – with the right combination of tools to spread and amplify disinformation.

Disinformation is shaping public opinion

A new study from Oxford University discusses how technology and social media contribute to the spread of disinformation and propaganda. The Computational Propaganda Research Project found that propaganda and disinformation spread via social media are being utilized in multiple countries around the world to successfully shape public opinion on political issues, especially related to referendums and elections. Moreover, it found that highly automated social media accounts (also known as "bots") were responsible for as much as 45% of all Twitter traffic in some countries. The study looked at recent events in Canada, Russia, Brazil, China, the US and several others to understand the impact of social media – especially automated social media – on public opinion. Particularly interesting findings from the paper included:

  • In Brazil, "bot networks and other forms of computational propaganda were active in the 2014 presidential election, the constitutional crisis, and the impeachment process." These included "highly automated accounts supporting and attacking political figures, debate issues such as corruption, and encouraging protest movements."
  • In the US, “Twitter bots … reached highly influential network positions within retweet networks during the 2016 US election. The botnet associated with Trump-related hashtags was 3 times larger than the botnet associated with Clinton-related hashtags.”
  • In Russia, over 45% of all social media activity is generated by botnets. Nearby, "Ukraine is the frontline of experimentation in computational propaganda, with active campaigns of engagement between Russian botnets, Ukraine nationalist botnets, and botnets from civil society groups."

In simple terms, bots are being utilized to pump out information, usually false and misleading, at alarmingly high numbers in some countries. The use of bots to increase the visibility of disinformation across the internet plays into several biases including availability bias and confirmation bias, and has proven highly effective at shaping public opinion. That impact affects everything from referendums and elections, to social and political stability, to consumer brand preferences, to whom we trust online with sensitive information, to our reputation and security.

Disinformation is driving geopolitical events

Disinformation has the potential to catalyze diplomatic tensions into much larger geopolitical situations. Particularly where hair-trigger responses to threats are common (Israel/Palestine, North Korea, South Korea, etc.), disinformation at the right tempo and time could touch off a major conflict before there is time to set the record straight. Two recent examples: a planted article in a Bahrain newspaper that catalyzed the still-festering Qatar blockade, and a fake article about Israel that drew a nuclear threat from Pakistan.

In Bahrain, the offending article may have been a catalyst (and a convenient excuse) for GCC countries to cut off diplomatic relations with Qatar, though it was by no means the cause of the dispute. The crisis has created expensive logistical problems for businesses in the region and substantially complicated transnational relationships across the Gulf, US regional relationships and counter-terrorism efforts there.

In another recent incident, the Pakistani Foreign Minister tweeted a nuclear threat at Israel after reading a fake news piece on Israel’s response to reports that Pakistan was getting involved in Syria.

Disinformation is driving real world security events

Disinformation, misinformation, and outright fake news are causing real world security problems. Most people will now be familiar with the fabricated news story that circulated in the final days of the election about a certain candidate running nefarious activities out of the basement of a pizza restaurant in Washington DC, Comet Ping Pong. The fake story, which originated on Twitter and quickly spread to Reddit, back to Twitter and on to Facebook, became the basis for a real world security event when a man named Edgar Welch showed up at the pizzeria with a rifle in hand. Only then did he discover that the restaurant didn't HAVE a basement, let alone any politicians involved in human trafficking there.

Long before the election, similar fake stories were generated in attempts to create more panic over the Ebola crisis and Ferguson, along with a completely made-up event in which a chemical plant owned by Columbian Chemicals supposedly exploded in Louisiana on 9/11.

The Columbian Chemicals case, in particular, demonstrated how such a hoax can disrupt a company and local emergency services and propagate disinformation through the use of botnets (though this particular attempt was not as successful as the perpetrators intended).

These cases show obvious links between disinformation and the actual event, but more can undoubtedly be found by digging into the annals of radicalization. Cases exist across the political and ideological spectrum of individuals consuming selective disinformation and propaganda in the process of becoming radicalized.

Disinformation can lead to major IT security threats

Malicious links are often found in hyper-partisan click-bait publications that trade in fake news, disinformation, hyperbolic and polarizing headlines, or advertisements created by botnets. In a recent case, a Defense Department employee clicked on a link in his Twitter feed about vacation packages tailored to his interests. The link downloaded malware that allowed a server in Russia to take control of his computer and Twitter account. The same malware was sent to at least 10,000 DoD employees.

Like phishing attacks, these campaigns trick users into clicking, based on trust in the friend who shared the post or on the targeted nature of the information, which reflects their specific social media behavior. According to government sources, Twitter carries the most malicious links embedded in disinformation and other types of posts, due to the large number of botnets active on the site sharing information at high volume. The problem has also been identified, in smaller volume, on Facebook, which has been taking public steps to address disinformation and fake news proliferating on the site.

Disinformation may be driving some of your most important decisions

Distinguishing between what is real and what is fake is getting more difficult. Even more worrying is the gray area in between, where information is intentionally altered, taken out of context, or twisted to support a particular narrative. Over time these misconceptions become embedded in political discourse and become part of our basic assumptions. This drives poor decision-making and, in the extreme, dangerous rhetoric that dehumanizes and encourages violence against people who do not agree with a particular point of view – whether it is viral memes suggesting that police officers are violent or media outlets that portray peaceful protests as being carried out by "violent criminals." In extreme scenarios, disinformation and propaganda fed numerous genocides during the 20th century, whether religiously, ethnically, or politically driven.

In a less dramatic – but worrying – scenario, Russia has targeted multiple demographics in the US including US military personnel, through the use of fake “patriotic” websites and Facebook groups that plant disinformation and fake news to influence their behavior and loyalties.

In aggregate, continued exposure to disinformation affects consequential decisions, from split second battlefield decisions to major policy initiatives fed by hyper-partisan narratives. But it also feeds everyday decisions, like who we choose to hire, whether we interpret a person’s actions as a threat based on the way they look, or some aspect of their character, and how we respond to that threat.

Falling behind on information warfare

Disinformation – though as old as time – has been harnessed in the information age to amplify social issues, increase distrust among disparate groups of people and drive wedges between previously unified groups. While some countries have been wise to this for many years, in the US the phenomenon has caught many off-guard, leaving the country more vulnerable than most realize. The US public had more awareness of disinformation operations during the Cold War, but the fall of the Soviet Union and the shift in security focus to terrorism and threats from non-state actors dulled the public's understanding of the topic. Meanwhile, disinformation tradecraft (particularly in Russia and Eastern Europe) became more sophisticated, automated and easier to exploit. This was most dramatically seen in the recent US election cycle, but has been occurring throughout the last decade, particularly since the US and EU levied sanctions against Russia in 2014 following the annexation of Crimea.

Managing Pandora’s Box

As with every problem that vexes our society, there is innovation and ingenuity at work here too. And though there is no way to close Pandora's Box again, there are ways to lessen the impact of disinformation. Key among these is educating ourselves, and those around us, about how our biases make us prone to falling prey to disinformation. In addition, there is a growing body of literature and study on the topic that can help us understand what is occurring, when it is occurring and how to address it. We can also familiarize ourselves with the tools emerging from companies such as Facebook and Google to address this problem, from initiatives to develop artificial intelligence solutions, to academic research, to the use of trusted data sources to refute questionable claims. There are good reasons to be hopeful about our ability to manage the threat disinformation creates, but the fight against the use of technology to mislead, misinform, and drive wedges between people is particularly urgent – especially in an age when mass communication happens at the touch of a button.

Meredith Wilson is the Founder and CEO of Emergent Risk International, LLC. Find out more or subscribe to our newsletter here.

Intelligence Analysis in the Age of Disinformation


Sometimes the US intelligence community gets it wrong, but often, they get it right. And sometimes, they get it really right. We featured the excerpt below in our summary piece on the ODNI threat assessment back in February – highlighting its importance for those who provide analysis and information for decision-makers.

“Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision-making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT (internet of things) devices and AI—in settings such as public utilities and health care—will only exacerbate these potential effects… cyber actors, who post disinformation on commercial websites, might seek to alter online media as a means to influence public discourse and create confusion… (pg. 2)” 

As the perfect storm of the US presidential election is upon us, we face a global problem of how to assess the veracity and credibility of sources – not just for analysts and researchers, but for the general public, who ultimately make the voting decisions that set the course for US government foreign policy towards the rest of the world.

The proliferation of information on the internet has given rise to an avalanche of disinformation. At the same time, disparate groups across political, cultural and ideological divides increasingly disagree on what constitutes credible information. This creates problems for analysts and decision-makers as we grapple with using the information that most accurately reflects the reality facing our businesses or organizations. Below we outline some thoughts and resources for addressing disinformation within your organization, analysis and decision-making:

Agree on a common understanding of credibility.

Like many terms in the geopolitical-security world, "credible information" is often used as if we all share a common understanding and acceptance of what constitutes a credible source. Even within small organizations, opinions on credibility often vary markedly, informed by each individual's life experiences and personal biases. For example, one analyst may assign high credibility only to sources that are politically left-leaning, while another does the same with right-leaning media. (In reality, we hope analysts look at sources across the ideological spectrum, to understand what their customers may be reading too.) Having team discussions about how your team defines credibility – and even utilizing a simple credibility ratings system for information and human sources, like the one used by the US intelligence community – can go a long way toward ensuring good communication, use of high-integrity sources and internal understanding of what is credible and what may not be.
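One way to make that discussion concrete is the Admiralty-style rating matrix used by the US and NATO intelligence communities: source reliability graded A-F and information credibility graded 1-6, combined into ratings like "B2." Here is a minimal sketch in Python – the letter and number scales follow the standard convention, but the "actionable" threshold below is an illustrative house rule, not official doctrine:

```python
# Admiralty-style source evaluation: reliability (A-F) x credibility (1-6).
# The two scales mirror the US/NATO convention; the "actionable" cutoff
# in is_actionable() is an illustrative house rule, not part of the standard.

RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

def rate(reliability: str, credibility: int) -> str:
    """Combine the two axes into a single rating such as 'B2'."""
    if reliability not in RELIABILITY or credibility not in CREDIBILITY:
        raise ValueError("unknown rating")
    return f"{reliability}{credibility}"

def is_actionable(rating: str) -> bool:
    """Illustrative house rule: act on B2 or better without extra vetting."""
    return rating[0] in "AB" and int(rating[1]) <= 2

print(rate("B", 2), is_actionable(rate("B", 2)))   # B2 True
print(rate("F", 6), is_actionable(rate("F", 6)))   # F6 False
```

A shared table like this will not settle every argument about credibility, but it forces analysts to separate how much they trust the source from how plausible the specific claim is – the core of the intelligence community's approach.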

Emphasize the importance of integrity of reporting and information over speed of reporting. 

While there are notable exceptions to this rule, in general we should rarely be focused on being the first to report breaking news to our leadership. There are many free and paid services that do this already – and some do it really well. The role of the intelligence and analysis function is to make sure that the organization has the most organization-specific, relevant, high-integrity information and analysis to ensure high-quality decision-making. Some good resources for understanding the integrity of online sources and journalistic guidelines and standards can be found here, here, here and here. Teams can learn a lot by reviewing these materials, both about how to better assess the integrity of sources and about how to evaluate other organizations and the sources of information they provide.

Develop an understanding of disinformation. 

While Russian propaganda is receiving the most attention at present, it should be noted that disinformation campaigns are as old as espionage itself and have been utilized across the world. What has changed is how disinformation is disseminated and its exceptionally wide and quick propagation across the internet – which often makes it difficult to counter before it has become accepted as true. The EU, and increasingly, the US, have been especially affected by propaganda campaigns since sanctions were levied against Russia in 2014 over Ukraine. The EU has even created a task force responsible for educating the public about disinformation and highlighting disinformation in the press.

Exercise extreme caution in utilizing leaked materials of any kind. 

Leaked documents, such as those released by Wikileaks, Edward Snowden and other actors, can provide insight into how individuals and organizations communicate. Unfortunately, it is also exceptionally difficult to identify whether the information in these documents has been tampered with, is outright false or taken out of context. In our work, the most important thing we can do related to leaked materials is ensure that our personnel and organizations are not in danger as a result of information in purported leaked documents.

Understand the WHY of fake information. 

While much of the above-discussed disinformation is designed to sow confusion and divisive politics, there are many types of disinformation and fake information out there today, and they do not all serve the same purpose. For example, a large body of fake information is driven by the pursuit of web advertising revenue: fake news sites that publish alarmist headlines in an effort to get readers to click on the story. Many of these are so poorly written that their veracity is easily questioned – but not all. There are also fake news sites generally devoted to providing entertainment (also to bring in advertising revenue, but overtly). The best known among these is The Onion, but lesser-known satire sites can get picked up by less savvy readers and quoted as fact. By learning the motives behind fake information, analysts can often more easily distinguish fact from fiction and good sources from bad ones.

Stay on top of the changing information landscape

Finally, in an information environment that is changing by the minute, ensure that your team's remit includes regular reviews of their sourcing choices and an assessment of their personal biases towards information. Keep them focused on utilizing high-quality information from credible government sources, think tanks, academia, and news sites with a record of integrity and accurate reporting. Ensure an understanding of best practices in the use of social media and ground-level sources. And insist on independent source verification of all reported information, including that which comes in from information vendors every day. After all, our analysis is only as good as the credibility of the information it is based upon.
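That last point – independent verification – can even be enforced mechanically: hold a claim as unconfirmed until it is corroborated by reporting from at least two unrelated outlets. A hypothetical sketch in Python, using distinct root domains as a crude proxy for independence (both the two-source threshold and the domain test are simplifying assumptions, not a standard):

```python
from urllib.parse import urlparse

def root_domain(url: str) -> str:
    # Crude independence proxy: treat sources sharing a registrable
    # domain (e.g. two www.reuters.com pages) as a single source.
    host = urlparse(url).netloc.lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def is_corroborated(source_urls: list[str], min_sources: int = 2) -> bool:
    """True once a claim is backed by reporting from enough
    mutually independent outlets (distinct root domains)."""
    return len({root_domain(u) for u in source_urls}) >= min_sources

claim_sources = [
    "https://www.reuters.com/article/x",
    "https://apnews.com/article/y",
    "https://www.reuters.com/article/z",   # same outlet, adds no weight
]
print(is_corroborated(claim_sources))  # True: two independent outlets
```

A shared root domain is an imperfect proxy – syndicated wire copy can appear on dozens of nominally independent domains – but even this crude check catches the common failure mode of counting three links to the same outlet as three sources.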

Sign up for our Newsletter

Sign up for our newsletter to receive our monthly intelligence briefing, infographics, and updates on the latest events. (And we promise: no spam.)


Subscribe!