Hard Questions is a series from Facebook that addresses the impact of our products on society. The following posts cover different facets of our investigations into cyber threats and information operations. They were originally published alongside other announcements.
How Much Can Companies Know About Who’s Behind Cyber Threats?
When Do We Take Action Against Cyber Threats?
How Do We Work With Our Partners to Combat Information Operations?
Originally published on July 31, 2018:
How Much Can Companies Know About Who’s Behind Cyber Threats?
By Alex Stamos, Chief Security Officer (Note: As of August 2018, Alex Stamos is an adjunct professor at Stanford University.)
Deciding when and how to publicly link suspicious activity to a specific organization, government, or individual is a challenge that governments and many companies face. Last year, we said the Russia-based Internet Research Agency (IRA) was behind much of the abuse we found around the 2016 election. But today we’re shutting down 32 Pages and accounts engaged in coordinated inauthentic behavior without saying that a specific group or country is responsible.
The process of attributing observed activity to particular threat actors has been much debated by academics and within the intelligence community. All modern intelligence agencies use their own internal guidelines to help them consistently communicate their findings to policymakers and the public. Companies, by comparison, operate with relatively limited information from outside sources — though as we get more involved in detecting and investigating this kind of misuse, we also need clear and consistent ways to confront and communicate these issues head on.
Determining Who Is Behind an Action
The first challenge is figuring out the type of entity to which we are attributing responsibility. This is harder than it might sound. It is standard for both traditional security attacks and information operations to be conducted using commercial infrastructure or computers belonging to innocent people that have been compromised. As a result, simple techniques like blaming the owner of an IP address that was used to register a malicious account usually aren’t sufficient to accurately determine who’s responsible.
Instead, we try to:
- Link suspicious activity to the individual or group with primary operational responsibility for the malicious action. We can then potentially associate multiple campaigns to one set of actors, study how they abuse our systems, and take appropriate countermeasures.
- Tie a specific actor to a real-world sponsor. This could include a political organization, a nation-state, or a non-political entity.

The relationship between malicious actors and real-world sponsors can be difficult to determine in practice, especially for activity sponsored by nation-states. In his seminal paper on the topic, Jason Healey described a spectrum to measure the degree of state responsibility for cyber attacks. This included 10 discrete steps ranging from “state-prohibited,” where a state actively stops attacks originating from its territory, to “state-integrated,” where the attackers serve as fully integrated resources of the national government.
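For reference, Healey's full spectrum can be written down compactly. Below is a minimal sketch of the ten categories as a Python enum; the labels follow his paper, while the one-line glosses are our paraphrase and the code itself is purely illustrative.

```python
from enum import Enum

class StateResponsibility(Enum):
    """Jason Healey's spectrum of state responsibility for cyber attacks,
    ordered from least to most state involvement."""
    STATE_PROHIBITED = 1                 # the state actively stops attacks from its territory
    STATE_PROHIBITED_BUT_INADEQUATE = 2  # the state tries to stop attacks but cannot
    STATE_IGNORED = 3                    # the state knows about the attacks but looks away
    STATE_ENCOURAGED = 4                 # third parties attack; the state encourages them
    STATE_SHAPED = 5                     # the state provides some support or targeting
    STATE_COORDINATED = 6                # the state coordinates third-party attackers
    STATE_ORDERED = 7                    # the state directs third parties to attack
    STATE_ROGUE_CONDUCTED = 8            # out-of-control elements of state forces attack
    STATE_EXECUTED = 9                   # the state attacks using its own forces
    STATE_INTEGRATED = 10                # attackers are fully integrated state resources

# The spectrum is ordered, so assessments can be compared:
assert StateResponsibility.STATE_ORDERED.value > StateResponsibility.STATE_ENCOURAGED.value
```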
This framework is helpful when looking at the two major organized attempts to interfere in the 2016 US election on Facebook that we have found to date. One set of actors used hacking techniques to steal information from email accounts — and then contacted journalists using social media to encourage them to publish stories about the stolen data. Based on our investigation and information provided by the US government, we concluded that this work was the responsibility of groups tied to the GRU, or Russian military intelligence. The recent Special Counsel indictment of GRU officers supports our assessment in this case, and we would consider these actions to be “state-integrated” on Healey’s spectrum.
The other major organized effort did not include traditional cyber attacks but was instead designed to sow division using social media. Based on our own investigations, we assessed with high confidence that this group was part of the IRA. There has been a public debate about the relationship between the IRA and the Russian government — though most seem to conclude this activity is between “state-encouraged” and “state-ordered” using Healey’s definitions.
Four Methods of Attribution
Academics have written about a variety of methods for attributing activity to cyber actors, but for our purposes we simplify these methods into an attribution model with four general categories. While all four are appropriate for government organizations, we do not believe companies should use some of them:
- Political Motivations: In this model, inferred political motivations are measured against the known political goals of a nation-state. Providing public attribution based on political evidence is especially challenging for companies because we don’t have the information needed to make this kind of evaluation. For example, we lack the analytical capabilities, signals intelligence, and human sources available to the intelligence community. As a result, we don’t believe it is appropriate for Facebook to give public comment on the political motivations of nation-states.
- Coordination: Sometimes we will observe signs of coordination between threat actors even when the evidence indicates that they are operating separate technical infrastructure. We have to be careful, though, because coincidences can happen. Collaboration that requires sharing of secrets, such as the possession of stolen data before it has been publicly disclosed, should be treated as much stronger evidence than open interactions in public forums.
- Tools, Techniques and Procedures (TTPs): By looking at how a threat group performs their actions to achieve a goal — including reconnaissance, planning, exploitation, command and control, and exfiltration or distribution of information — it is often possible to infer a linkage between a specific incident and a known threat actor. We believe there is value in providing our assessment of how TTPs compare with previous events, but we don’t plan to rely solely upon TTPs to provide any direct attribution.
- Technical Forensics: By studying the specific indicators of compromise (IOCs) left behind in an incident, it’s sometimes possible to trace activity back to a known or new organized actor. Sometimes these IOCs point to a specific group using shared software or infrastructure, or to a specific geographic location. In situations where we have high confidence in our technical forensics, we provide our best attribution publicly and report the specific information to the appropriate government authorities. This is especially true when these forensics are compatible with independently gathered information from one of our private or public partners. (A simple sketch of this kind of indicator comparison follows below.)
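To make the technical-forensics comparison concrete, here is a minimal sketch of weighted indicator matching. Everything in it is hypothetical: the indicator types, the weights, and the example values illustrate the idea that some kinds of overlap are much stronger evidence than others; this is not a description of Facebook's actual systems.

```python
from typing import Dict, Set

# Hypothetical weights: shared private infrastructure is far stronger
# evidence than publicly documented behavior, which copycats can imitate.
INDICATOR_WEIGHTS = {
    "private_infrastructure": 5.0,  # e.g., a C2 server never publicly reported
    "shared_tooling": 2.0,          # custom malware or scripts
    "public_ttp": 0.5,              # behavior anyone could copy from public reports
}

def attribution_score(incident: Dict[str, Set[str]],
                      actor_profile: Dict[str, Set[str]]) -> float:
    """Weighted overlap between an incident's IOCs and a known actor's profile."""
    score = 0.0
    for ioc_type, weight in INDICATOR_WEIGHTS.items():
        overlap = incident.get(ioc_type, set()) & actor_profile.get(ioc_type, set())
        score += weight * len(overlap)
    return score

# Hypothetical example: overlap only on public TTPs yields a weak score.
incident = {"public_ttp": {"fake_admin_pages", "event_promotion"}}
known_actor = {
    "public_ttp": {"fake_admin_pages", "event_promotion", "ad_purchases"},
    "private_infrastructure": {"203.0.113.7"},
}
print(attribution_score(incident, known_actor))  # 1.0 -> low confidence
```

The weighting makes the same point as the list above: publicly known TTPs are cheap to imitate, so overlap there should move an assessment far less than shared private infrastructure or possession of undisclosed stolen data.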
Applying the Framework to Our New Discovery
Here is how we use this framework to discuss attribution of the accounts and Pages we removed today:
- As mentioned, we will not provide an assessment of the political motivations of the group behind this activity.
- We have found evidence of connections between these accounts and previously identified IRA accounts. For example, in one instance a known IRA account was an administrator on a Facebook Page controlled by this group. These are important details, but on their own insufficient to support a firm determination, as we have also seen examples of authentic political groups interacting with IRA content in the past.
- Some of the tools, techniques and procedures of this actor are consistent with those we saw from the IRA in 2016 and 2017. But we don’t believe this evidence is strong enough to provide public attribution to the IRA. The TTPs of the IRA have been widely discussed and disseminated, including by Facebook, and it’s possible that a separate actor could be copying their techniques.
- Our technical forensics are insufficient to provide high-confidence attribution at this time. We have proactively reported our technical findings to US law enforcement because they have much more information than we do, and may in time be in a position to provide public attribution.

Given all this, we are not going to attribute this activity to any one group right now. This set of actors has better operational security and does more to conceal their identities than the IRA did around the 2016 election, which is to be expected. We were able to tie previous abuse to the IRA partly because of several unique aspects of their behavior that allowed us to connect a large number of seemingly unrelated accounts. After we named the IRA, we expected the organization to evolve. The set of actors we see now might be the IRA with improved capabilities, or it could be a separate group. This is one of the fundamental limitations of attribution: offensive organizations improve their techniques once they have been uncovered, and it is wishful thinking to believe that we will always be able to identify persistent actors with high confidence.
The lack of firm attribution in this case or others does not suggest a lack of action. We have invested heavily in people and technology to detect inauthentic attempts to influence political discourse, and enforcing our policies doesn’t require us to confidently attribute the identity of those who violate them or their potential links to foreign actors. We recognize the importance of sharing our best assessment of attribution with the public, and despite the challenges we intend to continue our work to find and stop this behavior, and to publish our results responsibly.
Originally published on August 21, 2018:
When Do We Take Action Against Cyber Threats?
By Chad Greene, Director of Security
As soon as a cyber threat is discovered, security teams face a difficult decision: when to take action. Do we immediately shut down a campaign in order to prevent harm? Or do we spend time investigating the extent of the attack and who’s behind it so we can prevent them from doing bad things again in the future?
These questions have been debated by security experts for years. And it’s a trade-off that our team at Facebook has grappled with over the past year as we’ve identified different cyber threats — including the coordinated inauthentic behavior we took down today. There are countless things we consider in each case. How active is the threat? How sophisticated are the actors? How much harm is being done? And how will the threat play into world events? Here is a summary of what we have learned over the years – in many cases lessons that we have had to learn the hard way.
Who We Share Information With — and When
Cyber threats don’t happen in a vacuum. Nor should investigations. Really understanding the nature of a threat requires understanding how the actors communicate, how they acquire things like hosting and domain registration, and how the threat manifests across other services. To help gather this information, we often share intelligence with other companies once we have a basic grasp of what’s happening. This also lets them better protect their own users.
Academic researchers are also invaluable partners. This is because third-party experts, both individuals and organizations, often have a unique perspective and additional information that can help us. They also play an important role when it comes to raising the public’s awareness about these problems and how people can better protect themselves.
Law enforcement is crucial, too. There are cases where law enforcement can play a specific role in helping us mitigate a threat that we’ve identified, and in those instances, we’ll reach out to the appropriate agency to share what we know and seek their help. In doing this, our top priority is always to minimize harm to the people that use our services.
When we decide to take down a threat — a decision I’ll go into more below — we also need to consider our options for alerting the people who may have been affected. For example, in cases of targeted malware and hacking attempts that we know are being done by a sophisticated bad actor, like a nation-state, we may put a notice at the top of people’s News Feed to alert them and make sure their account is safe. In the case of an attack that seeks to cause broader societal harm – like using misinformation to manipulate people or create division – where possible we share what we know with the press and third-party researchers so the public is aware of the issue.
When We’d Wait — And When We’d Act
When we identify a campaign, our aim is to learn as much as we can about the extent of the bad actors’ presence on our services, their actions, and what we can do to deter them. When we reach a point where our analysis is turning up little new information, we’ll take down a campaign, knowing that more time is unlikely to bring us more answers. This was the case with the campaign we took down today, which was linked to Russian military intelligence services.
But if we’re still learning as we dig deeper, we’ll likely hold off on taking any action that might tip off our adversary and prompt them to change course. After all, the more we know about a threat, the better we’ll be at stopping the same actors from striking again in the future.
This is particularly true for highly sophisticated actors who are adept at covering their tracks. We want to understand their tactics and respond in a way that keeps them off Facebook for good. Amateur actors, on the other hand, can be taken down quickly with relative confidence that we’d be able to find them if they crop up elsewhere — even with limited information on who they are or how they operate.
Often, though, we have to take action before we’ve exhausted our investigation. For example, we’ll always move quickly against a threat when there’s an immediate risk to safety. So if we determine that someone is trying to compromise another person’s account in order to determine their location — and we suspect the target might be in physical danger — we’d take action immediately, as well as notify the person being targeted and law enforcement when appropriate.
These considerations don’t stop at physical harm. We also look at how a threat might impact upcoming world events. This sometimes means that we speed up taking something down because an event is approaching. This was the case when we removed 32 Pages and accounts last month. In other cases, this may mean delaying action before an upcoming event to reduce the chances that a bad actor will have time to regroup and cause harm.
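Pulling these considerations together, here is a minimal, purely illustrative sketch of the timing trade-off as a decision function. The factors and thresholds are hypothetical; the real judgment weighs far more context than any rule of thumb can capture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatAssessment:
    """Hypothetical factors drawn from the considerations above."""
    immediate_safety_risk: bool        # e.g., a target may be in physical danger
    investigation_yield: float         # how much new information analysis is still producing (0..1)
    actor_sophistication: float        # amateurs are easy to re-find; experts are not (0..1)
    days_until_major_event: Optional[int]  # e.g., an upcoming election, if any

def takedown_decision(t: ThreatAssessment) -> str:
    if t.immediate_safety_risk:
        return "act now; notify the target and law enforcement when appropriate"
    if t.days_until_major_event is not None and t.days_until_major_event < 30:
        # Event timing can cut either way: act early, or time the takedown
        # so the actor has no chance to regroup before the event.
        return "weigh event timing against the remaining investigative value"
    if t.investigation_yield < 0.1:
        return "take down; more time is unlikely to bring more answers"
    if t.actor_sophistication > 0.7:
        return "keep investigating; avoid tipping off a sophisticated adversary"
    return "take down; amateur actors can be re-identified if they resurface"
```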
Our Best Bet
Security experts can never be one hundred percent confident in their timing. But what we can do is closely consider the many moving pieces, weigh the benefits and risks of various scenarios, and make a decision that we think will be best for people on our services and society at large.
Originally published on November 13, 2018:
How Do We Work With Our Partners to Combat Information Operations?
By Nathaniel Gleicher, Head of Cybersecurity Policy
Preventing misuse on Facebook is a priority for our company. In the lead-up to last week’s US midterms, our teams were closely monitoring for any abnormal activity that might have been a sign of people, Pages or Groups misrepresenting themselves in order to mislead others.
But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like governments and security experts. When it comes to coordinated inauthentic behavior — people or organizations working together to create networks of accounts and Pages to mislead others about who they are, or what they’re doing — the more we know, the better we can be at understanding and disrupting the network. This, in turn, makes it harder for these actors to start operating again.
To get this information, we work with governments and law enforcement agencies, cybersecurity researchers, and other technology companies. When appropriate, we also share what we know with these groups to help aid their investigations and crack down on bad actors. After all, these threats are not limited to a specific type of technology or service and have far-reaching repercussions. The better we can be at working together, the better we’ll do by our community.
These partnerships were especially critical in the lead-up to last week’s midterm elections. As our teams monitored for and rooted out new threats, our government partners proved especially valuable because of their broader intelligence work. As bad actors seemingly tried to create a false impression of massive scale and reach, experts from government, industry, civil society, and the media worked together to counter that narrative. As we continue to build our capability to identify and stop information operations, these partnerships will only grow more valuable. This is why today, I want to share more about how we work with each of these groups — and some of the inevitable challenges that come along with this collaboration.
Government & Law Enforcement
With backgrounds in cybersecurity, digital forensics, national security, foreign policy and law enforcement, the experts on our security team investigate suspicious behavior on our services. While we can learn a lot from analyzing our own platforms, law enforcement agencies can draw connections off our platform to a degree that we simply can’t. For instance, our teams can find links between accounts that might be coordinating an information operation based on how they interact on Facebook or other technical signals that link the accounts together — while a law enforcement agency could identify additional links based on information beyond our scope.
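As an illustration of the on-platform side of this work, here is a minimal sketch of grouping accounts by shared technical signals using a union-find structure. The signal names and values are hypothetical stand-ins; real detection systems rely on many more signals than this.

```python
from collections import defaultdict
from typing import Dict, List, Set

def cluster_accounts(signals: Dict[str, Set[str]]) -> List[Set[str]]:
    """signals maps an account ID to the technical signals observed on it
    (e.g., a shared device fingerprint or registration detail). Accounts
    that share any signal, directly or transitively, form one cluster."""
    parent: Dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    # Connect each account to its signals; accounts sharing a signal
    # become transitively connected through the signal node.
    for account, sigs in signals.items():
        for sig in sigs:
            union(account, "sig:" + sig)

    clusters: Dict[str, Set[str]] = defaultdict(set)
    for account in signals:
        clusters[find(account)].add(account)
    return list(clusters.values())

# Hypothetical example: two accounts linked by a shared fingerprint.
print(cluster_accounts({
    "acct_a": {"fingerprint_123"},
    "acct_b": {"fingerprint_123", "registrar_x"},
    "acct_c": {"registrar_y"},
}))  # [{'acct_a', 'acct_b'}, {'acct_c'}]
```

A law enforcement partner can then extend a cluster like this with off-platform links that no single company could see on its own.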
Tips from government and law enforcement partners can therefore help our security teams attribute suspicious behavior to certain groups, make connections between actors, or proactively monitor for activity targeting people on Facebook. And while we can remove accounts and Pages and prohibit bad actors from using Facebook, governments have additional tools to deter or punish abuse. That’s why we’re actively engaged with the Department of Homeland Security, the FBI (including its Foreign Influence Task Force), and Secretaries of State across the US, as well as other government and law enforcement agencies around the world, on our efforts to detect and stop information operations, including those that target elections.
There are also inherent challenges to working with governments around the world. When information is coming to us from a law enforcement agency, we need to vet the source and make sure we’re responding the right way based on the credibility of the threat and information we learn from our investigation. And sharing information with law enforcement agencies introduces additional complexities and challenges. We’re particularly cautious when it comes to protecting people’s privacy and safety — we have a rigorous vetting process to evaluate whether and to what extent to comply with government requests, and we deny requests that we think are too broad or that lack the information we need. This is true for any government request, including those around coordinated inauthentic behavior.
Cybersecurity Researchers
Our partnerships with third-party security experts are also key as we combat threats. We have established contracts with various cybersecurity research firms and academic institutions that help us discover vulnerabilities in our systems to make our defenses stronger. We’ll often turn to these groups when we suspect a threat from a certain actor or region. They’ll then combine machine learning and human expertise to detect patterns from the outside in — and will alert us to signs or behaviors that suggest a real likelihood of a security risk or threat. At that point, we’ll be able to launch an internal investigation to learn more. At other times, these groups identify suspicious activity on their own, without guidance from us.
This past July, for instance, FireEye, one of our cybersecurity vendors, alerted us to a network of Pages and accounts originating from Iran that were engaged in coordinated inauthentic behavior. Based on that tip, we investigated, identified, and removed additional accounts and Pages from Facebook.
We also partner closely with the Atlantic Council’s Digital Forensic Research Lab, which provides us with real-time updates on emerging threats and disinformation campaigns around the world. They assisted in our takedown of 32 Pages and accounts from Facebook and Instagram for coordinated inauthentic behavior in July of this year, as well as our recent takedown of a financially motivated “like” farm in Brazil. In these cases, they’ve helped us increase the number of “eyes and ears” we have working to spot potential abuse so we can identify threats and get ahead of future ones.
It can be challenging to coordinate the operations and timing of these investigations, though. As Chad Greene noted in his earlier post on when to take action against a threat, timing is key to our success, and the more entities involved, the harder it inevitably is to get everyone synced seamlessly. That’s why it’s so important to have open lines of communication with all of these partners so we can ensure we’re all aligned and that we take action on a timeline that best disrupts the adversary.
Tech Industry
Threats are rarely confined to a single platform or tech company. If another company identifies a threat, we want to know about it so we can investigate whether the actor or actors behind it are abusing our services as well. Likewise, if we find indications of coordinated inauthentic behavior that might extend beyond our platform, we want to give others a heads up.
That’s why we’ve worked closely with our fellow tech companies, both bilaterally and as a collective, to deal with the threats we have all seen during and beyond elections. This includes sharing information about the kinds of behavior we’re seeing on our respective platforms and discussing best practices when it comes to preventing our services from being abused.
Collaboration in Action
These partnerships all proved critical in our work ahead of the US midterms. In September, we launched our first elections war room at our Menlo Park headquarters — a place where the right subject-matter experts from across the company gathered to address potential problems in real time and respond quickly. A big part of the war room’s value was centralizing the information we received from our government, cybersecurity, and tech industry partners and taking the appropriate action.
We have an important role in protecting people and public debate on our platform, and we are focused on that mission. Security, though, is bigger than just Facebook. We are — and will continue to be — most effective when we draw on available resources and information. Our partnerships are a key part of the effort and will play a vital role as we prepare for other elections around the world.