This is a follow-up to my first post, as well as a summary of a G+ discussion...let's continue now that it's public.
David Bianco's "Pyramid of Pain". As a follow-up to my previous post on TTPs, a couple of us (David Bianco, Jack Crook, etc.) took the discussion to G+. Unfortunately, I did not set the conversation to public, so I wanted to continue it here now that it is.
I would also say that we, as responders, often have a pretty good idea of who we are dealing with, attribution-wise. If you see something or someone often enough, you tend to get familiar with the signs that may point in their direction.

It's a great point that +David Bianco made when he talked about long-term attribution and being able to almost predict when activity may occur, e.g., phishing waves, or the times when a group is most likely to be active.
TL;DR: I find attribution to be crucial for campaign tracking, helpful but not mandatory for IR. 
Regarding attribution, does it really matter if you know exactly who (person or organization) is behind an attack if that information is not actionable? By that I mean actionable in the sense that either the organization that detected the attack (or is being attacked), or a law enforcement or government organization to whom the information is passed, can actually put a physical stop to the threat actor. If there can be no arrests or other physical dismantling of a threat actor, then what value is there in attribution?

Certainly, identifying TTPs will help in prevention, detection, and response, and will make the attacker's job more difficult, but it seems that can be done without attribution. Even when there is positive attribution, there is often little value in anything but an abstract sense. For example, many attacks have been positively attributed to China, and even to specific Chinese organizations, but even the political pressure of the U.S. government is unlikely to stop them from continuing.

It seems attribution would be most beneficial in criminal cases where there might be a reasonable expectation that a government would pursue known criminals operating within their borders. When it's a nation state conducting the attacks, it would be unrealistic to think they will stop what they are doing because someone brought it to their attention. Does anyone think the NSA will cease conducting its core missions because of the Snowden leaks? China? Iran?
During an ongoing IR engagement, as a consultant, you're often operating at a pretty low level on the PoP, at least initially. "Who did it?" takes a back seat to scoping.

What happens when what you're using to attribute to a particular group changes? This group uses this RAT, with this config, and this C2 infrastructure...but then "they" stop using RATs altogether and start using your own remote access infrastructure. If you've got an open Terminal Server that your employees use, how do you then determine who's accessing your org...does it even matter?

Does public disclosure (or even semi-private disclosure) lead to changes in TTPs? Do groups drop the use of one RAT for another, perhaps in an attempt to mask who they are? I've heard some say "yes", some say "not always".

Let's say your attribution says that a particular group targets defense and finance orgs.  Is it then the same group if similar techniques are found in health care orgs, or used against a law firm?

At a certain point, don't most bad guys ultimately resort to a pretty set group of actions? Recon the infrastructure using native capabilities (net.exe, ipconfig.exe) and minimal tools, then collect/archive data and exfil using whatever means is available. If a compromised org doesn't allow FTP out, and the options for exfil are limited, how do you attribute the actions to a particular group?
I'm almost never concerned about putting someone behind bars. But with persistent threats, long term campaign tracking is important, and that relies on attribution. I think most people think of attribution on a case-by-case basis, which leads them to downplay its importance. 

At this point, attribution seems to be the topic of discussion.  
With respect to this thread overall, what are the key questions up for discussion?
To begin with I would say:
1. Is attribution truly important from an IR and intel perspective?

2. How can people use the PoP to identify opportunities for detection related to groups of actors?

3. Can people do attribution with limited data (i.e. backdoor, domain/ip)?

I know we've talked about this, but hopefully others will jump in and share their views and opinions.

Anything else?
+jack crook , great list.  I'd suggest swapping #1 and #2, keeping the attribution questions grouped together.

Something else that may be valuable is defining TTPs, and how they differ from indicators.
First-time Google+ user, so bear with me. In my opinion as a responder, attribution is information I use to help scope an incident. Sometimes it's more valuable than others. It may help me answer questions like, "should I expect lateral movement?" and "if so, what might that look like?".
+Bamm Visscher , interesting approach...because there's almost always lateral movement of some kind, usually within a somewhat limited range of options ("net use", at.exe, psexec.exe or similar).
+Harlan Carvey I think it depends on the space and type of actor. It also depends on what you found and how quickly you found it. For example, if you found remnants of an initial compromise by an actor who moved laterally and set up a C2 infrastructure elsewhere long ago, then that second question becomes extremely useful, especially for actors you may not be familiar with.
So when I think of the PoP, I think of how I can cause the most disruption to the adversary with the least amount of pain to my team. IMHO, looking at the PoP by itself when evaluating IOCs/signatures is like only looking at vulnerability exposure when evaluating risk.

Brainstorming other factors that I would consider when evaluating the value of a particular IOC:

1: Effort/Speed to create
2: Effort/Speed to deploy
3: Effort/Speed to analyze
4: Others?

A reputation indicator, like an IP address or domain name, may not have the greatest impact by itself, but if I can quickly create, deploy, and analyze it, then it could have a great impact on the adversary as a whole. Of course, if one or more of those efforts has a significant cost to it (which is often the case with reputation-based indicators), then the value of the IOC lessens accordingly. At that point, I may choose not to leverage the indicator until a means to better create/deploy/analyze it becomes available.

Redundancy/coverage may be another factor to consider. When evaluating a broad indicator, like the use of a custom RAT, it can have many sub-indicators: network protocol, hash of the binary, host remnants, C2 address, etc. I may be able to quickly push out detection of the current known C2 address, so it's more valuable initially. Later, when I acquire the malware or get a sample of the network traffic, its value may lessen as I create detection for identifying the malware on a host or communicating on the wire. Thoughts?
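The weighting described above could be sketched in code. This is a purely illustrative model, not anything proposed in the thread: the level weights, the 1-5 effort scale, and the ratio formula are all my own assumptions, meant only to show how a PoP level might be balanced against create/deploy/analyze costs.

```python
# Hypothetical sketch: weighing an indicator's Pyramid-of-Pain level against
# the defender's effort to create, deploy, and analyze it. All names, weights,
# and the scoring formula are illustrative assumptions, not an established model.
from dataclasses import dataclass

# Rough adversary-pain weight by PoP level (higher = more pain to the adversary).
POP_WEIGHT = {
    "hash": 1, "ip": 2, "domain": 3,
    "artifact": 4, "tool": 5, "ttp": 6,
}

@dataclass
class Indicator:
    name: str
    pop_level: str      # one of POP_WEIGHT's keys
    create_cost: int    # 1 (cheap/fast) .. 5 (expensive/slow)
    deploy_cost: int
    analyze_cost: int

    def value(self) -> float:
        # Pain caused to the adversary divided by total pain to the team.
        total_cost = self.create_cost + self.deploy_cost + self.analyze_cost
        return POP_WEIGHT[self.pop_level] / total_cost

# A cheap-to-use C2 address can outscore a costly TTP analytic, at least
# until the cheaper indicator is burned and its value decays.
c2_ip = Indicator("known C2 address", "ip", 1, 1, 1)
ttp_sig = Indicator("lateral-movement analytic", "ttp", 5, 4, 4)
print(c2_ip.value() > ttp_sig.value())  # True: 2/3 vs 6/13
```

In this toy model, the "retire the IP and move up the pyramid" behavior discussed below would show up as the C2 address's weight dropping once the adversary rotates infrastructure, while the TTP analytic's weight stays put.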
I +1'd David's TL;DR post above but for the sake of discussion I think early attribution causes more harm than good.  It results in analysts spinning their wheels, arguments taking up valuable detection time and potentially miscommunication with the customer/management.  Just follow the IR plan, document findings and wait for the event to conclude before trying to attribute it.
+Bamm Visscher, regarding the scenario about a C2 address being more "valuable" immediately, I'm not sure "valuable" is really the best way to express that.  There are probably several types of "value" so maybe that term isn't nuanced enough.  I agree, though, that you go with the best you have.  If all you have are domains and IPs, that's the best you can do until your investigation turns up new indicators.  At that point, you might retire the IP/domain and use the higher pyramid level detection.

It's probably semantics, but I prefer to think of the "value" of the indicator as constant, as long as it still describes the activity you're trying to find.  I think time degrades the value, as the adversary changes tactics or switches up their infrastructure or something, but probably not at the time scale you're referring to.
+Tony Hudson I suppose there is such a thing as attribution being too early, if it's based on a pool of information that's too small to draw conclusions from.  It's probably a fuzzy line, and also is probably different every time, as you interact with adversaries with which you have varying levels of experience, for example.  But good point to keep in mind.  
I kind of agree with +Tony Hudson 's comments on attribution.  Not all attribution is the same; it can depend on who's doing it, what they're looking at, etc.  There may be circumstances where there is not a one-to-one correlation in attribution, which will lead to misunderstanding and miscommunication.
First and foremost, sincerest respect to all of you professional men and women; I have learned a lot over the years from reading your posts, blogs, books, etc. While I agree with some of the comments here, I will add my $.02, which will hopefully give food for thought/criticism (your pick).

As someone who comes from a "grey hat" background, I have been dabbling with forensics now for about 10 years, with 7 of those heavily focused on malware and incident response. I have covered mobile malware, "state sponsored" malware (a term which I dislike), and financial malware/viruses. I have had the opportunity to reverse engineer and analyze malware for different agencies (and I will leave that as is).
I also teach cyberwarfare-related content in the Beltway area from time to time. Think "highly offensive" (non-tool-based) hacking, along with what I call "rapid incident response" as well as counter-forensics. I can tell you from my experience, many are wasting a lot of time on attribution.

When I teach students (all students need to possess a minimum of an SF86, so I am not teaching malicious people) about incident response, it is done for dual purposes: 1) obviously, to flag who MAY be doing something, but 2) primarily to make our (the class's) attacks more covert by way of disinformation. We PURPOSEFULLY inject all sorts of false indicators into campaigns, because we know DFIR people will become fixated on these tidbits. It points the finger elsewhere and keeps us off the radar. The same TTPs we as DFIR/4n6 people use are also abused.

I once demonstrated a callback honeypot system. The goal was to see the ultimate endpoint of someone opening a file. This is the most important thing I've come across (who saw what last). I believe that too much is falsely flagged as "state sponsored" when it is more along the lines of "Espionage as a Service."

Most capable actors/groups are well aware they are under the microscope, and WELL-STRUCTURED actors/groups use this fact to their advantage. Do not be fooled into ever thinking someone accidentally fudged up; it could have been done purposefully. "Because the mind is a terrible thing" - 345th TPC. Attribution is not difficult, but it is NOT reliable, and it can NEVER be as reliable as some paint it to be.
Very good points, +J. Oquendo.  I'm sure there are groups out there who make false flag a part of their operations, though I have yet to (knowingly) encounter any.  I also don't hear this a lot when talking with those who do commercial threat intel for a living.  I guess that could mean we're all missing it, but I think it's more likely that most of the threat groups we track don't do it (or don't do it very often).  It is something to keep in mind, though.  I remember there was a lot of suspicion about possible false flag indicators inside Stuxnet, back when everyone was trying to figure out who wrote it.  It's certainly something to be aware of.
+ J - excellent point.

To +David Bianco 's point, while it's entirely possible that false flags are employed, I don't know that I've seen them used.  I tend to try to use multiple sources of data to support my findings, and not rely on single data points on which to hang a finding, or a case.
I was part of a group that analyzed Stuxnet pre-Ralph Langner. We found a VERY interesting trail that worked like this (to sum it all up). A developer (Russian) with about 18 years of Siemens S7 experience was contracted and went to Iran (we gathered this through his blog, and through forums on Siemens' support website). The guy who registered at the Siemens support site was listed BY NAME with one of the domain registrars for one of the C&C domains. He had/has a blog, and it turned out that, at the time, he had been doing contract work IN Iran. Post-Iran, he moved to Denmark (the location of one of the C&C domains), and a Stuxnet (football) C&C was registered, timeline-wise, a week after he blogged about finishing in Iran and now being in Denmark. The company he worked for was based out of Russia and had ties to RBN malware groups. Coincidence?

We ended up getting all sorts of information that painted a very different picture from the "guava"/"Israel" theme, and it began looking so absurd that we didn't know what to make of it. Others were fingerpointing at the US/Israel, and I believe we (the US) were posturing: "If they think we're that capable, they may think twice."

I was fixated on the theory of this Russian developer planting some form of backdoor whereby he'd always have a source of revenue ("the turbines are broken, we need you to fix this system"), which makes sense for an RBN developer, yet at the same time makes less sense considering that, on this scale, they could just "out" him on a return.

Long story short, there was enough data to paint a very different picture; the group I was in left that entire theory alone and just took the technical, non-speculative route. False flag TTPs? It is possible. SOMEONE wanted certain things to be correlated; either that, or as the old saying goes: "If you look for something hard enough..."
After re-reading more of this thread, I wanted to clarify that I believe there are varying levels of attribution, and I need different levels of detail depending on what I need to accomplish. Attribution can be as simple as "opportunistic vs. targeted" or as detailed as "a specific unit within a nation state". The former may be an important part of my IR Plan, as it helps define at what priority the IR team and IT staff respond to specific incidents (PIVY installed by an advanced actor may be a higher priority than PIVY installed by an opportunistic actor). The latter, as I stated above, can provide useful information as I am scoping a particular incident. Of course, I have the luxury of working with an intel team who is accountable for attribution, not my IR team. So when I talk about attribution, I am talking about using it, not interrupting my IR processes to get it.
+Bamm Visscher , thanks...I agree, those are important distinctions to make. This is the value of bringing all of these voices to the conversation...there are different perspectives at play here.
I 100% agree with +Bamm Visscher 's last comment. If we take Heartbleed as an example, there are (or there are reports of) script kiddies, cybercrime groups, and nation-state actors all exploiting this vulnerability en masse. The type of threat posed by these different groups of attackers can be very different. I can imagine the flood of people attempting to exploit this vulnerability, most of whom I'm sure were not attempting to steal IP. Being able to attribute those attempts which are motivated by IP theft (or whatever your org deems most threatening) can help focus response efforts. I'm not saying that they all don't need to be investigated, but directing response to the highest priority first is important.
+Bamm Visscher Can you provide an example of how attribution to a specific real-world entity (person or organization) would help scope an incident vs. just recognizing a set of TTPs as belonging to the same entity, but not knowing that entity's true identity? The latter could, I suppose, be considered attribution, as in "an ascribed quality, character, or right", but I'm using it in the more specific sense of "the ascribing of a work (as of literature or art) to a particular author or artist".

I do see the general definition being important to scoping an incident, such as Jack's last example, but I'm still not so sure about the specific definition unless you are in a position to take action against the actor you've identified.
+David Bianco , one of the things I find most difficult about "campaign tracking" is getting enough info/indicators/intel to line things up. Not everyone looks for, or at, the same things, even if indicators are provided. For example, there isn't a great deal of focus on endpoints, and from what I've seen, finding out what actually happened on endpoints can be difficult.
+Matt Gregory That's really a question for someone who does intel as their primary responsibility. I can't think of a time I needed to know the specifics about a certain entity, but I am assuming the intel team would prefer to attribute/track activity at the most detailed level possible. This would allow them to provide the most accurate and detailed responses to my inquiries (current TTPs, target industries, etc).
+Harlan Carvey Regardless of whether or not you plan to pursue attribution, any good IR needs enough detail to reconstruct the complete picture of the attack.  It's the same info, really, just put to a different purpose.
From my perspective/experience as someone jumping back into IR after a several-year hiatus to do more theoretical/long-view threat analysis, campaign tracking has a few values. Most of them deal with prioritization of limited resources, and assessment of how well an approach is working. If I have the ability to know that some incident involves the same adversary as a prior incident, it calls into question whether I really evicted them in the first place, and lets me consider if there are other avenues facilitating their re-entry. Also, I would imagine it is possible, over time, to decide that certain actors "like" you as a target; so if I see data on the wire that some actor group has rebuilt their tools, remobilized resources, or changed their C2s, it could mean that I spend more time pushing what is new and known about them through my security analytics models/work instead of relying strictly on automated monitoring to tell me there is a problem. There is also, unfortunately, a less critical but pragmatic reality: if my management looks at other reports (the flashy ones with cool adversary names that I despise), they start to question the effectiveness of my team. To date, it has not impacted the team, but it lingers in the wings with questions about "what would it take to be that good"... (which does not immediately translate into a fundraising opportunity).