Truth in Disasters, Part Two
Ransomware is not a threat. The threat is the APT.
Ransomware is simply one of the many tools available to the APT.
Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .
Our article about truth in disasters garnered a lot more initial views than we’re used to. I suspect that is due to all the focus on the mainstream applications of the 2020s that arguably contain the most data corruption (social media), and so my blog post may be drawing readers who simply want to continue the ongoing argument those applications create.
But Truth in Disasters was just the beginning of a discourse on using Reliable Information during a disaster, and it focused on “how do we know what we know.” It ended by pointing out that because a well-designed SIEM is not on the network being attacked, it is in a great place to be the “truth repository” during an incident.
Many people rightfully believe the point of a SIEM should be to PREVENT an incident, and that is a worthwhile goal we readily take on. But a SIEM can’t always prevent an incident. It is just one layer in the complex web of controls we weave in hopes of preventing a breach; it is not the silver bullet we (again?) find ourselves hoping for. A change to the network, unaccompanied by proper notice to the SOC, can lead to a false sense of security . . . which inevitably leads to a breach.
In my first article, I suggested five primary ways to know something is true:
- Direct experience or observation.
- Reading it from text.
- Being told or taught it by another person.
- Logical reasoning.
- Intuition.
I ended the article positing that one of the most important reasons to utilize a security operations center separate from your network infrastructure is that, in a breach, you often can’t directly experience what happened (i.e., you’re locked out of your network). You must, in that case, rely on what others have directly experienced. If you can at least go to the SIEM (one managed by somebody else or, at the very least, one sitting on a different network) and get the picture of what happened, you will be in a much better position to respond to the scariest of breaches, as well as (of course) the most common one, the one in our example: ransomware.
But what is the truth of the SIEM? Does it prevent our example: ransomware?
It usually does, and we can change the word “usually” to “almost always” if we understand something about truth in disasters.
In disasters, no one entity has the truth. The truth is a combination of the direct experience of many different entities.
So let’s discuss what is actually happening with a SIEM. At a metaphorical level, you could say that a SIEM establishes a baseline and then investigates changes to and from that baseline. In that regard, it’s like “spot the difference,” the game we’ve all helped our children (or grandchildren) play in Highlights magazine at the dentist’s office.
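For the technically inclined, here is a minimal sketch of that “spot the difference” idea in Python. The event fields and the logon example are made up for illustration; a real SIEM baselines hundreds of signals, not just one.

```python
# A minimal sketch of baseline-then-deviation detection, using made-up event
# data. Here the "baseline" is simply which (user, host) pairs normally log in;
# anything never seen before is a "difference" worth a look.

from collections import Counter

def build_baseline(historical_events):
    """Count how often each (user, host) pair appears in past logons."""
    return Counter((e["user"], e["host"]) for e in historical_events)

def spot_the_difference(baseline, new_events):
    """Return events whose (user, host) pair was never seen in the baseline."""
    return [e for e in new_events if (e["user"], e["host"]) not in baseline]

history = [{"user": "jane", "host": "teller-01"}, {"user": "joe", "host": "ops-02"}]
today   = [{"user": "jane", "host": "teller-01"}, {"user": "jane", "host": "dc-01"}]

baseline = build_baseline(history)
for anomaly in spot_the_difference(baseline, today):
    print("investigate:", anomaly)   # jane on dc-01 is new . . . a difference worth a look
```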
But in reality, the information system is always in a constant state of flux. Impermanence is the only constant.
Thus, instead of spot the difference, a more accurate metaphor would be Jane and Joe watching a beach, looking for rule violations or worse. When we hear a child’s scream, we need to investigate, because a screaming child is a potential IOC (Indicator of Compromise), but it may take a while before Jane can ask Joe to track down the child’s parents, or ask the child directly, about the incident.
We need the input of more than one source in order to understand “the truth” of what is actually happening.
If a shark happens to be lurking in the waters, Jane and Joe may not see it until it swims high enough for its fin to break the surface. The APT can be doing things like writing scripts and storing them with innocuous extensions until they are ready to go. (Limiting access to scripting tools is one of the gaps we see in the Cybersecurity Assessment Tool, by the way.) Or the APT could be trying passwords found on the dark web, to see if the victim is reusing passwords (another gap in the Cybersecurity Assessment Tool). At infotex, our biggest worry is that the APT is looking for “tuning weaknesses” in the SIEM (a third gap in the CAT). Or the fifth and sixth CAT gaps could be the APT’s priorities . . . looking for MFA weaknesses and figuring out where the network is not segmented.
There are thirteen gaps.
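To make the scripting-tool gap a little more concrete, here is a hedged sketch of one way a SOC might hunt for scripts hiding behind innocuous extensions. The markers, extensions, and share path are illustrative assumptions, not a description of any particular product.

```python
# A hedged sketch: flag files whose extension looks harmless but whose first
# bytes smell like a script. Markers and extensions are illustrative only.

from pathlib import Path

SCRIPT_MARKERS = (b"#!/", b"powershell", b"invoke-", b"<script", b"cmd /c")
INNOCUOUS_EXTENSIONS = {".txt", ".log", ".dat", ".tmp", ".bak"}

def looks_like_hidden_script(path: Path) -> bool:
    """True if a harmless-looking file begins with script-like content."""
    if path.suffix.lower() not in INNOCUOUS_EXTENSIONS:
        return False
    try:
        head = path.read_bytes()[:4096].lower()
    except OSError:
        return False
    return any(marker in head for marker in SCRIPT_MARKERS)

# Example sweep of a shared drive (the path is hypothetical):
suspects = [p for p in Path("/shared").rglob("*")
            if p.is_file() and looks_like_hidden_script(p)]
```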
SOCs are always looking for those shark fins, but we’re also investigating that screaming child, dealing with parents who aren’t sure whether they should tell us why the child is screaming, and chasing smaller fins from fish that have no interest in eating our swimmers.
That shark is an advanced persistent threat (APT). It could be lurking in the waters for a long time before we become aware of it. The APT is the scariest of threats because it is persistent, leverages access and lack of communication, has a level of sophistication usually greater than the average IT person in a community bank can stomach, and is targeted. Even if the initial access was not as a result of a targeted campaign, what makes an APT an APT is that it is now going to slow down and learn about the victim, all in an effort to maximize the effectiveness (their word, our word would be impact) of the breach.
Fortunately, a MANAGED SIEM has a team of people who are looking for that shark. Whether or not they find it depends on many ever-changing variables, including communication, but a SOC can usually detect an APT given enough time. The problem is all those fins on all those fish that turn out to be legitimate . . . what we call “false positives.”
99.999% of all alerts generated by a SIEM are false positives. The remaining .001 percent needs to be investigated, because it could be a gigantic, mean, hungry, and highly motivated shark.
The APT usually gets onto a network via social engineering. (The days of hacking through poorly managed firewalls, at least at financial institutions, have long since passed.) Somebody clicks on a link and the shark enters the water. A SIEM may have the ability to see that particular click, but if the link pointed to a site not yet identified by reputation agencies, it may think the click was on a legitimate link. In 2020, the average time to detect a breach was 228 days (Varonis Breach Report, 2021). That is totally unacceptable, even for the Fortune 500 networks that are causing that number to skew upward.
We are proud that our own average number falls well within our rather aggressive SLAs. But how long will that last?
Hopefully a long time, as long as we can understand Truth in Disasters.
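To see why that first click can slip through, here is a simplified sketch of a reputation check. The domain lists and the “unknown means allow” default are assumptions for illustration, not how any particular SIEM is configured.

```python
# A simplified sketch of the reputation gap: a brand-new phishing site is on
# neither list yet, so a naive check lets the click through.

KNOWN_BAD  = {"evil-login.example"}       # domains reputation feeds have already flagged
KNOWN_GOOD = {"yourbank.example"}         # domains explicitly trusted

def verdict(domain: str) -> str:
    if domain in KNOWN_BAD:
        return "block"
    if domain in KNOWN_GOOD:
        return "allow"
    # The gap: a freshly registered look-alike is unknown to both lists.
    return "allow (unknown)"              # a safer design would say "flag for review"

print(verdict("evil-login.example"))       # block
print(verdict("yourbank-secure.example"))  # allow (unknown) . . . the shark enters the water
```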
Once the malware is on the network, the APT has access, no matter how small. It will leverage that access to begin “reconnaissance”: figuring out where it is, what controls are on the network, where it can go from its starting point, and so on. Since most pings, traceroutes, and the like are seen as legitimate traffic, a SIEM may not be able to block actions during this phase of a breach. (Do you see why it helps to better inform the SIEM about where legitimate reconnaissance originates?) The APT may try to install additional tools, and hopefully the SIEM will see that action.
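Here is a hedged sketch of what “better informing the SIEM” could look like: an allowlist of hosts that are supposed to scan, so reconnaissance from anywhere else becomes an alert instead of background noise. The addresses and event fields are made up for illustration.

```python
# A hedged sketch: recon traffic is only "legitimate" when it comes from hosts
# the SOC has been told about (e.g., the authorized vulnerability scanner).

LEGITIMATE_SCANNERS = {"10.0.5.10", "10.0.5.11"}   # hypothetical authorized scanners

def recon_alerts(events):
    """Yield scan-like events that did not come from an authorized scanner."""
    for e in events:
        if e["type"] in {"ping_sweep", "port_scan", "traceroute"} and \
           e["source_ip"] not in LEGITIMATE_SCANNERS:
            yield e

events = [
    {"type": "port_scan", "source_ip": "10.0.5.10"},    # the scanner doing its job
    {"type": "ping_sweep", "source_ip": "10.2.7.44"},   # a workstation should not do this
]
for alert in recon_alerts(events):
    print("possible APT reconnaissance:", alert)
```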
Or, to stay under the radar of the SIEM, the APT may skip installing anything new and instead use tools already on the network. For example, an APT can use the 7-Zip application we make available to everybody to encrypt files and even traffic. A SIEM cannot look inside encrypted traffic, so that helps keep the shark’s fin under water.
Ultimately, though, the APT needs to install some of its own tools. Many of the tools it might install are regulars on the ShadowIT list your SIEM can compile for you: cloud providers, BitTorrent sites, and the like. Just by reviewing the websites hit in a month, we could detect the existence of an APT. But do the people on the SOC know what to look for when they conduct this review?
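As a sketch of that monthly review, here is one way to surface new, never-before-seen destinations in risky categories. The category names and log fields are assumptions; the point is simply that “new and risky” deserves a human look.

```python
# A minimal sketch of the monthly "what websites did we hit?" review: count
# hits to risky categories and surface any domain the SOC has not seen before.

from collections import Counter

RISKY_CATEGORIES = {"file-sharing", "remote-admin", "torrent", "anonymizer"}

def shadow_it_review(proxy_log, previously_seen):
    """Return new risky domains and a hit count for each, for the SOC to review."""
    hits = Counter(r["domain"] for r in proxy_log if r["category"] in RISKY_CATEGORIES)
    return {domain: n for domain, n in hits.items() if domain not in previously_seen}

log = [
    {"domain": "transfer.example", "category": "file-sharing"},
    {"domain": "transfer.example", "category": "file-sharing"},
    {"domain": "news.example",     "category": "news"},
]
print(shadow_it_review(log, previously_seen={"dropbox.example"}))
# {'transfer.example': 2}  . . . never seen before; who is using it, and why?
```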
But all this activity is being captured in a well-designed SIEM. The scary thing about an advanced persistent threat . . . its persistence . . . is also its bane. And ultimately, the APT’s time is a total waste unless it can exfiltrate data and/or install TOOLS like ransomware.
Because your SOC can be hunting for these actions . . . in real time of course, but also as part of a review process. And very often that review process starts during a breach.
Which returns me to the gist of part one of this series . . . truth in disasters . . . knowing that what we know is true. Because what we know, at least in a cybersecurity breach, is always a subset of reality until THE ENTIRE TEAM HAS MET. Way too often we react to breaches based only on what we know, without getting the benefit of asking our coworkers, our peers, our partners, and our entire incident response team for input.

The NIST Cybersecurity Standard suggests the multidisciplinary incident response team communicate with a myriad of partners.
The above diagram, from NIST’s cybersecurity publication on incident response, shows how we arrive at truth in disasters. The direct observations of EVERYBODY are what need to be gathered . . . not only during the review process after an incident, but BEFORE any incident even occurs.
So let me end this post with a question . . . the cusp question . . . the question that each member of the Risk Monitoring Team should be asking:
What legitimate traffic is actually illegitimate?
To find the answer to this question, we need the input of more than one entity. This question is why you need a multidisciplinary team. The insider threat relies on illegitimate traffic appearing legitimate. The phishing attack is another example: people clicking on links is legitimate. Fortunately, SIEMs know where most existing phishing sites are, and they have the ability to look for the BEHAVIOR of a phish. But visibility issues, the fact that we are still acting like we’re guarding a castle with a moat, and the possibility of a targeted, orchestrated attack . . . an APT . . . have us asking that question.
It’s why, after the second subscenario of an incident response test, we put the lone rangers of the team on a cruise.
It’s why the scariest ShadowIT entries are those scripting tools. The APT loves using those tools. We need to make the use of those tools illegitimate.
It’s why the scariest legitimate tools are those tools that give our users the ability to encrypt files. The APT loves encryption. We need to at least recognize (through awareness of the team) that our SIEM cannot see encrypted traffic.
What legitimate traffic is actually illegitimate?
That’s a question you won’t see AI answer. It’s a question that requires a voice and a few sets of ears to process. We need a multidisciplinary team to help us know the answer to this question. We need a process that includes the asking of this question.
This question is why we at infotex still maintain that a SIEM is not an application; it is a process. It is a process organically carried out by three teams acting as one team. It is a process that relies on the direct observation . . . of all three teams.
Original article by Dan Hadaway, CRISC, CISA, CISM, Founder and Managing Partner, infotex
Dan’s New Leaf is a fun blog to inspire thought in the area of IT Governance.