Purple Team Cat and Mouse

In an old blog post, I spoke about the importance of a Red Team program for mature security organizations. Part of that is having well-defined rules of engagement. Today, I’ll talk in a bit more depth about what that means, and how you can ensure that both Red Teams and Blue Teams concentrate on what really matters: improving the security of the organization, not fighting each other.

Defining rules of engagement is important for a few reasons. First, it helps ensure that testing is done in a way that minimizes impact on production systems, so that you aren’t causing unnecessary outages and burning goodwill with the rest of the organization. It also lays out a clear escalation path for when testing inevitably runs up to, or crosses, the line between allowed and not allowed. While these are certainly important, some of the most important rules cover what each side is allowed to do when scoping out, investigating, or outright surveilling the other side to figure out what they are up to.

Once a Red Team program has been around for a while and both Red and Blue Teams have gotten the hang of what engagements are like, it is not uncommon for goals to drift away from those laid out at the start, which are typically aimed at testing the efficacy of security controls, and toward something a bit more adversarial. Specifically, the Blue Team’s goal becomes catching the Red Team at any cost, and the Red Team’s goal becomes evading, deceiving, or frustrating the Blue Team as much as possible. Sometimes this begins innocently or unintentionally, but a more typical route into this pattern is strict metrics being defined for both sides: “Why aren’t you catching the Red Team 100% of the time?” for Blue Teams, and “Why are you spending all this time/money/technology and still getting caught so often?” for Red Teams.

Once this happens, incentives work at cross-purposes and lead to poor outcomes. Instead of focusing on general security controls and identifying weaknesses, for example, Blue Teams home in on the members of the Red Team and focus only on their activities, sometimes to the point of sabotage: I’ve heard stories of Red Team members being kicked off infrastructure or placed under more restrictive policies in security controls just because of who they are. Similarly, Red Teams can take advantage of their inside knowledge and the fact that they work very closely with the rest of the security team: timing exercises to coincide with tool maintenance windows, deliberately targeting infrastructure they know lacks security controls or has looser policies, and pivoting efficiently through access points and gaps they already know about.

As a result of this shift in focus, exercises lose their value and turn into an ongoing cat and mouse game between the two sides. Hyper-focusing on the interplay between the Red and Blue Teams pulls attention away from the true goal of a Red Team program: ensuring that the security program can handle real-world adversary activity, and identifying genuine gaps that realistic adversaries could exploit. Too often, this means Blue Teams write detection rules that work only against their own Red Team and not against other adversary tools, techniques, and procedures (TTPs), or Red Teams keep going back to the same tried-and-true vulnerabilities because they have pre-existing knowledge of the infrastructure.

A Red Team engagement can never be a true representation of an external adversary, and while there is value in treating the Red Team as a kind of Advanced Persistent Threat (APT), Blue Teams can’t develop tunnel vision when it comes to responding to Red Team tactics. Similarly, Red Teams can’t lean so heavily on their inside knowledge that they craft perfect, impossible-to-detect scenarios. Both sides should ask themselves the following questions:

Blue Team: Is this security control or countermeasure effective against just your Red Team? Is it broad enough to catch adversaries that use slightly different TTPs? (See the sketch after these questions.)

Red Team: Is this test an accurate representation of what an external adversary would likely do once inside the environment, or does it rely too much on inside knowledge?
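To make the Blue Team question concrete, here is a minimal Python sketch contrasting a rule written against the Red Team’s specific tooling with one written against the underlying technique. The event fields, tool names, and process lists are hypothetical illustrations, not a real detection engine or rule language.

```python
# Minimal sketch (hypothetical event fields and tool names) contrasting a
# detection scoped to one Red Team tool with one scoped to the technique itself.

# A process-creation event as a plain dict; the field names are assumptions.
event = {
    "parent_process": "winword.exe",
    "process": "powershell.exe",
    "command_line": "powershell.exe -enc SQBFAFgA...",
}

def detects_only_our_red_team(evt: dict) -> bool:
    """Narrow rule: fires only on the exact tooling the Red Team is known to use."""
    # Keying on a specific implant or launcher string means a real adversary
    # using different tooling for the same technique walks right past it.
    return "redteam_implant.exe" in evt["command_line"]

def detects_the_technique(evt: dict) -> bool:
    """Broader rule: fires on the behavior (an Office app spawning a shell),
    regardless of which tool produced it."""
    office_parents = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
    shells = {"powershell.exe", "cmd.exe", "wscript.exe", "mshta.exe"}
    return (evt["parent_process"].lower() in office_parents
            and evt["process"].lower() in shells)

if __name__ == "__main__":
    # The narrow rule misses this event; the behavioral rule catches it,
    # even though the attacker never touched the Red Team's tooling.
    print("narrow rule fired:", detects_only_our_red_team(event))
    print("behavioral rule fired:", detects_the_technique(event))
```

The behavioral version will inevitably be noisier than one keyed to a single implant name, which is exactly the trade-off discussed next.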

Sometimes, the result of asking these questions is noisier or less effective security controls, or less successful Red Team exercises. And that’s okay! We rarely get to choose our adversaries, so it is sometimes necessary to cast a broader, if less precise, net. In the end, while Red Team simulations can be incredibly useful, they are simulations, and need to be viewed as such.

By pulling back and ensuring that activities are more broadly applicable, both Red and Blue Teams can get away from an adversarial relationship that doesn’t benefit the security program as a whole, and instead focus on improvements that may catch real-world attackers.