Don’t Throw Away Your Phishing Program …Yet

I’m hearing more and more discussions on how the act of phishing employees actually creates more harm than good. The arguments in favor of ditching your phishing program are compelling.

It can tick off employees and create a rift between them and the security team in a number of ways:

  • Enticement/misleading promises
    We’ve heard of companies using phish templates that promise enticing things like bonuses, free products, etc. The employee gets excited (feels good) just to learn it was a company exercise (feels tricked). Resentment toward the security team ensues.
  • False Positive Results
    These are common and are usually due to other security tools “checking” the links. See Nathan Hunstad’s post on the latest issue we noticed with our phishing program. A false positive shows up as a click on that employee’s account even if they never clicked; in fact, they likely saw and reported the email in good, secure fashion. Any additional training that ensues from this “click” is bound to cause resentment and is unfair.
  • Training overload
    As if our employees don’t already have enough required security trainings, some companies send additional training to “clickers”. This results in training overload, burnout and resentment. Not to mention it’s ineffective.
  • Smack Down
    A new employee joins the company, is excited about the new opportunity and full of good will, and then gets “tricked” by the security team with a phish. All that was accomplished was embarrassment, disappointment or even resentment toward the security team, and potentially toward the new employer. Eek.

These are real concerns to which security teams need to pay close attention. But the answer is not to throw the phishing program out the window, because a GREAT phishing program can avoid these pitfalls while strengthening defenses against phishing threats. Phishing is a long-standing vector for attackers, especially in successful breaches, as noted year over year in the Verizon Data Breach Investigations Report (VDBIR).

Email filters can be effective and greatly reduce the number of malicious attempts that actually make it to your employees. But they don’t catch 100% of the risks and even if 1–3% of the attempts make it through, as you are well aware, it only takes one click for the attacker to get the upper hand. The occurrence risk is low but the impact risk ranges from high to incredibly high, depending on the damage that could ensue. And until that magical day when our filters are guaranteed to be 100% effective in catching all malicious emails that come our way, you can quite easily engage your employees to be your human firewall without damaging relationships and the reputation of your security team.

To do so you’re going to need a plan and the right folks to carry it out. Luckily it’s easier than you think, but you’ll need to make the INTENT of the program clear to all and then follow up with a risk-based approach. If damage has already been done at your organization, it might make sense to take a few months off to give your employees some time to cool off and for you to rewrite your program. It’s doable. At Code42 we’ve brought our click rates from upwards of 25% to less than 3% on average, and our employees actually enjoy the challenge. We didn’t get there overnight; as you’d expect with any good outcome, it took time and patience. Here’s how we did it.

You Need Their Help
It is critical that employees know why you phish them, what it means for you and for them, and how they can be a part of reducing risk for the company. They have a vested interest in the success of the company — that’s where their paycheck comes from and where they spend a ton of their time. They also likely take pride in their work. So most everyone will understand when you tell them that the security team alone cannot protect the company. We need them to help keep the company secure. We won’t be nearly as successful without them. This call for help appeals to most people. If that alone doesn’t do it for everyone, read on.

Your employees must know your intention for the program. It should never, ever, ever be about trying to “catch” anyone. In fact if you are choosing templates because you are sure it will cause people to fall for it, you better be darn sure that it is a template that you see coming from the wild and making it past your filters. If not, what risk are you addressing? Work with the team who sees what types of phish emails are making it past your filters. That is your real risk — that’s what you should mimic. Anything else is a futile exercise that wastes time, achieves little and frustrates many.

If you are seeing phish attempts in the wild that may hit close to the heart of your employees, such as free virus testing or vaccines, consider informing all users about them with a communication that includes a screenshot rather than sending the phish yourself. This is a more ethical approach and will help avoid the emotional roller coaster of good things promised followed by what they’ll perceive as a slap in the face. Believe me, I learned the hard way on this one.

The Goal
Make it clear that it is not your desire to “catch” anyone. In fact, the goal is to catch absolutely no one (achieve the elusive 0% click rate). To that end you are going to give them opportunities to practice, because we aren’t good at anything in life unless we practice. Natural abilities only get us so far.

At Code42 our click rate is so low that I have the luxury of following up with everyone who clicks after each exercise to learn more about what happened. I tell them at onboarding to expect me to reach out, but only for two reasons: 1) To check whether it was a real click or whether technology interfered (see Nathan’s post mentioned above). If they say they didn’t click, I will absolutely believe them. I’ve had colleagues laugh at what appears to be gullible innocence, but the way I see it: what’s the point of being skeptical or cynical when it comes to an important relationship? And what relationships succeed without trust? 2) If they did in fact click, I ask what happened (they usually offer it up first) so I can learn where our employees are failing and use that real-world information for further education for them, but importantly for others (with no names mentioned), about pitfalls we are actually seeing.

What to do about the clickers
We’re human; that’s why phishing works for attackers. So a single click in an exercise should not be seen as a risk to the company, quite the contrary: guess who is least likely to engage with the next phish, perhaps a real one? This group.

Frequent or repeat clickers are the risk you want to focus on. Know who these folks are and make sure you have ruled out false positives. As a result of the work Nathan describes in his blog mentioned above, we reached out to our phishing vendor and learned that we could easily identify those false positives in our dashboard and remove those clicks from employees’ records. If you are a large company you may have to automate this or take a little time with some v-lookups. It is time well spent. No one wants a strike against them that they didn’t cause.
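If you do end up scripting the cleanup, the logic is simple to sketch. Here is a minimal, hypothetical Python example; the field names and scanner signatures are illustrative, not taken from any particular vendor's export format:

```python
# Sketch: filter likely false-positive "clicks" out of a phishing-test export.
# The event shape ('user', 'clicked_ip', 'user_agent') and the scanner
# indicators below are hypothetical examples, not a complete or vendor-
# specific list. Populate them from your own investigation.

KNOWN_SCANNER_AGENTS = ("virustotalcloud", "okhttp/3.12.13")
KNOWN_SCANNER_NETWORKS = ("203.0.113.",)  # e.g. ranges your security tools scan from

def is_false_positive(event: dict) -> bool:
    """Return True if a click event looks machine-generated rather than human."""
    ua = event.get("user_agent", "").lower()
    ip = event.get("clicked_ip", "")
    return any(sig in ua for sig in KNOWN_SCANNER_AGENTS) or ip.startswith(KNOWN_SCANNER_NETWORKS)

def real_clicks(events: list[dict]) -> list[dict]:
    """Keep only the clicks that should count against an employee's record."""
    return [e for e in events if not is_false_positive(e)]
```

With a list of exported click events, `real_clicks` returns only those worth a human follow-up; everything else can be removed from employees' records.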

After you’ve removed false positives, the number of repeat clickers should be low. You need to connect with them to find out what’s going on. They need more direction, meaning a one-on-one discussion with someone on your team. I haven’t run into anyone who repeatedly clicks, but that doesn’t mean they don’t exist. So you should build a process into your program that defines a threshold after which to engage the employee’s manager, then department leader, then HR. If someone continuously clicks after being talked with and further educated, it may indicate they truly don’t care about the company, and you likely have a bigger problem on your hands than a trigger finger.

There is no need to provide names to leadership of one-time or infrequent clickers. Leaders, please don’t ask for these. This can only result in a futile exercise in shaming and will be ineffective. In fact, tell your employees at the start of your program or at onboarding that if they slip up they will not be reported by name. Department heads may get result metrics for their area but with no names attached. Now if the employee slips up over and over again, that’s a different story.

Train your employees on how to recognize a suspicious email BEFORE you start your phishing program. It’s not fair to test them without training them and it will feel like trickery.

Automatically assigning training to folks who click is also likely to tick them off. Besides, if they didn’t learn from your prior training, more of it isn’t likely to be effective. They need one-on-one attention. If you don’t have the staff to do this, consider slowing down your phishing program to give the analyst(s) who run it time to connect with folks. Just make sure their intent is to assist the person, not slap them on the wrist. See the discussion of intent above.

Run a Risk Based Program
We already discussed choosing risk-based templates and how to help truly risk-prone users. Next, adjust frequency to meet your goals. If your numbers are in the acceptable range for your company (the VDBIR puts the average these days at ~3%), then maybe it will be sufficient to send a phish quarterly. Free up those analysts for other work if your numbers are low. Continuing to phish for the same results is not a good use of anyone’s time. Alternatively, take a look to see whether some groups are not meeting that threshold. If that is the case, focus on those groups. Use templates specific to them, and ones you are seeing in the wild.

Communicate, Communicate, Communicate
This is not a check-the-box exercise where you send an email or post an article on the company intranet or messaging app. The goal is to influence. If that is not your forte, study up. There are books on sales tactics and on creating meaningful, memorable messages, like Made to Stick by Chip and Dan Heath. Search YouTube. Just know that if you want to be effective, you need to persuade. A good company-wide communication would be around how the company is improving. Discuss success rates, rather than click rates. It may seem subtle, but it is all about rewarding good behavior.

Get time during onboarding. If the team who does your onboarding says they can’t possibly fit you in, negotiate even if you only get five minutes. If you can’t get that, help them truly understand the impact a few minutes of onboarding time will have on reducing risk to the company. During onboarding talk about the phishing program and your intentions around it. Be positive and sincere and let them know that the skills they will learn will benefit them and their families at home too. You can even sell it as a gamified way of learning. Don’t underestimate your energy; selling this “opportunity” is worth its weight in gold.

It’s always good to show that leadership supports the phishing program as well. If you don’t have their support, work to get it. If you have it, a message from the CEO or CISO or other C-Suite executive can help build good will around the program. Just make sure it conveys only positive intent and is not a finger shaking message.

So there you have it. You have a choice: throw out your phishing program and accept the risks that doing so opens up to attackers, or create a GREAT phishing program that is effective and well received by your employees. There really isn’t a middle ground here. The latter is doable; it’s more nuanced and will take some time, but we’ve achieved it at Code42 and so can you.

The security team at Code42 is passionate about improving security everywhere. You’ll find more great security blogs by our thought leaders at Or feel free to reach out to me on LinkedIn.

Who’s watching your VirusTotal submissions?

Phishing testing is often a part of a company’s security training, and conducting frequent phishing tests is part of our security program here at Code42. As part of those tests, we monitor both click rates and reporting rates, as we consistently message our employees to report any suspicious-looking emails to the security team. So when the most recent phishing test report from our vendor KnowBe4 included three clicks from IPs in China, it resulted in an investigation that ultimately uncovered some interesting consequences of using public security scanners.

All three users said that they hadn’t clicked on the link in the phishing test, and all three did report the email as expected. Although it’s possible for a user to click on a link after they submit it as suspicious, it’s not a typical behavior pattern that we see. Plus, due to the security culture at Code42, most users willingly own up to clicking on links in emails, so we really had no reason to doubt that the users were being honest when they said they didn’t click on anything. That did present a puzzle, however: how could a personalized link from KnowBe4 that exists only in the phishing test email make its way to some IP address in China, and was that indicative of something malicious? Seeing no immediate answers, the SOC team started digging in.

When coming up with possible explanations for how this URL could end up outside of the user’s account, we brainstormed several threat vectors:

  • The obvious one is malware, in this case a type of malware on the endpoint that scrapes URLs in emails or otherwise provides remote access that could be leveraged for extracting data from an email account. That’s not impossible, but the thought of somebody leveraging such access to grab a URL from an email and not, say, deploy ransomware seemed unlikely.
  • A fake OAuth application: these are apps that look legitimate and connect to your online accounts via OAuth, requesting permissions like reading your email in your Office365 or Google accounts. These are becoming more and more prevalent so this was a serious option to consider.
  • Another possibility was a malicious browser extension that could grab data from a Google Mail page. This is also becoming an issue that security teams need to keep in mind as part of their threat model.
  • Finally, we considered that our own security tools or processes may have triggered some unintended side effects that led to this behavior.

It turned out that the last one was the culprit! But before moving onto the explanation, a few words about investigating the other options. An EDR tool would be most useful for investigating the malware scenario. Viewing connected apps in your cloud provider’s admin console is one way to try and find suspicious OAuth apps, as is monitoring your log event stream and capturing all new OAuth permission adds as they happen. As for browser extensions, a tool like OSQuery can be used to enumerate extensions and help identify any that look odd.

But in this case, it was our own security tools that led to the odd clicks. At Code42, we use Palo Alto Cortex XSOAR as our SOAR platform, and one of our key automation playbooks handles emails that users send to our SOC team. We built into that playbook an automated response for people who submit phishing tests, thanking them for the submission and keeping track of who reported the email for those aforementioned metrics. However, sometimes the email is forwarded in such a way that the playbook logic can’t automatically determine that it is part of a phishing exercise. When that happens, it goes through the normal investigation workflow, including sending URLs to services like and VirusTotal. Ultimately, it was determined that the latter service led to the recorded click event, but how did we get there?
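The triage branch of such a playbook can be sketched in a few lines. This is a hypothetical Python illustration, not our actual XSOAR playbook; the header name is an assumption you would replace with whatever marker your phishing vendor stamps on simulated emails:

```python
# Sketch of the triage branch: if a reported email carries the header the
# phishing platform stamps on simulated phish (header name is a hypothetical
# placeholder), thank the reporter and record the metric; otherwise fall
# through to the normal investigation workflow. Forwarding can strip headers,
# which is exactly how a test email ends up in the real workflow.

SIMULATION_HEADER = "X-PHISHTEST"  # assumption: set to your vendor's actual header

def triage_reported_email(headers: dict) -> str:
    """Decide which workflow a user-reported email should enter."""
    normalized = {k.lower() for k in headers}
    if SIMULATION_HEADER.lower() in normalized:
        return "thank_reporter_and_record_metric"
    # Caution: this path may send URLs to public scanners.
    return "investigate"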

First, we started with a standard tactic: checking whether those suspicious IPs showed up anywhere else besides the URL click. When we looked for other traffic from those IPs in our logs, we did find a few events. They were to URLs on webservers that we control and hence log, and they were largely pretty innocuous: HTTP requests to static pages, support articles, and so forth. Digging into one particularly personalized URL, though, we found that not only had the suspicious IP visited it, but so had a number of other IPs, including IPs from Google, DigitalOcean, and Palo Alto Networks. Taking a close look at the user agents for some of those events uncovered that the sequence always appeared to start with a Google IP, one that included “appid: s~virustotalcloud” in the user agent string. Once we saw this, things began to fall into place.

We discovered a pretty consistent pattern: the VirusTotal HTTP request came first, then over a period of 24–36 hours, other IPs would make HTTP requests to the same URL. Some of these URLs were very long and had arbitrary data appended, so the only logical source could have been the original VirusTotal submission. In other words, it looked like organizations were ingesting all VirusTotal URL submissions via API and visiting those URLs themselves to (likely) do their own analysis.
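That correlation is easy to reproduce against your own web logs. Here is a rough Python sketch; the event fields and the exact user-agent marker are illustrative:

```python
# Sketch: group web-log events by URL and flag URLs whose earliest hit
# carries the VirusTotal user-agent marker and which then receive hits
# from other sources. Event shape ('ts', 'url', 'ip', 'ua') is illustrative.

from collections import defaultdict

VT_MARKER = "appid: s~virustotalcloud"  # marker seen in our logs; verify in yours

def vt_seeded_urls(events: list[dict]) -> dict[str, list[str]]:
    """Map each URL first visited via VirusTotal to its follower IPs."""
    by_url = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):  # chronological order
        by_url[e["url"]].append(e)
    flagged = {}
    for url, hits in by_url.items():
        if VT_MARKER in hits[0]["ua"]:  # first hit came from VirusTotal
            followers = [h["ip"] for h in hits[1:] if VT_MARKER not in h["ua"]]
            if followers:
                flagged[url] = followers
    return flagged
```

Run over a day or two of logs, the flagged map shows exactly which submitted URLs were revisited, and by whom, after VirusTotal saw them.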

For some of the source IPs, this explanation made a lot of sense: VirusTotal does have a robust API, including a feed of all submitted URLs. Other security vendors use this data as an input into their own tooling to add additional context. But some of the traffic indicated that unknown non-security actors were doing the same thing.

At least, that was the hypothesis we had put together. The next step was to test it, and so we generated a fake, easily-trackable URL that was on a domain we controlled. We submitted it to VirusTotal and sat back to wait for the results. And sure enough, we saw the same pattern once again:

HTTP requests

The first HTTP requests came from VirusTotal. As before, Palo Alto, DigitalOcean, and AWS showed up. But so did curious networks like “Orange Polska Spolka Akcyjna” and “Vysokie tehnologii Limited Liability Company”. Finally, the network we saw in our phishing exercise, “Guangdong Mobile Communication Co.Ltd.”, appeared as if on schedule. That traffic consistently has a user agent string of “okhttp/3.12.13”, which also matched the phishing reporting dashboard’s “Generic Browser” data point for the browser that registered the phishing link click.
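If you want to run the same experiment yourself, minting a unique, easily-trackable canary URL takes only a couple of lines. A sketch, with a placeholder domain:

```python
# Sketch: mint a unique canary URL on a domain you control, then submit it
# to VirusTotal and watch your web logs. Any later hit on this exact path
# can only have come from someone who saw the submission.
# 'canary.example.com' is a placeholder; use your own domain.

import uuid

def make_canary_url(domain: str = "canary.example.com") -> str:
    token = uuid.uuid4().hex  # unguessable 32-hex-character token
    return f"https://{domain}/track/{token}"
```

Because the token is random and never published anywhere else, every request to it in your logs is attributable to the submission, which is what made the follower pattern so unambiguous.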

In the end, we felt our hypothesis was confirmed and that the clicks were neither user-initiated nor malicious. We also followed up with KnowBe4 and learned that we can remove those non-user-initiated clicks to ensure that our reporting is 100% accurate. But it served as a great reminder that when you use a tool like VirusTotal as part of your investigation, you don’t control who sees what you are submitting, and they may decide to take their own look at what you are sharing. More importantly, when you see strange activity in a phishing exercise, remember to “assume breach” but realize there are other explanations out there too!

Don’t Get Hooked: 3 Practical and Quick Steps to Prevent Being Phished

My wife is a public-school teacher and is also a volunteer at our son’s school. Each year, our son’s school holds a charitable auction that is the largest fundraising event for the school. The weeks leading up to the event are hectic and stressful as everyone finishes last-minute preparations. Recently, she called me during her lunch break in a panic. “I think I was scammed!”, she exclaimed. “I responded to an email from the head of the fundraising committee but then realized it wasn’t from him.” Nervously, she went on, “I just finished up lunch and was getting ready for my class when I received this urgent email from him. It sounded really important, so I responded!”

This is something that happens all too often, even to those of us with a keen eye for spotting a phish. The adversaries have refined their tactics to know just how, and sometimes when, to catch us with our guard down. They anticipate when we may be distracted or multi-tasking, such as lunch time, holidays, after-hours, or just as we head into an afternoon of meetings. That’s why it is important to stay vigilant and focused, even when we are rushing toward an event or deadline. Here are a few tips that are super quick and easy to do before interacting with a potential phish.

1. Check the sender’s email address, not just the display name.

This is how my wife realized she had been scammed, but only after it was too late. The scammer was impersonating someone she knew and attempting to take advantage of that trusted relationship. The scammer’s address was very similar to the committee member’s email address, and it had the exact same display name, which in this case was a nickname, not the proper name of the committee member.

Scammers will change the display name (the sender’s name) in the email, and/or the first part of the email address (before the ‘@’ symbol) to something that looks familiar, or something that we trust at first glance.

In my wife’s case, the difference was the domain of the email address; it was sourced from Gmail rather than the school’s domain. Whether you use Gmail, Hotmail, AOL, or another email service, you should be able to quickly see the sender’s entire email address. In Gmail, one way to do this is to click the three vertical dots on the right and select “Show Original” in the pop-up menu.

Does the domain of the sender email address look correct? Take a closer look. Scammers register domains (the portion after the ‘@’) that resemble known domains with only small changes to them in an attempt to fool us. For example, they may use something like ‘’, ‘’, or ‘’ vs. the real domain of ‘’. The differences are easy to overlook with a quick glance, but noticing them could prevent major headaches for you, your company, and/or your family.
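Both checks (reading the real sender address behind the display name, and comparing its domain to the one you expect) can be automated. A small Python sketch using the standard library; the domains in the examples are hypothetical lookalikes, not real scam domains:

```python
# Sketch: pull the actual address out of a "From" header, then flag a
# same-length domain that differs from the expected one by exactly one
# character (a swapped letter or digit), the classic lookalike trick.
# This is deliberately simple; real detection would also handle added
# words, extra hyphens, and homoglyphs.

from email.utils import parseaddr

def sender_domain(from_header: str) -> str:
    """Return the domain of the real address, ignoring the display name."""
    _display, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def is_lookalike(domain: str, expected: str) -> bool:
    """True for a same-length domain that is one character 'off'."""
    if domain == expected or len(domain) != len(expected):
        return False
    return sum(a != b for a, b in zip(domain, expected)) == 1
```

`parseaddr` is what separates the trustworthy-looking display name from the address that actually matters, which is exactly the distinction the scam relies on you not making.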

2. Use URLscan to quickly validate a link before opening it.

We’ve all heard it before: “Make sure you look at the link before you click it!” The problem is that many emails contain shortened links that obfuscate the true destination. A simple way around this is to right-click on the link and, in the pop-up menu, select “Copy Link Address”, “Copy Link Location”, or similar depending on which browser you use. This writes the URL of the link to the clipboard.

You can then use a free online tool such as URLscan ( to scan the link and give you a summary of the site. URLscan will provide the real or effective URL of the link, and in most cases will also provide a classification of the website that the link goes to, as well as an image preview.

For example, this is a screenshot of a site impersonating a PayPal authentication page:

URL scan result

In the screenshot above, notice the Verdict toward the bottom: Potentially Malicious. This site is likely attempting to steal a victim’s credentials. If a victim enters their email address and password for authentication, the site will store this information and falsely tell the victim that their credentials are incorrect. This allows the adversaries to verify the victim’s email address and password for this site, and they will almost certainly try those credentials against other websites as well.

Note: There is an option of performing a Private scan with URLscan, so that any sensitive information potentially contained in a URL remains private. With the default Public scan, the results of the scan are made publicly available.

While it’s not a catch-all, URLscan is a quick and easy way to check the URL of any link or website to verify that it is legitimate. Does the link take you to where you would expect it to go? Is there an unexpected authentication page? Is the site classified as Suspicious or Malicious? URLscan can help you answer these questions and provide some confidence before clicking any link.
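If you would rather script the check than use the website, urlscan.io also exposes a submission API (a POST to /api/v1/scan/ with an API-Key header and a visibility field, per its public documentation). A sketch that only builds the request; actually submitting requires your own API key and an HTTP client such as `requests`:

```python
# Sketch: construct a urlscan.io submission request. The 'visibility' field
# mirrors the Public/Private choice discussed above; "private" keeps any
# sensitive data in the URL out of public results. This function only
# builds the request pieces, so it is safe to run offline.

import json

URLSCAN_ENDPOINT = "https://urlscan.io/api/v1/scan/"

def build_urlscan_request(url: str, api_key: str, visibility: str = "private") -> dict:
    assert visibility in ("public", "unlisted", "private")
    return {
        "endpoint": URLSCAN_ENDPOINT,
        "headers": {"API-Key": api_key, "Content-Type": "application/json"},
        "body": json.dumps({"url": url, "visibility": visibility}),
    }

# To actually submit (assumes the 'requests' package is installed):
# req = build_urlscan_request("https://suspicious.example/login", "<your key>")
# requests.post(req["endpoint"], headers=req["headers"], data=req["body"])
```

Defaulting to "private" here matches the advice in the note above: only fall back to a public scan when the URL contains nothing sensitive.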

3. Use VirusTotal or Anti-Virus software to scan an attachment.

You should use caution before opening an attachment from an unknown sender or an email you weren’t expecting. There are also times when we receive an unexpected email from someone in our contact list that just seems a bit off. Perhaps it has several typos or contains poor grammar, or maybe the email addresses you by your full legal name instead of a common nickname or simply your first name. Whatever it may be, listen to your senses, and don’t blindly open attachments!

If you have anti-virus software installed on your endpoint, you can scan the file before opening it. Caution: in order for anti-virus software to scan an attachment, it must first be downloaded locally to your computer. This can be done without opening or executing the file. If you are uncertain or uncomfortable with downloading the attachment, a safe and easy alternative is to contact the sender and inquire about the email and attachment out of band, i.e., use alternative means to contact the sender rather than responding to or forwarding a potentially malicious email.

If you feel comfortable, downloading any attachment(s) from a suspicious email can typically be done by hovering over the attachment and selecting “Download”, or by right-clicking on the attachment and selecting “Save As”, etc., depending on your email service and/or browser.

If you don’t have anti-virus software installed, another option is to upload the file to a free online tool such as VirusTotal ( to scan and analyze the file.  VirusTotal leverages many different anti-virus vendors to simultaneously scan the file you upload. While false-positives can be expected with any anti-virus vendor, the use of multiple vendors at once can provide a high level of confidence in the results. Below is an example of what a scan in VirusTotal looks like:

VirusTotal Scan

Generally, you can make a quick decision from just the Detection section of the scan based on the number of Suspicious results. But if you need more data to make an informed decision about your attachment, check the Details, Relations, Behavior, and Community sections on the scan page for in-depth details about the file, such as whether it is signed, its history, and whether it makes any network connections or launches any macros.

VirusTotal is an invaluable tool for searching and analyzing IP addresses, domains/URLs, and file hashes. It provides incredible detail, including community feedback, to help you make a quick decision. One caveat is that anything uploaded to VirusTotal becomes publicly available – there is no option for a private scan.
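One way around that caveat: if the attachment might contain sensitive data, hash it locally and search VirusTotal for the hash instead of uploading the file itself. A Python sketch; the lookup URL follows VirusTotal's public v3 API, and actually querying it requires your own API key:

```python
# Sketch: compute a file's SHA-256 locally, then look the hash up instead
# of uploading the file. A hash reveals nothing about the file's contents,
# so nothing sensitive becomes public. VirusTotal's v3 API serves existing
# reports at /api/v3/files/<sha256> (authenticated with your own API key).

import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large attachments don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def vt_lookup_url(sha256: str) -> str:
    return f"https://www.virustotal.com/api/v3/files/{sha256}"
```

The trade-off: a hash lookup only finds files VirusTotal has already seen, whereas an upload gets a fresh scan. For anything potentially sensitive, try the hash first.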

So far this blog post has focused on email phishing. But I would be remiss if I didn’t mention SMS phishing, or Smishing. According to an article on ( from January of 2021, citing research by Proofpoint, phishing via text messages increased over 300% in 2020!

Clearly, scammers are taking advantage of the fact that we tend to trust text messages AND we’re usually multi-tasking and checking texts at all times of the day and night. In November of 2019, Asurion published an article stating that Americans check their phones an astounding 96 times per day!

Fortunately, the steps I suggested for spotting a phish are similar for spotting a smish. There is a phone number associated with every SMS message; don’t click a link in a text message from an unknown phone number!

Instead, do a quick Google search for the phone number. If the text message claims to be from a business, the phone number from the text message should be associated with that business.

If you have an iPhone, you can hold your finger on the link in the text message until a pop-up menu appears. From there, you can copy the link and use either URLscan or VirusTotal to scan and preview the URL right from your phone, without having to open the link first. Check to see if the link is associated with the business the message claims to come from, whether any authentication is required, and whether the URL is categorized as Suspicious or Malicious. Also, be skeptical of any text message from an unknown number asking for money or gift cards.

With the new work-from-home environment, it’s easy to get distracted amongst all the chaos in our busy lives. But catching a phish or a smish doesn’t have to be difficult or time consuming, and you certainly don’t need to be a savvy infosec person. Pause and take a second glance, trust your gut, use these quick and practical tools when a message looks off, and hopefully they will help prevent you from getting hooked.