Phishing Simulation Campaigns
Phishing has become one of the most talked-about threats in cyber security, so organizations quite rightly want to protect themselves against it.
Products marketed to stop phishing typically involve training users, and more often than not this is based around the idea of ‘phishing your own users’. These are usually sold as a single package which sends fake phishing emails to your users. If a user clicks, they get training information and you get to know who clicked. Over time, the number of people who click falls, apparently showing an increased ability to spot phishing. It seems really elegant. But if you scratch the surface, you find that it is hiding a number of issues.
An impossible task
First things first. No training package (of any type) can teach users to spot every phish. Spotting phishing emails is hard. Spotting spear phishing emails is even harder. The advice given in many training packages is based on telltale signs, such as poor spelling and grammar, and while these can be a good place to start, they can’t be used to spot all phishing emails. Bad guys can spell (and some nice, genuine people can’t).
More importantly, we can’t expect users to remain vigilant all the time, even if there were concrete signs to look out for. Being aware of the threat from phishing while at your desk (where users are probably most alert to the risk) is hard enough. But phishing can happen anywhere and anytime, and people respond to emails on their phones and tablets, and outside core hours. Clicks happen.
Responding to emails and clicking on links is an integral part of work. Attempting to stop the habit of clicking is not only extremely difficult, but is it even what you want? Asking users to stop and consider every email in depth won’t leave enough hours in the day to do work. This is why the BCSF have taken a much broader approach in our new phishing guidance, where we emphasize the importance of developing multi-layer defenses, giving you multiple opportunities to stop a phishing attack from causing serious damage.
If a phishing simulation package claims to achieve a tiny click rate then, quite frankly, its fake emails were unrealistically easy to spot. Which brings us on to…
The lure of metrics
Let’s be honest with each other. Phishing simulations aren’t just about training. They are also popular because they produce a metric (e.g. ‘Last week 60% of people fell for our phish, this week only 35% fell for it’). It seems really positive and encouraging, since it appears to show that something is being achieved, but unless you’re careful you might just end up wasting time and effort.
Metrics are extremely difficult to come by in the security space, and having a clear, quantitative metric that can show progress in an area you care about can be really seductive. But you need to look beyond the headline figure and find out what it’s really telling you. Is it really giving you an idea of your company’s defenses against the real threats? The risk of living or dying by this single metric is: what happens when you make the test emails more sophisticated, for example to test spear phishing? This will do terrible things to your click rate. You can get any result you want by adjusting the emails you send out, which is hardly an objective measure of your defenses. And if you are on the receiving end of a metric that shows a vast improvement, you should be asking some very probing questions about how the simulation was designed, because it is likely that the emails are just too obvious.
There may still be a role for this metric, but you have to know what question you want to answer, and design your simulation systematically to answer it. For example, you might want to know which departments are more susceptible to phishing, or whether a new approach to training has had an effect. This could help you plan your defenses and offer support where it is most needed. The important thing is to approach it methodically and design it carefully to avoid any unintended consequences, unnecessary disruption, or a meaningless result.
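If you do decide to measure per-department susceptibility, the analysis itself is simple; the hard part is the design (same email difficulty across groups, same time window, enough recipients per department for the rate to mean anything). A minimal sketch of the bookkeeping, assuming your simulation tool can export results as (department, clicked) records (the record format and names here are illustrative, not tied to any particular product):

```python
# Hypothetical sketch: summarize simulation results by department.
# Assumes results are exported as (department, clicked) pairs;
# real tooling will have its own export format.
from collections import defaultdict

def click_rates(results):
    """Return {department: click rate} from (department, clicked) pairs."""
    clicks = defaultdict(int)
    totals = defaultdict(int)
    for dept, clicked in results:
        totals[dept] += 1
        if clicked:
            clicks[dept] += 1
    return {d: clicks[d] / totals[d] for d in totals}

# Illustrative data only: three recipients per department.
results = [
    ("finance", True), ("finance", False), ("finance", True),
    ("hr", False), ("hr", False), ("hr", True),
]
rates = click_rates(results)
```

Note that comparing these rates is only meaningful if every department received emails of comparable difficulty; otherwise the numbers reflect your email-writing choices, not your people.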
The consequences of blaming users
While developing the guidance, we discovered a widespread blame culture relating to phishing. Many organizations believed that if users were blamed or punished for clicking phishing emails, they would somehow be able to spot them next time around (and if they clicked again, the answer was more punishment). Quite simply, this does not work, and it can also cause a great deal of distress and even distrust between users and security. There is absolutely no reason that blame and punishment needs to be part of training, and phishing simulations should never be used as a tool to catch people out.
It is not OK to blame the user for the following reasons:
It doesn’t help.
Research shows that people click for lots of reasons. These could be personality traits (such as being helpful and efficient) or situational (such as being busy and stressed). Or the phisher simply did a really good job of writing the email. Threatening someone with punishment doesn’t change any of these factors.
It undermines the relationship between employees and security
You want employees to trust you and to come to you with their concerns when something doesn’t feel right. They can be a valuable early warning system and are a vital part of any monitoring system. Someone who is afraid for their job will not report mistakes.
You risk sticky legal issues
With more evidence showing that no one can be expected to spot all phishing emails, punishing people for clicking on emails you’ve sent starts to resemble entrapment. Always check with your HR department before undertaking any phishing simulations.
Training should be about building confidence and empowering users to make informed decisions.
Destroying their confidence by asking them to perform impossible tasks - and then calling them ‘the weakest link’ - is counterproductive.
Even if a user clicks repeatedly, disciplinary action is not warranted. A user clicking three times in a row may be a sign of a root cause (such as the requirements of their role, lack of necessary workplace adjustments, a high-stress situation, or confusion about the current training material) or just bad luck. If a particular user appears to be struggling, engage with them to find out if there is another way to help. While there won’t be an easy fix in every case, you may find some cases where a different business process, or a different approach to training, can make a real difference.
So where does this leave training?
There is clearly a need to provide information to your users; they are a vital layer in your defenses. If just one user reports a phish, you get a head start on defending your company against that phishing campaign, and every spotted email is one less opportunity for attackers. But phishing your own users isn’t your only option.
Try being more creative; some companies have had a lot of success with training that gets participants to craft their own phishing email, giving them a much richer view of the influence techniques used. Others are experimenting with gamification, creating a friendly competition among peers rather than an ‘us vs them’ situation with security.
And think about when you actually want to do training. Stopping a user just after a click to explain how they should have spotted your fake phish intuitively seems like a good idea, but if you lock someone’s IT until they complete a lengthy course, you are causing a massive disruption to their working day. And if the course is really tedious, it might feel like punishment, even if that wasn’t your intention.
Whatever you do, remember that no training technique will get your users to recognize every phish. It’s also essential that you don’t spend your entire budget on training when you need to invest in multiple layers to build a solid defense against phishing.