Opinion - Taking the User out of Cybersecurity
Not all the time, just enough to let the other controls show how good (or crap) they really are
Infosec
What are the chances that just one of your users will compromise the security of your network? Many would argue it’s 100%. I agree: users have routinely demonstrated themselves to be the weak point in a long line of network defences. After all, it only takes one.
So if you accept this, I argue that organisations should simply side-step people when testing their defences and leave the human element to the HR/people teams. Testing users can detract from the technical measures you’re paying a lot of money for, and ultimately, if we’re testing the human condition, the results won’t change any time soon.
In defence, offence, and the purple world in between, we’re quick to blame the user for clicking a link, downloading a suspicious file, or blindly following instructions from an unknown caller - yet when performing our own cyber defence tests, we’re still surprised when this happens. The test is considered a success; everyone pats each other’s backs and employees get lots more ‘education’. The cycle repeats.
Finished at Reconnaissance
But is the test complete? Using the MITRE Attack* framework as an example, it’s surprisingly common for an organisation to stop once the phishing link has been clicked. “Why?” they ask, and put everything on hold until an “investigation” has been done (little do they know, they are investigating the human condition - good luck with that one!). What about the rest of the framework? Execution? Forget about it! Persistence? Who cares! The technical controls barely get a chance to save the day.
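To make that gap concrete, here’s a minimal sketch of a coverage checklist that keeps an exercise honest past the click. The technique IDs are real ATT&CK entries, but the statuses are hypothetical results from an imagined engagement that stopped at the phish.

```python
# Minimal sketch: a coverage checklist so an exercise doesn't stop at the click.
# Technique IDs are real ATT&CK entries; statuses are hypothetical results.
ENGAGEMENT_COVERAGE = {
    "Initial Access": {"T1566 Phishing": "tested"},  # where most tests end
    "Execution": {"T1059 Command and Scripting Interpreter": "not tested"},
    "Persistence": {"T1547 Boot or Logon Autostart Execution": "not tested"},
}

# Everything still untested is a technical control that never got its turn.
for tactic, techniques in ENGAGEMENT_COVERAGE.items():
    for technique, status in techniques.items():
        if status != "tested":
            print(f"Still untested: {tactic} -> {technique}")
```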
Why did the email get through the filters? Why did it land in the inbox? Focusing on the technical “why?” is important and requires attention. But it can be simulated without a user, which allows all of the remaining technical controls to be tested too. It’s surprising how many of the layered approaches to security rarely get peeled back.
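As a rough illustration of what that user-free simulation could look like, the sketch below fires a crafted test email straight at a dedicated, monitored mailbox so the filtering stack gets exercised without a single employee involved. Every hostname and address here is a hypothetical placeholder.

```python
# Minimal sketch: exercise the mail filtering stack without a real user.
# All hostnames and addresses are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

TEST_MAILBOX = "phish-test@example.org"  # dedicated, monitored test inbox
MAIL_SERVER = "mail.example.org"         # the organisation's MX / relay

msg = EmailMessage()
msg["From"] = "it-support@examp1e.org"   # deliberately look-alike sender
msg["To"] = TEST_MAILBOX
msg["Subject"] = "Action required: password expiry"
msg.set_content(
    "Your password expires today. Reset it here:\n"
    "https://examp1e.org/reset"          # tracked link the test team controls
)

with smtplib.SMTP(MAIL_SERVER, 25) as smtp:
    smtp.send_message(msg)
```

The interesting output isn’t the send itself - it’s what the gateway, the quarantine and the inbox banners do with it afterwards.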
Hello, this is the CEO
Some methods are just plain cruel. There are stories of employees breaking down in tears after a particularly nasty phishing call from their own organisation attempting to extract an MFA code or get an attachment opened. Of course it’s a real threat and it does happen often - but that doesn’t mean you need to do it. For the same reason we don’t break into people’s homes and steal their badges, we shouldn’t be bullying them into breaking security best practices. We should be educating them to prevent it.
Starting the tests when you already have an employee’s MFA code? Now that’s where it gets interesting. The employee doesn’t even have to be real: a dedicated ‘dummy’ user that is managed properly doesn’t have to introduce any more risk than a real one, and it leaves your employees free to get on with their jobs.
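A minimal sketch of that idea, assuming a dedicated dummy account and a tracked link the test team controls (the URL, cookie name and token below are all made up):

```python
# Minimal sketch: simulate the "link clicked" step from an identity and
# machine the test team fully controls. All values are hypothetical.
import requests

PHISH_URL = "https://examp1e.org/reset"          # the test team's tracked link
DUMMY_SESSION = {"session": "dummy-user-token"}  # session for the dummy account

# "Click" the link as the dummy user. From here on, every downstream control
# (proxy, DNS filtering, EDR, IR) gets tested for real, while actual
# employees carry on with their jobs.
resp = requests.get(PHISH_URL, cookies=DUMMY_SESSION, timeout=10)
print(resp.status_code, resp.url)  # blocked, rewritten, or allowed through?
```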
Having technical teams test users is an exercise in futility. Not only does it create a “them vs us” mentality within the organisation, it leads to a game of entrapment where users have no chance of success. This is especially true if you have no defined success criteria, where the pressure against a human can increase unchecked. It’s made worse when attacker and victim are both within the same organisation: privileged knowledge and access make it almost impossible for the user to know the best course of action.
So how can we break this cycle? Here’s a thought.
Split the Technical and the Human
So what’s a better way? Some of the most successful teams I’ve worked with in 2024 have split the technical aspects from the human. Not in all tests, and the two still converge on occasion, but for the most part every engagement considers the user compromised: bad, uneducated, malicious. All the terms we scream when they click our intentionally placed malicious link - one sent from a source we know is allow-listed and crafted to bypass the very controls we manage.
It also means we can get right into the details of our technical controls. Start the engagement at the point where the link has already been clicked, and let HR worry about the why. This allows tests of network security, EDR and Incident Response to run more regularly, and puts the real defence strategy to the test. We can still measure the success of a phishing link sent to an account we control, and remove the randomness of waiting to see whether it will ever be clicked.
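One hedged example of starting at “the link has been clicked”: drop the industry-standard EICAR test string (harmless by design) on a sanctioned host, and watch whether EDR and the IR process react. The drop path below is a hypothetical placeholder.

```python
# Minimal sketch: verify endpoint controls fire once "the link was clicked",
# using the standard EICAR anti-malware test string. Run on sanctioned hosts only.
from pathlib import Path

# The 68-byte EICAR string, split in two so this source file isn't flagged itself.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STAND" + r"ARD-ANTIVIRUS-TEST-FILE!$H+H*"

drop_path = Path.home() / "edr-test" / "eicar.com"  # hypothetical drop location
drop_path.parent.mkdir(parents=True, exist_ok=True)
drop_path.write_text(EICAR)

# Success here looks like failure: the write should be blocked or quarantined,
# an alert should reach the SOC, and Incident Response should pick it up
# without anyone tipping them off.
print(f"Dropped EICAR test file at {drop_path} - now watch the alert queue.")
```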
It’s too easy to go after the user, and a lot of defence/offence testing naturally focuses there. Is it because they are the highest risk? Again, 100%. But if we ignore the user for now, we can uncover plenty of other weak points we’ve yet to find (like “why does that exception even exist!?”).
Keep Measuring Success
Cybersecurity can’t be judged on completed goals alone, but rather on how it functions. We still need to test the user base to determine whether our expensive online video training is getting through, and those tests open up useful insights into an organisation (for example, that one location or office that always fails the test!). But it shouldn’t be the first step every time. My argument is that it should take a back-seat role and let the technical controls - if properly implemented - demonstrate their effectiveness at protecting an organisation from internal and external threats, sanctioned exercises and malicious threat actors alike.
* Yes, it’s actually stylised ‘ATT&CK’, but let’s keep it simple, shall we?