One of the most challenging elements of securing an organisation from cyber threats is ensuring that its employees aren’t placing themselves or their colleagues at risk. But as Arctic Wolf’s new Human Risk Behaviour Snapshot shows, even security practitioners and IT leaders aren’t always model citizens when it comes to mitigating their own cyber risk.
The report, which surveyed 1,500 IT and cybersecurity leaders alongside end users in other departments, found large gaps in human risk management best practices, spanning password reuse, compliance failures, and education about internal security policies. More than a third (36%) of the IT leaders surveyed have disabled security measures on their systems, while two-thirds of the IT and cybersecurity leaders and end users surveyed reuse passwords at least occasionally.
Password management and security compliance are foundational to a strong security culture, yet these results reveal that a significant portion of leaders and end users are ignoring them. A strong security culture is cultivated from the top down, so it is imperative that organisations have their leaders demonstrate proper security hygiene.
Despite being familiar with security hygiene and strong passwords, IT leaders and end users often prioritise immediate tasks over security best practices, overlooking the risk that even a single person’s actions can pose. For example, our survey revealed that while 80% of IT and cybersecurity leaders are confident their organisation won’t fall for phishing attacks, 64% have clicked on phishing links themselves. This complacency is dangerous, especially when leaders overestimate how comfortable employees are with reporting: 85% of leaders believe employees feel comfortable reporting incidents, but only 77% of end users say they do. That gap highlights the need for a more proactive security culture.
So, what can organisations do to mitigate human risk? The answer is to build a security culture on a foundation of trust. Security awareness programming cannot be treated as a ‘gotcha’ exercise in which employees are penalised for failing a test.
Unwittingly clicking on a malicious link isn’t a moral failing on the part of any employee, but it does mean the user likely needs additional training and/or testing to strengthen their knowledge of common threats. Leaders have a responsibility to make their security environment easy to use and transparent. People want to know whether a link they reported as suspicious was in fact malicious, or, if they fell for a phishing scam, how they can improve their detection skills. That kind of feedback can turn cybersecurity from a nebulous concept into a practical skill that employees can feel good about.

Strong communication between leadership and staff is also critical when new rules, such as AI policies, are being introduced. Clear, consistent communication via email or another channel helps everyone stay current on which tools are approved and which new attack vectors might soon appear in their inboxes.
Read the full report here.