Arctic Wolf Networks

Why Artificial Intelligence Does Not Live Up to its Promise in Cybersecurity

It seems the world is moving toward the reality depicted in science fiction movies, where artificial intelligence (AI)-enabled robots are sometimes hard to distinguish from real human beings. Technology pundits warn of a possible world apocalypse that sounds strikingly similar to the one depicted in the Terminator movie series. AI’s application to cybersecurity is particularly pertinent to these warnings, because the hijacking of an AI system is one of the worst-case scenarios that could occur. Humans are guided by their own morals and ethics and are therefore difficult to manipulate; an AI system, by contrast, can be hijacked by compromising its security and injecting malicious code.

AI is an exciting technological development, with tremendous potential to benefit mankind. But in cybersecurity, AI alone is far from living up to its promise.

The best way to implement AI in cybersecurity is to use an approach similar to what the AI field calls a human-in-the-loop (HITL) model. In this model, a seasoned cybersecurity analyst supervises the AI’s learning process to ensure and validate its predictive accuracy. Analysis has shown that this approach can improve threat detection tenfold and reduce false positives fivefold.

HITL is superior to unsupervised AI because one of the biggest problems in cybersecurity data is false positives. If an AI system learned to detect threats from data riddled with false positives, its predictive capability would be useless. An HITL model avoids this by ensuring the AI system is fed only accurate, analyst-validated data, which helps it improve its predictive capability over time.
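To make the idea concrete, here is a minimal sketch of one way such a feedback loop could look in code. Everything in it is hypothetical and illustrative: the event format, the score_event() stub, and the analyst_review() step are assumptions, not part of any specific product or library. The point is simply that only analyst-confirmed alerts are returned as new training labels.

```python
# Minimal human-in-the-loop (HITL) triage sketch -- illustrative only.
# Hypothetical pieces: the event dictionaries, score_event(), and
# analyst_review(); none of these come from a specific product or library.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Alert:
    event_id: str
    score: float                            # model's confidence the event is a threat
    analyst_verdict: Optional[bool] = None  # None until a human reviews it


def score_event(event: dict) -> float:
    """Stand-in for a trained model's threat score between 0.0 and 1.0."""
    return 0.9 if event.get("suspicious_flag") else 0.1


def analyst_review(alert: Alert) -> bool:
    """Stand-in for the human step: an analyst confirms or rejects the alert."""
    # In a real workflow this is a console or ticketing interaction, not code.
    return alert.score > 0.5


def hitl_cycle(events: list, threshold: float = 0.5) -> list:
    """One HITL pass: the model flags events, an analyst validates each alert,
    and only analyst-confirmed labels are returned as new training data."""
    validated_training_data = []
    for event in events:
        score = score_event(event)
        if score < threshold:
            continue                        # not flagged, so no alert is raised
        alert = Alert(event_id=event["id"], score=score)
        alert.analyst_verdict = analyst_review(alert)  # the human in the loop
        # Only the human-validated outcome is fed back, which keeps false
        # positives out of the retraining set.
        validated_training_data.append((event, alert.analyst_verdict))
    return validated_training_data


if __name__ == "__main__":
    sample_events = [
        {"id": "evt-1", "suspicious_flag": True},
        {"id": "evt-2", "suspicious_flag": False},
    ]
    print(hitl_cycle(sample_events))
```

The design choice worth noting is that the human verdict acts as the gate on the retraining set, which is what keeps false positives from contaminating what the model subsequently learns.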

 

“HITL is superior to unsupervised AI because one of the biggest problems in cybersecurity data is false positives.”

Critics may point to recent examples where AI bested humans in games requiring intuition and adaptive intelligence as proof that AI could replace cybersecurity analysts. Two examples often touted as evidence of AI’s potential to achieve human-like adaptive intelligence are Deep Blue and AlphaGo. Deep Blue defeated reigning chess world champion Garry Kasparov in 1997. Then, in May 2017, AlphaGo defeated top-ranked Go player Ke Jie. Both milestones were notable achievements for AI. However, the intelligence required to replace what a cybersecurity analyst does day in and day out is far more complex than what these board games demand. In chess and Go, there are no exceptions to consider and no additional sources of data to weigh; the rules and the board fully define the problem. In cybersecurity, the exceptions are the real threats you need to worry about.

 

Read more about the best way to leverage AI for threat detection here.
