Hybrid AI Can Reduce False Positives on Your Network
A few years ago, “big data” was one of the biggest buzzwords in tech. However, Gartner dropped the term from its hype cycle in 2015, and attention has since moved on to more granular topics such as machine learning and artificial intelligence (AI), which are currently being harnessed for use cases as diverse as autonomous cars and network security monitoring.
Will Hybrid AI Improve Upon Big Data Security?
One of the key shortcomings in big data, as identified by Slate technologist Will Oremus, was the lack of proper interpretation and contextualization – in other words, specialties of human analysts. This resulted in issues such as the takeover of the typical Facebook news feed by clickbait articles elevated by algorithms that rewarded “likes,” as well as many false positives and negatives in security contexts. Modern applications of machine learning and AI aim to avoid similar problems by supplementing automatically captured data with human knowledge as needed, a practice often labeled hybrid AI.
Moreover, this combination of machine and human input often leads to better results, particularly in the security realm:
- A 2016 MIT study of a hybrid “AI2” system – merging machine learning-driven anomaly detection with continuous human analysis – found that it could detect more attacks and produce fewer false positives than traditional non-hybrid alternatives.
- The power of human insight for security operations management can also be glimpsed in the ongoing modification of systems such as DeepMind to make them more like human brains. Researchers have attempted to give such AI platforms brain-like features including attention spans, to improve pattern recognition and overall flexibility, according to Engadget.
- AI adoption in general is rising. An October 2017 Vanson Bourne survey revealed that 80 percent of enterprises had incorporated some form of AI. Improved “security and risk” was one of the top bottom-line benefits from AI implementation.
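The feedback loop described above — unsupervised detection supplemented by analyst review — can be illustrated with a minimal sketch. Everything here is hypothetical (the `HybridDetector` class, the z-score threshold, the event names); it is not the AI2 system itself, just the general pattern of suppressing analyst-confirmed false positives:

```python
from statistics import mean, stdev

def anomaly_score(value, history):
    """Z-score of a value against historical baseline; higher = more anomalous."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) / sigma if sigma else 0.0

class HybridDetector:
    """Unsupervised anomaly scoring plus an analyst-maintained benign set."""

    def __init__(self, history, threshold=3.0):
        self.history = list(history)
        self.threshold = threshold
        self.benign = set()  # event IDs an analyst has confirmed as false positives

    def review(self, event_id):
        """Analyst marks a flagged event as benign."""
        self.benign.add(event_id)

    def flag(self, event_id, value):
        """Alert only if anomalous AND not already cleared by a human."""
        if event_id in self.benign:
            return False
        return anomaly_score(value, self.history) > self.threshold

# Hypothetical baseline: MB downloaded per hour by a typical user
baseline = [10, 12, 11, 13, 12, 11, 10, 12]
det = HybridDetector(baseline)

print(det.flag("alice-bulk-download", 90))  # True: flagged as anomalous
det.review("alice-bulk-download")           # analyst: harmless, suppress it
print(det.flag("alice-bulk-download", 90))  # False: human feedback applied
```

The design point is that the machine does the high-volume screening while the human contributes the context the machine lacks, which is exactly the division of labor the AI2 study relied on.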
AI has frequently been promoted as a panacea for today’s complex cybersecurity challenges, which run the gamut from classic phishing attacks to more subtle advanced persistent threats. The truth is that a more nuanced solution, such as a Security Operations Center (SOC) incorporating a Security Information and Event Management (SIEM) platform with hybrid AI, is necessary for full risk mitigation.
“People are people,” stated Brian NeSmith in a recent online interview with Information Security Media Group. “We do random things that in the cybersecurity domain look like indicators of compromise, but in reality are just people being people. In the security industry, we talk about false positives and those false positives are typically driven by people just doing things out of their normal behavior. Humans are best at helping machines understand how people work.”
NeSmith cited actions such as downloading a large amount of data in preparation for a conference call as harmless behavior that might get flagged by an automated AI platform. Across the board, traditional AI struggles with such false positives, as well as with security events that are occurring for the first time and therefore do not appear on any existing lists.
Supplementing an automated AI solution with the expertise of a human engineer can streamline the process of sorting false positives and false negatives and also help modify the rules to better handle these situations in the future. False positives/negatives are costly: In 2015, the Ponemon Institute estimated that the average organization spends $1.3 million per year recovering from the bad security intelligence that produces them. Hybrid AI might offer a way out.
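The rule-modification step can be sketched simply. In this hypothetical example (the `Rule` class, field names, and the conference-call exception are all assumptions, not a real SIEM API), an analyst does more than suppress one alert: they attach a reusable exception so the same benign pattern no longer fires in the future:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    condition: object                          # event -> bool; True means "alert"
    exceptions: list = field(default_factory=list)  # analyst-added predicates

    def fires(self, event):
        # Analyst-defined exceptions override the automated condition.
        if any(exc(event) for exc in self.exceptions):
            return False
        return self.condition(event)

# Hypothetical rule: alert on any download over 500 MB
bulk = Rule("bulk-download", lambda e: e["mb"] > 500)

event = {"user": "alice", "mb": 900, "calendar": "conference-call"}
print(bulk.fires(event))  # True: a false positive

# Analyst tunes the rule: large downloads before a scheduled call are expected
bulk.exceptions.append(lambda e: e.get("calendar") == "conference-call")
print(bulk.fires(event))  # False: future identical events pass cleanly
```

Encoding the analyst's judgment as a rule exception, rather than a one-off dismissal, is what turns triage time into a durable reduction in false positives.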
“False positives and negatives are costly.”
Hybrid AI, False Positives and the Current Cybersecurity Skills Gap
Unsupervised machine learning and standard AI are still necessary for combing through what NeSmith called the “sheer volume and variety of data sources” relevant to security screening. At the same time, their raw efficiency is not enough to make them catch-all solutions to an overarching problem in modern cybersecurity: the so-called “skills gap.”
A major shortfall of security professionals is projected for the rest of the decade, with Cybersecurity Ventures estimating 3.5 million unfilled jobs by 2021. The automation offered by machine learning and AI might help ease the burden shouldered by shorthanded security teams, but it’s not sufficient on its own. A 2016 Deloitte assessment raised this issue by noting that human input is particularly useful when evaluating internal threats, which often involve variants on the “normal behavior” mentioned by NeSmith.
Significant challenges lie ahead for security professionals. SOC-as-a-Service, with concierge engineers providing the hybrid AI dimension, can quickly replace your SIEM and upgrade your defenses. Learn more by clicking the banner below.