Artificial intelligence (AI) drives many of today’s major innovations in software and hardware, from social media chatbots to highly automated customer relationship management applications. In the cybersecurity world, many vendors are adding AI capabilities to their solutions to provide enhanced detection of sophisticated threats. It’s now what organizations expect.
A 2019 survey by the Capgemini Research Institute found that 69 percent of 850 security executives and leaders felt they couldn’t respond to cyberattacks without AI. Additionally, 73 percent of organizations polled were testing use cases for AI in cybersecurity.
Why do organizations place so much faith in AI? And what is next for AI in cybersecurity as the pace of adoption accelerates?
One trend to look at in the years ahead is the use of hybrid AI vs. traditional, standalone AI. Hybrid AI fuses autonomous machine learning with human activity—combining the best of both worlds.
What Is AI?
The concept of artificial intelligence, which is as old as the history of computers, refers to the idea of machines displaying cognitive abilities. In cybersecurity, the AI subfield of machine learning, in particular, has emerged in recent years as a solution to complex problems that require machine speed.
Machine learning has proven to have many uses, including the ability of machines (or, technically, software) to learn and extract insights from large amounts of data. Basically, algorithms and machine learning processes are applied to data sets for analysis and to make autonomous decisions.
One of the biggest advantages of AI in cybersecurity is the capability to quickly handle massive amounts of data to detect patterns and anomalies that may otherwise go undetected. The human brain is simply not capable of this level of processing power.
AI, however, is not a fully formed, ideal solution for cybersecurity since it still lacks important human qualities, including intelligence and intuition. These qualities are essential for decisions such as identifying false positive alerts and prioritizing threat response.
That’s where hybrid AI comes in. To understand the difference between hybrid and traditional AI, we can compare two AI-driven technologies: IBM Watson and the Tesla Model S.
Watson Versus Tesla
Could the integration of human expertise be the secret to making bots, machines, and applications more intelligent than the ones that rely on today’s limited AI?
IBM Watson is an example of conventional AI. It has become an important exploratory tool in fields such as healthcare, but it remains relatively rudimentary in terms of how it arrives at its insights.
Watson needs to plow through massive amounts of data to learn anything. This puts it in a similar position to a human toddler who can only determine if a stove is hot by actually touching it, whereas an adult could infer it was unsafe to touch by evaluating various cues and evidence. Machine learning needs these advanced human insights to make its own inferences and deductions about datasets, so that its conclusions enjoy greater context.
“What all this AI is lacking is an ontological model where you can describe a structure abstractly,” observed Cris Ross of the Mayo Clinic. “Watson had no idea what a patient was, what a hospital is, what a doctor is, what a drug is, what the effect is on a patient, what’s the relationship between a doctor, drug, a patient and an outcome.”
As Eliza Strickland, senior editor of IEEE Spectrum, put it, “Even today’s best AI struggles to make sense of complex medical information. And encoding a human doctor’s expertise in software turns out to be a very tricky proposition.”
In contrast to Watson, consider Tesla’s automobiles. Tesla isn’t just known for making electric vehicles, but also for its integration of cellular connectivity, powerful software, and features enabled by that infrastructure, including full self-driving technology within all Tesla models.
The autopilot in Tesla relies on deep neural networks—algorithms modeled loosely after the human brain—that are trained on complicated and diverse scenarios to solve problems like perception and control.
The self-driving feature is a form of AI, although one that can be overridden by a human as needed. It’s important to note that the human override feature came in later. Originally, Tesla planned to make the cars completely independent of human interaction.
This Tesla example isn’t fully representative of hybrid AI’s structure or possible use cases. In cybersecurity, hybrid AI is less like the potentially dramatic human intervention possible in a Tesla Model S/X and more like subtle, continuous, and powerful guided analysis, which helps better identify and defend against threats.
AI Use Cases in Cybersecurity
There’s a lot of hype surrounding AI, and it’s not always easy to ascertain which applications are actually practical. Here are some examples of how AI is being used in cybersecurity along with issues to consider:
- An AI-enabled security information and event management (SIEM) platform that receives billions of anomalous alerts every month can use algorithms to learn from data and historical actions. If a specific type of alert was dismissed 100 percent of the time as a false positive, then certain models may learn to dismiss future matching alerts, significantly reducing noise.
- Next-generation antivirus and endpoint detection and response tools use machine learning to detect new threats that are not based on signatures. The AI looks for similarities between known malware criteria and activities in the system’s processes. These models may then alert on future matches with a high degree of confidence.
- More than 17,000 new vulnerabilities were discovered in the first three quarters of 2020 alone. Traditional analysis tools can only detect a limited number of vulnerabilities within a given timespan, but when paired with machine learning algorithms they can do it faster and more efficiently.
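The SIEM example above can be sketched in code. This is a minimal, hypothetical illustration (the class and thresholds are invented for this sketch, not any vendor’s API): track analyst feedback per alert signature and suppress only those signatures that have always been dismissed as false positives, once enough samples have accumulated.

```python
from collections import defaultdict

class AlertTriage:
    """Hypothetical sketch: suppress alert signatures that analysts
    have consistently dismissed as false positives."""

    def __init__(self, min_samples=100):
        # Require a minimum number of observations before trusting the pattern
        self.min_samples = min_samples
        self.history = defaultdict(lambda: {"seen": 0, "dismissed": 0})

    def record_feedback(self, signature, dismissed):
        # Record one analyst decision for this alert signature
        entry = self.history[signature]
        entry["seen"] += 1
        entry["dismissed"] += int(dismissed)

    def should_suppress(self, signature):
        # Suppress only when every recorded occurrence was dismissed
        entry = self.history[signature]
        return (entry["seen"] >= self.min_samples
                and entry["dismissed"] == entry["seen"])

# Simulate three dismissals of the same benign alert type
triage = AlertTriage(min_samples=3)
for _ in range(3):
    triage.record_feedback("low-risk-port-scan", dismissed=True)
print(triage.should_suppress("low-risk-port-scan"))  # True
```

A real SIEM model would weigh far more context than a raw dismissal count, but the feedback loop has the same shape: human decisions become training signal.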
While AI is better than humans at performing actions like those in the examples above, most of these solutions don’t work as set-and-forget models. You still need human interaction to set up, update, and train the models. And humans still need to provide insights in situations involving novel threats with no history, or when entering uncertain areas caused by sophisticated tactics such as spear-phishing.
This is similar to the Tesla autopilot, which may require human intervention when a situation on the road becomes dicey, but—in cybersecurity practices—making distinctions is even more subtle. For example, a suspicious email attachment might cause problems for a standard AI-driven defense: Is it truly dangerous, or is it just harmless spam?
Traditional AI might not excel at picking up on subtle cues, such as awkward phrasings, minor typos and alternative spellings, or threats and references to government agencies in the body of an email containing a phishing attachment—all telltale signs. A human analyst, on the other hand, can discern the slight variances.
Many experts believe that AI is the future of cybersecurity, and new, promising use cases for AI are emerging. But this future doesn’t eliminate humans—if anything, the involvement of people in the process is just as important as ever. In the foreseeable future, skilled security experts will always be necessary to make AI more intelligent.
How Hybrid AI Can Improve Your Cybersecurity
To see how hybrid AI might improve your defenses, let’s consider how traditional AI works. An AI-driven solution gathers information like event logs and network flow data, transforms it into correlated observations, and then produces alerts, which spur actions: for instance, quarantining a server, blocking a connection to a website, or disabling a compromised user account.
These capabilities dramatically reduce exposure to possible data breaches. However, AI-driven machine learning approaches also create some problems.
To detect anomalies, security solutions typically rely on unsupervised machine learning. Because the algorithm looks for patterns in unlabeled data (in other words, the AI doesn’t know precisely what to look for), it tends to generate a lot of false positives as it tries to detect anomalies. The end result is operational inefficiency and alert fatigue.
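The false-positive problem is easy to see with even the simplest unsupervised baseline. In this hypothetical sketch (the data and threshold are invented for illustration), a statistical outlier detector flags hourly login counts that deviate from the mean, and it flags a benign nightly batch job right alongside a genuine credential-stuffing spike, because it has no labels telling it which is which.

```python
import statistics

def detect_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean -- a minimal unsupervised baseline with no labels to guide it."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hourly login counts: 80 is a sanctioned nightly batch job (benign),
# 85 is a credential-stuffing burst (malicious)
logins = [10, 12, 9, 11, 10, 80, 85, 10, 11]
print(detect_anomalies(logins, threshold=1.5))  # [80, 85]
```

Both spikes are statistically anomalous, so both generate alerts; distinguishing them requires context the unlabeled data doesn’t contain, which is exactly the gap a human analyst (or analyst-supplied labels) fills.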
Now let’s add a human analyst into the mix. Let’s say there’s a new malware strain. The security expert would examine the threat’s behavior through the kill chain, discover new parameters to help classify the malware, and then feed this new threat intelligence into the machine learning model. In the future, the AI will automatically detect this new variant.
Of course, this leaves a question: What level of human intervention is optimal within hybrid AI? As you might expect, both autonomous (machine-driven) and nonautonomous (human-directed) processes have their specific use cases. The benefits are as follows:
Autonomous machine learning:
- Efficiently screens out noisy, benign events on the network.
- Automatically blocks known threats (e.g., connections from untrusted locations or from Tor nodes).
- Detects anomalous behavior in relation to normal patterns.
Nonautonomous human activity:
- Avoids frequent blocking of legitimate traffic and other false positives.
- Helps catch one-off attempts at infiltrating the network, which helps keep false negatives to a minimum.
- Can provide innovative solutions, such as safely detonating a potentially new strain of malware in a sandbox.
Basically, human intuition, expertise, and experience help fill in the evaluative gaps that machine learning cannot and determine which activities cross the line and which ones do not.
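The division of labor in the two lists above amounts to a routing decision. This is a hypothetical sketch (the function, indicator set, and score cutoffs are invented for illustration): known threats are blocked autonomously, obvious noise is dismissed autonomously, and everything ambiguous lands in a human analyst’s queue.

```python
# Hypothetical blocklist of known-bad indicators (e.g., Tor exit nodes)
KNOWN_BAD = {"tor-exit-node", "blocklisted-ip"}

def route_event(event):
    """Route each event to autonomous handling or human review.
    Known threats and clear noise are handled by the machine;
    ambiguous activity is escalated to an analyst."""
    if event["indicator"] in KNOWN_BAD:
        return "auto-block"        # autonomous: known threat
    if event["anomaly_score"] < 0.2:
        return "auto-dismiss"      # autonomous: benign background noise
    return "analyst-queue"         # nonautonomous: needs human judgment

print(route_event({"indicator": "tor-exit-node", "anomaly_score": 0.9}))  # auto-block
print(route_event({"indicator": "new-host", "anomaly_score": 0.05}))      # auto-dismiss
print(route_event({"indicator": "new-host", "anomaly_score": 0.7}))       # analyst-queue
```

The design choice is that the machine only acts alone where the decision is unambiguous in either direction; the gray zone in the middle is reserved for human intuition.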
Human-supervised AI offers a much more accurate model than an unsupervised alternative in security situations. The biggest difference lies in navigating complex threats, such as polymorphic malware that may appear differently depending on the context.
A 2016 study at MIT, for example, evaluated the use of a hybrid AI model to predict cyberattacks and found it accurate 85 percent of the time. The MIT prototype system used AI to comb through billions of pieces of log-line data, cluster it into meaningful patterns using unsupervised learning, and detect suspicious activity. Human analysts then looked at the suspicious activities, confirmed which events were, in fact, attacks, and fed that threat intelligence back into the next data set.
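The loop the MIT prototype describes can be sketched abstractly. This is a simplified, hypothetical illustration (the functions and cluster names are invented, not the MIT system’s code): the machine flags suspicious clusters, a simulated analyst confirms which are real attacks and clears the rest, and the cleared clusters stop generating alerts in the next round.

```python
def hybrid_detection_round(events, known_benign, flag_fn):
    """One round of a hypothetical hybrid loop: the machine flags
    suspicious events, minus clusters an analyst has already cleared."""
    return [e for e in events
            if flag_fn(e) and e["cluster"] not in known_benign]

def analyst_review(flagged, truly_malicious_clusters):
    """Simulated analyst feedback: confirm attacks, clear the rest."""
    confirmed, cleared = [], set()
    for e in flagged:
        if e["cluster"] in truly_malicious_clusters:
            confirmed.append(e)
        else:
            cleared.add(e["cluster"])
    return confirmed, cleared

# Round 1: the model flags both a real attack and a benign internal scanner
events = [
    {"cluster": "port-scan", "score": 0.9},
    {"cluster": "vuln-scanner", "score": 0.8},   # sanctioned internal scans
    {"cluster": "normal-web", "score": 0.1},
]
flag_fn = lambda e: e["score"] > 0.5
known_benign = set()
flagged = hybrid_detection_round(events, known_benign, flag_fn)
confirmed, cleared = analyst_review(flagged, {"port-scan"})
known_benign |= cleared  # feed analyst decisions back into the next round

# Round 2: the cleared scanner no longer generates an alert
flagged2 = hybrid_detection_round(events, known_benign, flag_fn)
print([e["cluster"] for e in flagged2])  # ['port-scan']
```

Each pass through the loop shrinks the false-positive surface, which is how the study’s accuracy improved as analyst feedback accumulated.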
Hybrid AI Is a Vital Part of Your Security Team
A careful balance between machine learning and human intelligence helps eliminate noise and accelerate threat detection and response. For example, security experts can create custom policies that enable AI to filter out events that don’t pose high security risks.
With AI automatically taking care of such rules-based scenarios, human analysts can focus on suspicious activities that require human intuition and intelligence.
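One way to picture such custom policies is as analyst-authored rules applied ahead of the detection models. This is a minimal, hypothetical sketch (the rule list and event fields are invented for illustration): events matching any rule are screened out before they can generate alerts.

```python
# Hypothetical analyst-authored policies: each rule returns True for
# events that are known to be low risk and safe to filter out.
POLICIES = [
    lambda e: e["source"] == "internal-scanner",  # sanctioned scans
    lambda e: e["severity"] < 3,                  # informational noise
]

def apply_policies(events):
    """Drop any event that matches at least one suppression policy."""
    return [e for e in events if not any(rule(e) for rule in POLICIES)]

events = [
    {"source": "internal-scanner", "severity": 5},
    {"source": "unknown-host", "severity": 2},
    {"source": "unknown-host", "severity": 8},
]
print(apply_policies(events))  # only the severity-8 unknown-host event remains
```

Because the rules are written by humans but executed by the machine, this is hybrid AI in miniature: expert judgment encoded once, then applied automatically at scale.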
And now that hybrid cloud environments have become more common, hybrid AI enables security operations to have consistent visibility into threats across both on-premises infrastructure and public clouds.
A significant challenge of hybrid AI, however, is that many organizations don’t have enough security talent or resources to provide the continuous 24/7 monitoring this model requires. That’s why many are integrating hybrid AI into their security operations by taking advantage of scalable cloud-based infrastructure, outsourced security experts, and reusable playbooks.
An outsourced model eliminates the need for expensive hardware upgrades, SIEMs, and additional expertise on staff, while providing a reliable platform and highly trained experts who augment AI-driven capabilities with human expertise.
Arctic Wolf Provides a Solution
Arctic Wolf enables resource-constrained organizations to benefit from the power of hybrid AI. The Arctic Wolf security operations platform processes more than 65 billion events every day and automatically detects advanced threats with machine learning.
The Arctic Wolf® Platform combines AI with human expertise to improve the results of AI and machine learning, so that we can provide you with additional insights to continuously improve your security posture.