The AI Genie Is Out Of The Bottle, And We Can’t Put It Back In


Last month, we saw top government officials meet with leading tech executives, including the Alphabet and Microsoft CEOs, to discuss advancements in AI and Washington's involvement. But as quickly as ChatGPT, Bard, and other well-known generative AI models are advancing, American businesses have to know that malicious actors representing the world's most successful hacking groups and most aggressive nation-states are building their own generative AI replicas, and they won't stop for anything.

There's ample reason for experts to be concerned about the overwhelming speed with which generative AI could transform technology, medicine, education, agriculture, and nearly every other industry, not only in America but around the world. Movies like The Terminator provide plenty of (fictional) precedent for fearing a runaway AI, and that fear feeds more realistic concerns like AI-induced mass layoffs.

But it's exactly because AI has the power to revolutionize society as we know it that America cannot afford a private or government-ordered pause on its development; such a pause would cripple our ability to defend individuals and businesses from our enemies. Because AI development happens so quickly, any delay regulators impose would leave us far behind adversaries who face no such constraint on developing their own AI.

AI Advances Quickly, Government Regulates Slowly 

Regulators aren't used to moving at the speed AI demands, and even if they were, there's no guarantee regulation would change how effectively we can use AI to defend ourselves from adversaries. For example, legislators have spent decades trying to regulate and penalize the recreational drug trade in America, but criminals pushing dangerous, illicit substances simply ignore those rules. Our geopolitical rivals will behave the same way, disregarding any guardrails America attempts to place around AI development.

In the past eight months, hackers have claimed to be developing or investing heavily in artificial intelligence, and researchers have already confirmed that attackers could use OpenAI's tools to aid them in hacking. How effective these methods are today, and how advanced other nations' AI tools are, matters less than the certainty that they are being developed and will be used for malicious purposes. Because these attackers and nations won't adhere to any moratorium we place on AI development in America, our country cannot afford to pause its research, or we risk falling behind our adversaries in multiple ways.

In cybersecurity, we've always described the contest between attackers' exploits and scams and our tools to thwart them as an arms race. But with AI as advanced as GPT-4 in the picture, the arms race has gone nuclear. Malicious actors can use artificial intelligence to find vulnerabilities and entry points, and to generate phishing messages that draw on public company emails, LinkedIn profiles, and organizational charts, rendering them nearly identical to legitimate emails or text messages.

On the other hand, cybersecurity companies looking to bolster their defensive prowess can use AI to identify patterns and anomalies in system access records, to generate test code, or as a natural-language interface that lets analysts gather information quickly without writing a program.
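To make that defensive use case concrete, here is a minimal sketch of what AI-assisted anomaly detection in access records might look like, using an unsupervised model (scikit-learn's IsolationForest). The feature set, field names, and sample values are hypothetical illustrations, not a production detector.

```python
# Minimal sketch: flag anomalous access records with an unsupervised model.
# The features and sample data below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour (0-23), failed_attempts, bytes_downloaded_mb]
access_records = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 9],  # typical workday logins
    [3, 7, 950],  # 3 a.m. login, many failures, unusually large download
])

# Fit the anomaly detector; fit_predict returns -1 for anomalies, 1 for normal.
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(access_records)

for record, label in zip(access_records, labels):
    if label == -1:
        print(f"Flag for analyst review: {record}")
```

In practice, a defender would derive features like these from real authentication and network logs and surface the flagged records to an analyst rather than acting on them automatically.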

What's important to remember, though, is that both sides are developing their arsenals of AI-based tools as fast as possible, and pausing that development would only sideline the good guys.

The Need For Speed 

That isn't to say we should let private companies develop AI as a fully unregulated technology. When genetic engineering became a reality in healthcare, the federal government regulated it within America to enable more effective medicine, while recognizing that other countries and independent adversaries might use it unethically or to cause harm, such as engineering viruses.

I believe we can do the same for AI: create protections and standards for ethical use while grasping that our enemies will not follow those regulations. To do so, our government and technology CEOs need to move swiftly. We have to operate at the pace of AI's current development, or in other words, at the speed of data.

This article originally appeared in VentureBeat

Learn more about the Future of Artificial Intelligence in Cybersecurity from our survey of over 800 cybersecurity decision-makers in North America and the United Kingdom.


Dan Schiappa

Dan Schiappa is Arctic Wolf’s Chief Product Officer (CPO). In this role, Dan is responsible for driving innovation across product, engineering, alliances, and business development teams to help meet demand for security operations through Arctic Wolf’s growing customer base—especially in the enterprise sector. Before joining Arctic Wolf, Dan Schiappa was CPO with Sophos. Previously, Dan served as Senior Vice President and General Manager of the Identity and Data Protection Group at RSA, the Security Division of EMC. He has also held several GM positions at Microsoft Corporation, including Windows security, Microsoft Passport/Live ID, and Mobile Services. Prior to Microsoft, Dan was the CEO of Vingage Corporation.