With the emergence of artificial intelligence (AI), there has been a flurry of new terms to describe a growing variety of new problems. Some of those problems have existed for decades but are now harder to manage because of the versatility of AI-based tools and applications. One of those ongoing challenges is shadow IT, which has now spawned a new class of problems known as shadow AI.
What is Shadow IT?
Shadow IT is the use of any apps, devices, services, technologies, solutions, or infrastructure without the knowledge, approval, or support of the IT department. The term emerged in the 1980s and 1990s, when personal computing was on the rise and employees began using their personal devices for work-related tasks. Now, with the growth of cloud-based applications and software-as-a-service (SaaS), the problem is even more prevalent and dangerous. Every unauthorised use of an application increases the risk of a cyber incident, and, according to Microsoft, 80% of employees use non-sanctioned apps that no one has reviewed.
What is Shadow AI?
A new term has emerged for a specific type of shadow IT: the unapproved use of AI. This includes using generative AI services such as ChatGPT, Bard, Gemini, and Claude through a personal, unapproved account for business purposes. It also includes building local models and using them in workplace workflows. Shadow AI could involve loading customer data into a machine learning (ML) model, or downloading an existing model from an online source, like HuggingFace or GitHub, and then using its output to guide decisions or processes. Using AI without complying with an organisation’s security policies increases the risk of data leakage, infiltration, and even production code issues caused by copy-paste behaviour.
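To make the pattern concrete, here is a minimal, hypothetical sketch of that kind of shadow AI use: an employee downloads a public model from HuggingFace and runs customer feedback through it, entirely outside IT's visibility. It assumes the open-source `transformers` library is installed; the model name and data are placeholders.

```python
# Hypothetical shadow AI in practice: a public model pulled from HuggingFace
# and run over customer data, with no IT review. Assumes `pip install transformers`.
from transformers import pipeline

# First use downloads unvetted model weights from the public HuggingFace Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example model
)

# Placeholder customer feedback; in a real incident this could be confidential
# customer or employee data leaving approved systems.
customer_feedback = [
    "The onboarding process took far too long.",
    "Support resolved my issue quickly, thank you!",
]

for text, result in zip(customer_feedback, classifier(customer_feedback)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```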
AI is, however, becoming more popular in the workplace: according to Cyberhaven, usage increased 485% between March 2023 and March 2024, and the amount of sensitive data workers put into AI tools rose 156%. The same study of workplace use found that 73.8% of ChatGPT accounts, 94.4% of Gemini accounts, and 95.9% of Bard accounts are personal ones that lack the security and privacy controls of corporate accounts from the same service. It also found that 12.9% of the sensitive data sent to these services is source code.
Even more alarming, 3.9% of this sensitive data comes from HR and includes confidential employee documents and information. Sharing this information in a non-secure way risks losing or exposing proprietary knowledge and even personally identifiable information (PII) belonging to employees and customers. This growing use of AI, and the growing volume of sensitive data shared with it, exposes companies to greater cyber risk.
AI-as-a-Service (AIaaS)
Using plugins, APIs, and services without the safeguards of corporate accounts or your own organisation’s cybersecurity infrastructure risks exposing the information passed to these gen AI tools, giving threat actors an opportunity to intercept it. Examples include (see the sketch after this list):
- Pasting source code into a generative AI tool to help solve a bug
- Copying a conversation between you and a customer to help summarise the engagement
- Passing an Excel document of employee-related data to generate a plot
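As a hedged illustration of the first example above, the sketch below sends proprietary source code to a generative AI API from a personal account, outside any corporate data-handling controls. It assumes the `openai` Python SDK (v1 or later); the API key, model name, and code snippet are placeholders, and this is shown as a risk pattern, not a recommended workflow.

```python
# Illustrative only: proprietary code sent to a third-party gen AI API from a
# personal account. Key, model name, and snippet are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-personal-account-key")  # personal, unapproved account

proprietary_snippet = """
def calculate_customer_discount(customer_id, order_total):
    # internal pricing logic -- confidential
    ...
"""

# Once this request is sent, the code has left the organisation's security
# boundary and is governed only by the third party's retention policies.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "user", "content": f"Why might this function fail?\n{proprietary_snippet}"},
    ],
)
print(response.choices[0].message.content)
```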
Hackers are aware of these risks and are already taking advantage of them. Recently, hackers stole ChatGPT credentials from 225,000 devices using infostealer malware. And, as with many applications, hackers can also exploit vulnerabilities in ChatGPT services to exfiltrate information.
Local AI and Traditional Cyber Means
When models are used locally, there are additional risks of infiltration. To help address this, MITRE has built ATLAS, a matrix of adversarial attacks against AI modelled on the MITRE ATT&CK framework.
For example, researchers at Arctic Wolf demonstrated how easy it is to insert a malicious payload into a downloadable model file in a way that is undetectable by security vendors today. These file types are commonly used to share open-source foundation models in popular repositories like HuggingFace, further demonstrating the risk posed by employees’ use of shadow AI. Other cybersecurity researchers have found that malware can be hidden in deep learning models without significant impact on the model’s performance: the malware is broken into pieces and distributed across the model’s neurons, then reassembled when the model is built and run, where it remains undetectable to conventional anti-malware tools.
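As a generic, widely documented illustration (not the specific technique the researchers above demonstrated), the sketch below shows why pickle-serialised model files are so dangerous: any object whose `__reduce__` method returns a callable will have that callable executed the moment the file is loaded.

```python
# Generic demonstration of the pickle risk behind many shared model files:
# simply loading an untrusted .pkl/.pt file can execute attacker-controlled code.
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Benign stand-in for an attacker's payload; a real attack would do far
        # worse than print a message (e.g. fetch and run a second-stage binary).
        return (print, ("arbitrary code executed during model load",))

# The "model file" an unsuspecting user might download.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# The victim's side: loading the file is enough to trigger execution.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # prints the message, i.e. runs the embedded code
```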
Researchers have also shown how models can be corrupted during training, for example through poisoned datasets accessed online or through hardware-based fault attacks during model training. Many of the tools and software packages used to conduct AI research and develop local models can also expose users to hackers. For example, PyTorch, a popular deep-learning framework, had its PyTorch-nightly dependency chain compromised in 2022, when an installation package in that chain contained a malicious binary.
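One partial mitigation for these supply-chain risks is to verify every downloaded artefact, whether a package, dataset, or model file, against a checksum published by its maintainers before using it. The sketch below shows that check in minimal form; the file name and expected digest are placeholders.

```python
# Minimal checksum verification for a downloaded artefact (package, dataset, or
# model file). The file name and expected digest below are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

artefact = Path("downloaded_package.whl")            # hypothetical download
expected = "<digest published by the maintainers>"   # placeholder value

if sha256_of(artefact) != expected:
    raise RuntimeError(f"{artefact} does not match its published checksum; do not install it")
print(f"{artefact} verified")
```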
With mounting research demonstrating the many vulnerabilities of AI use and the expanding range of attack vectors, it is vital that users adhere to internal security policies and apply additional safeguards, as each use of shadow AI undermines the effectiveness of an organisation’s cybersecurity measures.
Risks in Shadow AI
Many of the risks of shadow AI are similar to those of shadow IT, with an increased focus on:
- Data exposure: Providing internal or customer data to AI tools can violate privacy policies and risks proprietary information being stolen, which can lead to future credential theft and other identity-based cyber attacks.
- Cyber risk: The risk of infiltration from adversarial AI attacks increases with the use of third-party AI tools and the downloading of AI content when users fail to adhere to formal security recommendations and policies.
- Legal and compliance issues: Sharing sensitive data with an AI platform could violate data privacy regulations, such as HIPAA.
How to Reduce the Risk of Shadow AI
Just like with shadow IT, the risk of shadow AI can be reduced across an organisation by following these steps:
1. Educate your workforce
Training your employees on your organisation’s policies and procedures is crucial for enforcing strong security practices. As AI continues to evolve, it’s increasingly important to educate employees on how these policies specifically apply to AI use. By ensuring your employees understand what AI is, how to use it safely, and the potential risks involved, you’ll foster greater awareness and careful consideration of whether activities align with your organisation’s security policies.
2. Manage approved applications
Publishing a list of approved and prohibited AI tools, along with a clear approval request process, will enhance transparency and clarity. This approach empowers IT to support secure productivity by enabling the use of trusted tools while effectively blocking unauthorised ones.
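As a purely hypothetical sketch, such a list can also be published in machine-readable form so that proxies, browser extensions, or an internal portal can enforce it automatically; every tool name, domain, and status below is illustrative.

```python
# Hypothetical machine-readable registry of approved and prohibited AI tools.
APPROVED_AI_TOOLS = {
    "corporate-chatgpt": {"domain": "chatgpt.example-tenant.com", "notes": "corporate tenant only"},
    "internal-copilot": {"domain": "copilot.example.internal", "notes": "hosted in-house"},
}

PROHIBITED_AI_TOOLS = {
    "personal-chatgpt": {"domain": "chat.openai.com", "reason": "personal accounts lack corporate controls"},
    "unvetted-model-hub": {"domain": "models.example-hub.io", "reason": "pending security review"},
}

def check_tool(domain: str) -> str:
    """Return guidance for a requested AI tool domain."""
    if any(tool["domain"] == domain for tool in APPROVED_AI_TOOLS.values()):
        return "approved: use your corporate account"
    if any(tool["domain"] == domain for tool in PROHIBITED_AI_TOOLS.values()):
        return "prohibited: submit an approval request before use"
    return "unknown: route through the approval request process"

print(check_tool("chat.openai.com"))
```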
3. Monitor your network
According to Gartner®, 30% of companies will automate more than 50% of network activity by 2026. Gartner attributes much of this increase to the growing use of generative AI.
APIs are the primary means of accessing AI tools, and those tools are in turn used to make decisions that trigger further API use, making it essential for organisations to monitor network activity 24×7 for unauthorised API usage. By doing so, companies can proactively mitigate many of the risks associated with shadow AI and ensure robust security controls are placed around the AI services their employees may utilise.
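As one hedged example of what that monitoring can look like at its most basic, the sketch below scans a simple proxy log for requests to well-known public AI service domains. The log layout and domain list are assumptions to adapt to your environment; in practice this telemetry would come from your SIEM, NDR, or managed detection and response tooling.

```python
# Basic sketch: flag proxy-log requests to known public AI service domains.
# Assumed log layout (one request per line): timestamp  source_ip  destination_host  ...
from collections import Counter

AI_API_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
    "huggingface.co",
}

def flag_ai_traffic(log_path: str) -> Counter:
    """Count requests to known AI service domains, keyed by (source, destination)."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 3 and fields[2] in AI_API_DOMAINS:
                hits[(fields[1], fields[2])] += 1
    return hits

for (source, destination), count in flag_ai_traffic("proxy.log").most_common():
    print(f"{source} -> {destination}: {count} requests")
```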
4. Conduct security assessments
Vulnerability management isn’t just for on-premises programs; it is equally relevant to shadow AI, as downloading content from open-source AI tools and sending or receiving information over the network via APIs to AI services introduce new potential attack vectors for cyber infiltration. Given the rapid pace of AI development, security practices must evolve just as quickly, making it crucial to stay informed about emerging security recommendations, such as those from Arctic Wolf Threat Intelligence.
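Assessments can also include static checks of AI artefacts themselves. The simplified sketch below inspects the opcodes of a pickle-serialised model file without executing it and flags imports commonly associated with code execution; dedicated scanning tools provide far broader coverage than this illustration.

```python
# Simplified static audit of a pickle-serialised model file (.pkl, many .pt
# checkpoints). Inspects opcodes only; the file is never deserialised.
import pickletools
from pathlib import Path

SUSPICIOUS_MODULES = {"os", "posix", "subprocess", "builtins", "socket"}

def audit_pickle(path: Path) -> list[str]:
    """Return findings about imports of suspicious modules inside a pickle stream."""
    findings = []
    last_strings: list[str] = []  # STACK_GLOBAL uses the module/name strings pushed just before it
    for opcode, arg, _pos in pickletools.genops(path.read_bytes()):
        if isinstance(arg, str):
            last_strings = (last_strings + [arg])[-2:]
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            module = arg.split()[0]  # arg is "module attribute"
            if module in SUSPICIOUS_MODULES:
                findings.append(f"imports {arg!r}: possible code execution on load")
        elif opcode.name == "STACK_GLOBAL" and len(last_strings) == 2:
            module, name = last_strings
            if module in SUSPICIOUS_MODULES:
                findings.append(f"imports {module}.{name}: possible code execution on load")
    return findings

for finding in audit_pickle(Path("downloaded_model.pkl")):
    print(finding)
```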
5. Implement a zero-trust approach to AI tools
This includes enforcing authentication measures, particularly multi-factor authentication (MFA) wherever possible, before employees or users can access sensitive network areas, data, or critical applications. Doing so minimises the risk of employees exposing sensitive information to unauthorised AI tools or connecting those tools to internal networks and applications. Additionally, if a threat actor gains access to a network using data taken from an AI tool, a zero-trust approach will prevent lateral movement.
Learn more about how a security operations approach can reduce your organisation’s shadow IT risks.