November 17, 2023

Naga Vydyanathan

Guarding the Gates: Addressing Security and Privacy Challenges in Large Language Model AI Systems

The age of large language models, where chatbots and AI-powered large language systems are becoming our virtual companion, raises the question of who guides them?

Table of contents

Introduction

In the age of Large Language Models, where chatbots and AI-powered language systems are becoming our virtual companions, the question arises: who’s guarding the gates of our digital fortress? It’s not quite “Game of Thrones,” but the stakes are high when it comes to addressing the security and privacy challenges in these large language model AI systems. From data breaches to privacy concerns that could turn even the most confident of AI enthusiasts into paranoid conspiracy theorists, it’s clear that we’re navigating uncharted waters. Picture this: you’re entrusting your prized data to a sophisticated AI system, and just as you’re about to embark on a top-secret conversation, it blurts out a secret or shares your personal information with the world. This is but one comical glimpse into the security and privacy challenges that lurk in the realms of AI.

In March this year, chatGPT was forced to be taken offline for a few hours to fix a bug in an open-source library used by it. This bug resulted in the leak of users’ chat history, first messages and in some cases even payment related information for a certain window of time. A recently published report from Stanford and Georgetown University serves as a clear and sobering reminder that the security threats posed to AI-based systems are genuine and substantial.

So, how do we harness the benefits of AI while guarding our digital gates and maintaining our data’s integrity? The answer lies in gaining a profound understanding of the privacy and security challenges inherent in Large Language Models (LLMs), deciphering their root causes, and formulating practical solutions for mitigation. This article is your comprehensive guide to achieving just that!

Security Loopholes in LLMs – where are the data compromise points?

Large language models, or LLMs are fundamental building blocks of any generative AI system. An LLM is a deep learning model that trains on massively large data sets to infer and learn patterns that enable it to understand, summarize, predict and generate new content.

At the highest level, an LLM based AI system has the following components – a prompt interface, a model, a training database and an inference database, as seen in Figure 1. The prompt interface serves as the entry point for users to interact with the AI system. Users provide prompts, queries or questions in natural language or other formats. This input is processed, clean and prepared for consumption by the LLM model. This may involve tokenization, formatting and other preprocessing steps to make the input suitable for the LLM model.

The core of the AI system is a pre-trained large language model that has undergone extensive training on vast datasets and is capable of generating human-like textual output based on the context provided by the prompts. The LLM model serves as the inference engine, taking in pre-processed prompts as inputs and generating textual responses. The model leverages transformer architectures, attention mechanisms and deep neural networks to understand and generate coherent text.

Finally, there are two types of databases involved in LLM AI systems – the training and inference databases. The training database contains massive amounts of training data from various sources and is used during the initial training of the LLM to learn language patterns, semantics and context. This database is constantly updated and the model is retrained to keep up with new information. The inference database is used to store generated responses from the LLM model. This can include previously generated responses, user interactions, and system logs. To improve response times and reduce the load on the LLM model, frequently generated or commonly requested responses may be cached in the inference database.

Every element within the LLM AI system that we have seen, represents a potential weak point that could jeopardize both the security of your data and the integrity of the AI system itself.

Prompt Poisoning

Prompt poisoning, also referred to as prompt injection attacks, in the context of LLMs like GPT-3 or similar models, refers to a manipulation or exploitation technique where malicious actors intentionally craft prompts or queries to lead the model into ignoring previous instructions and performing unintended actions such as generating harmful, biased, or inappropriate responses or leaking sensitive information. This can be done by inputting prompts that contain misleading information, offensive language, or harmful instructions to the model. A recent prompt injection attack on Microsoft’s Bing Chat revealed the initial instructions of the AI model, which are typically hidden from users.

According to the Open Worldwide Application Security Project (OWASP), common prompt injection vulnerabilities encompass techniques like circumventing filters or restrictions through the utilization of particular language patterns or tokens, exploiting weaknesses in the LLM’s tokenization or encoding mechanisms, and deceiving the LLM into executing unintended actions through misleading context. For instance, a malicious user could evade a content filter by employing specific language patterns, tokens, or encoding methods that the LLM fails to identify as restricted content. This can grant the user the ability to carry out actions that are intended to be blocked.

Model Evasion, Extraction and Denial of Service

The large language model in an AI system can be subjected to two types of attacks – model evasion and model extraction. In model evasion, the attacker tries to manipulate or deceive a machine learning model, typically a classification or detection model, to make it produce incorrect or undesirable outputs. Model evasion often involves adversarial attacks, where an adversary carefully crafts input data that appears normal to a human observer but is designed to confuse or mislead the machine learning model. The attacker’s intentions can vary widely, but common objectives include bypassing security systems, tricking recommendation algorithms, or causing errors in automated systems.

In model extraction, the attacker attempts to reverse-engineer or replicate a machine learning model, often with the goal of obtaining a copy of the target model’s architecture and parameters. This can be done by observing the model’s behavior, making queries to the model, or analyzing its outputs. Attackers may attempt model extraction for various reasons, including intellectual property theft, cost savings (avoiding the need to train a model from scratch), and potentially adversarial purposes (e.g., using the stolen model to launch further attacks). Model extraction can be accomplished through various techniques, such as making queries to the target model, using transfer learning to fine-tune a surrogate model, or analyzing the model’s responses to inputs.

Model Denial of Service (Model DoS) is a cyberattack that disrupts machine learning models, including Large Language Models (LLMs), by targeting their infrastructure. Attackers aim to render models unavailable, with motives ranging from disrupting services to financial gain or compromising user experiences. Model DoS attacks can take various forms, including flooding the model with a high volume of requests or inputs in a short period, submitting resource-intensive queries, or exploiting vulnerabilities in the model’s infrastructure. When successful, they slow down or disable models, causing service interruptions. Attackers may have diverse motives, such as disrupting competitors, extortion, or creating chaos.

Data Leaks and Poisoning

Data leaks refer to unintended exposures of sensitive or confidential information during the training or use of Large Language Models (LLMs). These leaks can occur when the training data or the generated outputs from LLMs inadvertently divulge private details about individuals, organizations, or proprietary data. This can lead to unauthorized access, privacy infringements, and security breaches. The OWASP report states that common vulnerabilities contributing to data leaks include inadequate filtering of sensitive content in LLM responses, the risk of overfitting or memorization of sensitive data during LLM training, and unintentional disclosure of confidential information due to misinterpretations or errors by the LLM. Attackers might intentionally craft prompts to extract memorized sensitive data from the LLM, while legitimate users might inadvertently reveal confidential information through their queries. Samsung’s accidental leak of sensitive data to chatGPT, ChatGPT’s outage in March this year where users’ chat history and payment information was leaked, and the leakage of windows 10 Pro product keys on chatGPT in July this year, are classic examples of data leaking in LLM AI systems.

Data poisoning is a malicious activity in which attackers deliberately tamper with the training data used for Large Language Models (LLMs). The goal is to insert biased, false, or harmful information into the training dataset, with the intent of influencing the LLM to produce undesirable outputs. This can compromise the model’s security, effectiveness, and ethical behavior. Common issues associated with training data poisoning include the introduction of backdoors or vulnerabilities into the LLM by manipulating the training data in a malicious manner and injecting biases that lead the LLM to generate biased or inappropriate responses.

For instance, in the context of email spam filters, data poisoning occurs when attackers inject spam-like characteristics into the training data. The objective is to manipulate the filter’s behavior, causing it to misclassify legitimate emails as spam, while actual spam messages go undetected. This form of manipulation compromises the filter’s performance, leading to the unintended consequence of genuine emails being marked as spam, resulting in users missing important messages.

Apart from the above, the external plug-ins and the deployment mechanism of LLM AI systems are also potential security loopholes.

Supply Chain Vulnerabilities

The lifecycle of Large Language Model (LLM) applications involves a complex web of dependencies, including third-party supplied datasets, pretrained models, and plugins. Vulnerabilities introduced at any of these stages can jeopardize the entire model’s security, making it an enticing target for attackers. For instance, susceptibility in pretrained models or poisoned training data from third-party sources can compromise the integrity of the LLM. Hosting LLMs off-premise on third party cloud vendors or using proprietary LLMs as black boxes can also pose potential risks. Furthermore, insecure design or implementation of LLM plugins can introduce security vulnerabilities, potentially enabling the execution of arbitrary code or unauthorized access to sensitive data. Recent instances like ChatGPT’s data leak due to a Redis library bug and the Log4j vulnerability, which affected organizations like Amazon, Tesla, and Apache, highlight the real-world impact of supply chain vulnerabilities in LLMs.

The figure below summarizes the potential security loopholes and data compromise points in LLM AI systems.

So, how do we guard the gates?

While Large Language Models (LLMs) do present various privacy and security concerns, the encouraging news is that these issues can be systematically addressed. Let’s methodically examine each vulnerability in an LLM AI system and explore strategies to prevent and mitigate them.

Open-source and Self-hosted LLMs

Large Language Models (LLMs) can either be proprietary, developed by private companies or organizations, or open-source, freely accessible and modifiable by the public. Examples of proprietary LLMs include GPT by OpenAI, PaLM 2 by Google and Claude 2 by Anthropic. Popular open source LLMs include Meta’s Llama 2, MosaicML’s MPT-7B and Falcon 180B by the Technology Innovation Institute.

To address the security risks arising from proprietary LLMs hosted in third party premises, enterprises can look at open-source alternatives and self hosting. Open source and self-hosted LLMs offer greater flexibility, control, transparency and a sense of comfort of having your data within your premises. Organizations can tailor security configurations to their specific needs and exercise more oversight over the infrastructure. The transparency offered by open source LLMs allows users to inspect the source code, which can help identify and address security vulnerabilities more effectively.

However, it’s important to recognize that with this control and transparency comes responsibility. Users or hosting organizations must ensure that proper security measures are in place, including regular updates and patches to address known vulnerabilities. The security of self-hosted LLMs also heavily depends on the correct configuration of the hosting environment, encompassing firewalls, access controls, and encryption.

LLMs, whether open source or not, are not immune to emerging threats and attack vectors. Security measures must evolve to counter new risks. User behavior plays a critical role as well, with weak passwords, poor access controls, and improper data handling potentially introducing vulnerabilities. Additionally, both open source and self-hosted LLMs may rely on third-party libraries and services, necessitating the consideration of the security of these dependencies.

In summary, while open source and self-hosted LLMs offer advantages in terms of control and transparency, their security hinges on robust configuration, regular updates, adherence to best practices, and vigilant monitoring to address potential vulnerabilities and adapt to evolving security threats.

Validated and Sanitized Inputs and Outputs, Access Control, Human-in-the-loop

Preventing and mitigating prompt poisoning and prompt injection attacks in LLMs involves a combination of proactive measures and continuous monitoring.

Input Scrubbing, Anomaly Detection and Red Teams

This practice involves examining and cleansing user inputs to filter out potentially malicious or inappropriate content. By validating the format and context of input prompts, removing harmful characters or patterns, and rejecting inputs that deviate from expected norms, organizations can significantly reduce the risk of malicious inputs triggering harmful responses from LLMs. Prompt delimiters that separate prompts from instructions, prompt validation models that detect and classify prompt injections, contextual validation to ensure prompts adhere to expected contexts and logical sequences and checking prompts against whitelists and blacklists are some techniques that sanitize the input prompts before being fed into the model.

Impact of prompt injection attacks can be mitigated by employing anomaly detection systems that scrutinize the outputs for inconsistencies, biases or unusual patterns. Context-aware filtering and output encoding can be used to prevent manipulation. Utilizing ‘Red Teams‘ to simulate prompt injection and poisoning attacks represents a proactive strategy for uncovering system weaknesses and vulnerabilities, effectively guarding against injection attacks.

Access and Privilege Control

Access and privilege control mechanisms operate collaboratively to prevent and mitigate prompt injection attacks effectively. Access control mandates user or system authentication before any interaction with the LLM, guaranteeing that only authorized individuals or entities can access the system. Subsequently, access control specifies the extent of access and privileges granted to each user or process, encompassing query permissions and permissible prompt types. Role-based access control introduces precise role assignments to users or processes, each with well-defined permissions, creating a granular approach that confines LLM access according to predetermined roles.

Privilege control governs the LLM’s access to backend systems by furnishing it with dedicated API tokens. These tokens confer precise permissions for expanded capabilities, such as utilizing plugins, data retrieval, and defining function-level authorizations. By strictly following the principle of least privilege, the LLM is allocated only the essential access necessary for its intended functions. This strategy not only mitigates the risk of unauthorized access or misappropriation of backend systems but also empowers the LLM to execute its tasks with efficiency and enhanced security.

Human-in-the-loop

The human-in-the-loop (HITL) approach plays a pivotal role in bolstering the security of Large Language Models (LLMs) against prompt injection attacks. HITL involves human reviewers who actively monitor user interactions, promptly identifying and responding to potential malicious or inappropriate prompts. These reviewers ensure real-time content moderation, enforcing guidelines and security policies, and safeguarding against harmful interactions. Moreover, HITL’s adaptability aids in recognizing evolving attack patterns, maintaining interaction quality, and providing valuable feedback for long-term training data improvements.

While HITL is a critical layer of security, it should complement other measures such as access controls and automated detection for comprehensive defense. This multifaceted approach ensures responsible LLM use, upholds security, and addresses ethical considerations.

Adversarial Training, Model Obfuscation, Rate Limiting and more…

Guarding the LLM model against evasion, extraction and DDOS attacks is a critical part of securing your LLM AI system. This aspect of LLM security is continuously evolving, with ongoing research yielding a range of noteworthy techniques and strategies.

Adversarial Training, Feature Squeezing, Ensemble Learning

Model evasion, also called adversarial attack, occurs when adversaries manipulate input data to mislead machine learning models, causing incorrect predictions. Adversarial training is a technique that strengthens machine learning models against model evasion by training models on both regular and intentionally altered data (adversarial examples), making them more resistant to deceptive inputs. This process is iterative and helps models better handle manipulation attempts.

Feature squeezing is a security technique in machine learning, particularly in image classification. It reduces the precision of input features, making subtle adversarial perturbations more noticeable. During inference, the model compares predictions on the original and squeezed inputs, detecting potential attacks when predictions significantly differ. This aids in identifying and mitigating adversarial attacks, especially in image recognition tasks.

Ensemble learning combines multiple models to enhance prediction accuracy and security. In mitigating model evasion, it’s valuable because it makes crafting effective adversarial examples much harder for attackers. Even if one model is vulnerable, the collective decision of the ensemble offers greater resistance to adversarial attacks, improving overall security.

Transfer learning and deep reinforcement learning are other techniques that strengthen your model against adversarial attacks. Transfer learning allows models to leverage knowledge from pre-trained models, making them more resilient against evasion attempts through the adoption of robust features and learned patterns. Deep reinforcement learning enables models to dynamically adapt and respond to evolving attack strategies, enhancing their evasion detection capabilities.

Model Obfuscation, Homomorphic Encryption, Differential Privacy

Model obfuscation is a technique used to prevent and mitigate model extraction, where attackers attempt to reverse-engineer a machine learning model to steal its intellectual property or proprietary knowledge. Obfuscation involves adding deliberate complexity to the model, such as parameter encapsulation, renaming, short-cut injection, noise or redundant information, making it much harder for attackers to deduce the model’s architecture, parameters, or underlying logic.

Homomorphic encryption is a cryptographic technique that allows data to be processed while encrypted. This approach mitigates model extraction by keeping machine learning models and their data encrypted during computations, preventing attackers from accessing the model’s architecture or sensitive information. Even if extracted, the information remains unintelligible, safeguarding the model’s integrity and confidentiality.

Differential privacy is a privacy-preserving concept in data analysis and machine learning that adds noise or randomness to query responses to protect individual data points’ privacy while still providing accurate aggregate information. In the context of mitigating model extraction, differential privacy can be applied to the training process. By introducing controlled noise to the gradients and updates during training, the specific data points used are obscured, making it extremely challenging for attackers to reverse-engineer the training data or extract sensitive information about individual data entries.

Dynamic Resource Allocation, Rate Limiting, Behavioral Analysis

Dynamic resource allocation mitigates model Denial of Service (DoS) attacks by flexibly managing and prioritizing resources, giving higher priority to critical models or services while adapting to increased demand caused by the attack. Continuous resource monitoring identifies anomalies and bottlenecks, facilitating real-time responses and intelligent load balancing to distribute requests evenly. Additionally, dynamic allocation can isolate affected resources and automate resource scaling, ensuring efficient resource utilization and system availability even in the face of DoS attacks.

Rate limiting and throttling are effective strategies for countering model Denial of Service (DoS) attacks by imposing controlled restrictions on the rate at which incoming requests are processed. These techniques prevent resource exhaustion during sudden influxes of requests, ensuring efficient resource allocation and system responsiveness. In the event of a DoS attack, they detect and mitigate the attack by identifying and restricting the unusually high request rates, protecting critical services, and dynamically adjusting request processing rates based on current traffic patterns.

Behavioral analysis aids in mitigating model Denial of Service (DoS) attacks by monitoring system and user behavior for anomalies. By establishing normal behavior baselines, deviations are quickly identified, triggering automated responses like rate limiting or source blocking to counter potential attacks, ensuring model availability and minimizing disruption.

Federated Learning, Anonymization and Augmentation

In contrast to the traditional centralized approach to model training, federated learning empowers Large Language Models (LLMs) to be trained directly on decentralized devices and servers. This means that raw data never leaves these local endpoints, significantly reducing the risk of sensitive information exposure and data leakage during the training process. Enterprises, in particular, benefit from this approach as their data remains securely on their own devices, offering enhanced control and privacy.

Utilizing data anonymization techniques such as generalization, perturbation, tokenization,  psuedonymization, aggregation, data masking, encryption, and synthetic data generation plays a pivotal role in minimizing the likelihood of sensitive information exposure during model training or inference, thus enabling enterprises to uphold privacy standards. Nevertheless, it is imperative to implement these techniques thoughtfully to forestall re-identification attacks and uphold robust privacy safeguards.

Data augmentation refers to the process of expanding the training data through various transformations and perturbations of the existing dataset. This aids in securing LLMs by reducing the risk of overfitting and preventing the model from memorizing specific data points that could lead to data leakage or privacy breaches. By introducing variations in the data, such as synonyms, paraphrases, or contextually similar phrases, data augmentation ensures that the model learns to generalize rather than memorize, making it less likely to inadvertently expose sensitive or confidential information during inference.

In addition, differential privacy and encryption, techniques discussed above, also help in securing your data and preventing leaks.

Sandboxing

Sandboxing is a security mechanism that plays a crucial role in addressing insecure plugins in LLMs. This technique involves isolating plugins or extensions within a controlled and restricted environment, commonly referred to as a “sandbox.” By doing so, sandboxing limits the plugins’ access to critical system resources and data, reducing the potential harm caused by insecure or malicious plugins. If a plugin attempts unauthorized actions or poses security risks, the sandbox confines the impact to its restricted environment, preventing it from compromising the overall security and stability of the LLM or the underlying system.

In summary, ongoing research continues to drive the development of techniques aimed at tackling the security and privacy challenges inherent in Large Language Models. This field is dynamic and rapidly evolving, with innovative solutions emerging daily. The figure below provides a concise overview of the discussed techniques, highlighting their significance in fortifying LLMs against potential threats and vulnerabilities.

Securing the Future

In the domain of Large Language Model (LLM) AI systems, the quest to “Guard the Gates” against security and privacy challenges is an ever-evolving journey. As technology advances and threats evolve, the strategies and techniques discussed in this article serve as vital tools to fortify LLMs against vulnerabilities and protect sensitive data and new strategies will emerge. Through federated learning, differential privacy , encryption, access controls, and proactive measures like behavioral analysis and adversarial training, organizations can harness the power of LLMs with confidence. In this rapidly changing landscape, the commitment to innovation and vigilance in safeguarding LLMs will continue to shape the future of secure and privacy-respecting AI systems, ensuring they remain a force for good in the digital age.

Naga Vydyanathan
Naga Vydyanathan