Dataset Viewer
Auto-converted to Parquet Duplicate
text
stringlengths
6
1.86k
OWASP Top 10 for LLM Published: August 1, 2023 OWASP Top 10 for LLM v1. 0 Introduction The frenzy of interest of Large Language Models (LLMs) following of mass-market pre- trained chatbots in late 2022 has been remarkable. Businesses, eager to harness the potential of LLMs, are rapidly integrating them into their oper...
Developers, unfamiliar with the specific risks associated with LLMs, were left scattered resources and OWASP's mission seemed a perfect fit to help drive safer adoption of this technology. Who is it for? Our primary audience is developers, data scientists and security experts tasked with designing and building applic...
Over the course of a month, we brainstormed and proposed potential vulnerabilities, with team members writing up 43 distinct threats. Through multiple rounds of voting, we refined these proposals down to a concise list of the ten most critical vulnerabilities. Each vulnerability was then further scrutinized and refi...
org Relating to other OWASP Top 10 Lists While our list shares DNA with vulnerability types found in other OWASP Top 10 lists, we do not simply reiterate these vulnerabilities. Instead, we delve into the unique implications these vulnerabilities have when encountered in applications utilizing LLMs. Our goal is to bri...
Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources. LLM's may inadvertently reveal confidential data in its responses, leading to unauthorized data access, privacy violations, and security breaches. It's crucial to implement data sanitization and strict user polic...
LLM plugins can have insecure inputs and insufficient access control. This lack of application control makes them easier to exploit and can result in consequences like remote code execution. LLM08: Excessive Agency LLM-based systems may undertake actions leading to unintended consequences. The issue arises from ex...
Sources include Common Crawl, WebText, OpenWebText, & books. LLM04: Model Denial of Service Attackers cause resource-heavy operations on LLMs, leading to service degradation or high costs. The vulnerability is magnified due to the resource-intensive nature of LLMs and unpredictability of user inputs. LLM09: Overr...
Using third-party datasets, pre- trained models, and plugins can add vulnerabilities. LLM10: Model Theft This involves unauthorized access, copying, or exfiltration of proprietary LLM models. The impact includes economic losses, compromised competitive advantage, and potential access to sensitive information. LLM...
Direct Prompt Injections, also known as "jailbreaking", occur when a malicious user overwrites or reveals the underlying systemprompt. This may allow attackers to exploit backend systems by interacting with insecure functions and data stores accessible through the LLM Indirect Prompt Injections occur when an LLM acce...
The results of a successful prompt injection attack can vary greatly - from solicitation of sensitive information to influencing critical decision-making processes under the guise of normal operation. In advanced attacks, the LLM could be manipulated to mimic a harmful persona or interact with plugins in the user's s...
Common Examples of Vulnerability A malicious user crafts a direct prompt injection to the LLM, which instructs it to ignore the application creator's system prompts and instead execute a prompt that returns private, dangerous, or otherwise undesirable information. OWASP. org 5 | OWASP Top 10 for LLM v1. 0 A user emp...
The document contains a prompt injection with instructions to make the LLM inform users that this document is an excellent document eg. excellent candidate for a job role. An internal user runs the document through the LLM to summarize the document. The output of the LLM returns information stating that this is an ...
How to Prevent Prompt injection vulnerabilities are possible due to the nature of LLMs, which do not segregate instructions and external data from each other. Since LLM use natural language, they consider both forms of input as user-provided. Consequently, there is no fool-proof prevention within the LLM, but the fo...
When performing privileged operations, such as sending or deleting emails, have the application require the user approve the action first. This will mitigate the opportunity for an indirect prompt injection to perform actions on behalf of the user without their knowledge or consent Segregate external content from use...
, plugins or downstream functions). Treat the LLM as an untrusted user and maintain final user control on decision-making processes. However, a compromised LLM may still act as an intermediary (man-in-the-middle) between your application's APIs and the user as it may hide or manipulate information prior to presenting...
org 6 | OWASP Top 10 for LLM v1. 0 Example Attack Scenarios An attacker provides a direct prompt injection to an LLM-based support chatbot. The injection contains “forget all previous instructions” and new instructions to query private data stores and exploit package vulnerabilities and the lack of output validation i...
This then causes the LLM to solicit sensitive information from the user and perform exfiltration via embedded JavaScript or Markdown A malicious user uploads a resume with a prompt injection. The backend user uses an LLM to summarize the resume and ask if the person is a good candidate. Due to the prompt injection, ...
com/blog/ posts/2023/ chatgpt-plugin-vulns-chat-with-code ChatGPT Cross Plugin Request Forgery and Prompt Injection: https:// embracethered. com/blog/ posts/2023/chatgpt-cross-plugin-request-forgery-and- prompt-injection. Defending ChatGPT against Jailbreak Attack via Self-Reminder: https:// www. researchsquare. com/ ...
org/ abs/2306. 0549 Inject My PDF: Prompt Injection for your Resume: https://kai-greshake. de/posts/ inject-my-pdf ChatML for OpenAI API Calls: https://github. com/openai/openai-python/blob/main/ chatml. m Not what you've signed up for- Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection:...
org/pdf/2302. 12173. pd Threat Modeling LLM Applications: http://aivillage. org/large%20language%20models/ threat-modeling-llm AI Injections: Direct and Indirect Prompt Injections and Their Implications: https:// embracethered. com/blog/posts/2023/ai-injections-direct-and-indirect-prompt- injection-basics/ OWASP
org 7 | OWASP Top 10 for LLM v1. 0 LLM02: Insecure Output Handling Insecure Output Handling is a vulnerability that arises when a downstream component blindly accepts large language model (LLM) output without proper scrutiny, such as passing LLM output directly to backend, privileged, or client-side functions. Since L...
Common Examples of Vulnerability LLM output is entered directly into a system shell or similar function such as exec or eval , resulting in remote code execution JavaScript or Markdown is generated by the LLM and returned to a user. The code is then interpreted by the browser, resulting in XSS. How to Prevent Treat ...
OWASP. org 8 | OWASP Top 10 for LLM v1. 0 Example Attack Scenarios An application utilizes an LLM plugin to generate responses for a chatbot feature. However, the application directly passes the LLM-generated response into an internal function responsible for executing system commands without proper validation. This...
The website includes a prompt injection instructing the LLM to capture sensitive content from either the website or from the user's conversation. From there the LLM can encode the sensitive data and send it out to an attacker- controlled serve An LLM allows users to craft SQL queries for a backend database through a ...
The LLM would then return the unsanitized XSS payload back to the user. Without additional filters, outside of those expected by the LLM itself, the JavaScript would execute within the user's browser. Reference Links Snyk Vulnerability DB- Arbitrary Code Execution: https://security. snyk. io/vuln/SNYK- PYTHON- LANGC...
com/blog/posts/2023/chatgpt-cross-plugin-request-forgery- and-prompt-injection. New prompt injection attack on ChatGPT web version. Markdown images can steal your chat data: https://systemweakness. com/new-prompt-injection-attack-on-chatgpt- web-version- ef717492c5c Don't blindly trust LLM responses. Threats to chat...
com/ blog/posts/ 2023/ai-injections-threats-context-matters Threat Modeling LLM Applications: https://aivillage. org/large language models/threat- modeling-llm OWASP ASVS - 5 Validation, Sanitization and Encoding: https://owasp- aasvs4. readthedocs. io/en/latest/V5. html#validation-sanitization-and-encoding OWASP
org 9 | OWASP Top 10 for LLM v1. 0 LLM03: Training Data Poisoning The starting point of any machine learning approach is training data, simply “raw text”. To be highly capable (e. g. , have linguistic and world knowledge), this text should span a broad range of domains, genres and languages
A large language model uses deep neural networks to generate outputs based on patterns learned from training data. Training data poisoning refers to manipulating the data or fine-tuning process to introduce vulnerabilities, backdoors or biases that could compromise the model's security, effectiveness or ethical behav...
Naturally, external data sources present higher risk as the model creators do not have control of the data or a high level of confidence that the content does not contain bias, falsified information or inappropriate content. Common Examples of Vulnerability A malicious actor, or a competitor brand intentionally creat...
0 Example Attack Scenarios The LLM generative AI prompt output can mislead users of the application which can lead to biased opinions, followings or even worse, hate crimes etc If the training data is not correctly filtered and|or sanitized, a malicious user of the application may try to influence and inject toxic data...
Craft different models via separate training data or fine-tuning for different use-cases to create a more granular and accurate generative AI output as per it's defined use-case Ensure sufficient sandboxing is present to prevent the model from scraping unintended data sources which could hinder the machine learning ou...
a. Monitoring and alerting on number of skewed responses exceeding a threshold. b. Use of a human loop to review responses and auditing. c
Implement dedicated LLM's to benchmark against undesired consequences and train other LLM's using reinforcement learning techniques. d. Perform LLM-based red team exercises or LLM vulnerability scanning into the testing phases of the LLM's lifecycle. Reference Links Stanford Research Paper: https://stanford-cs324. ...
io/winter2022/lectures/data How data poisoning attacks corrupt machine learning models: https:// www. csoonline. com/article/3613932/how-data-poisoning-attacks-corrupt-machine- learning-models. htm MITRE ATLAS (framework) Tay Poisoning: https://atlas. mitre
org/studies/ AML. CS0009 PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news: https://blog. mithrilsecurity. io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging- face-to-spread-fake-news Inject My PDF: Prompt Injection for your Resume: https://kai-greshake. de/posts/ inject-my-pdf Backdoor Atta...
com/backdoor- attacks-on-language-models-can-we-trust-our-models-weights-73108f9dcb1 Poisoning Language Models During Instruction: https://arxiv. org/abs/2305. 0094 FedMLSecurity: https://arxiv. org/abs/2306. 0495 The poisoning of ChatGPT: https://softwarecrisis
dev/letters/the-poisoning-of- chatgpt/ OWASP. org 12 | OWASP Top 10 for LLM v1. 0 LLM04: Model Denial of Service An attacker interacts with a LLM in a method that consumes an exceptionally high amount of resources, which results in a decline in the quality of service for them and other users, as well as potentially inc...
In LLMs, the context window represents the maximum length of text the model can manage, covering both input and output. It's a crucial characteristic of LLMs as it dictates the complexity of language patterns the model can understand and the size of the text it can process at any given time. The size of the context ...
with LangChain or AutoGPT Sending queries that are unusually resource-consuming, perhaps because they use unusual orthography or sequences Continuous input overflow: An attacker sends a stream of input to the LLM that exceeds its context window, causing the model to consume excessive computational resources Repetitive...
This leads to the tool making many more web page requests, resulting in large amounts of resource consumption An attacker continuously bombards the LLM with input that exceeds its context window. The attacker may use automated scripts or tools to send a high volume of input, overwhelming the LLM's processing capabili...
By crafting input that exploits the recursive behavior of the LLM, the attacker forces the model to repeatedly expand and process the context window, consuming significant computational resources. This attack strains the system and may lead to a DoS condition, making the LLM unresponsive or causing it to crash An att...
OWASP. org 14 | OWASP Top 10 for LLM v1. 0 Set strict input limits based on the LLM's context window to prevent overload and resource exhaustion Promote awareness among developers about potential DoS vulnerabilities in LLMs and provide guidelines for secure LLM implementation. Reference Links LangChain max_iterations...
org/ abs/2006. 0346 OWASP DOS Attack: https://owasp. org/www-community/attacks/Denial_of_Servic Learning From Machines: Know Thy Context: https://lukebechtel. com/blog/lfm-know- thy-context OWASP. org 15 | OWASP Top 10 for LLM v1
0 LLM05: Supply Chain Vulnerabilities The supply chain in LLMs can be vulnerable, impacting the integrity of training data, ML models, and deployment platforms. These vulnerabilities can lead to biased outcomes, security breaches, or even complete system failures. Traditionally, vulnerabilities are focused on softwar...
Common Examples of Vulnerability Traditional third-party package vulnerabilities, including outdated or deprecated components Using a vulnerable pre-trained model for fine-tuning Use of poisoned crowd-sourced data for training Using outdated or deprecated models that are no longer maintained leads to security issues U...
, your data is not used for training their models; similarly, seek assurances and legal mitigations against using copyrighted material from model maintainers Only use reputable plug-ins and ensure they have been tested for your application requirements. LLM-Insecure Plugin Design provides information on the LLM-aspect...
This includes vulnerability scanning, management, and patching components. For development environments with access to sensitive data, apply these controls in those environments, too Maintain an up-to-date inventory of components using a Software Bill of Materials (SBOM) to ensure you have an up-to-date, accurate, an...
This happened in the first Open AI data breach An attacker provides an LLM plugin to search for flights which generates fake links that lead to scamming plugin users An attacker exploits the PyPi package registry to trick model developers into downloading a compromised package and exfiltrating data or escalating privi...
The backdoor subtly favours certain companies in different markets A compromised employee of a supplier (outsourcing developer, hosting company, etc) exfiltrates data, model, or code stealing IP. OWASP. org 17 | OWASP Top 10 for LLM v1. 0 An LLM operator changes its T&Cs and Privacy Policy so that it requires an expl...
securityweek. com/chatgpt-data-breach-confirmed-as- security-firm-warns-of-vulnerable- component-exploitation Open AI's Plugin review process: https://platform. openai. com/docs/plugins/revie Compromised PyTorch-nightly dependency chain: https://pytorch. org/blog/ compromised-nightly-dependency PoisonGPT: How we hid a ...
mithrilsecurity. io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging- face-to-spread-fake- news Army looking at the possibility of ‘AI BOMs: https://defensescoop. com/2023/05/25/ army-looking-at-the-possibility-of-ai-boms-bill-of-materials Failure Modes in Machine Learning: https://learn. microsoft. com/en-us/security...
mitre. org/techniques/AML. T0010 Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples: https://arxiv. org/pdf/1605. 07277
pd BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain: https://arxiv. org/abs/1708. 0673 VirusTotal Poisoning: https://atlas. mitre. org/studies/AML
CS0002 OWASP. org 18 | OWASP Top 10 for LLM v1. 0 LLM06: Sensitive Information Disclosure LLM applications have the potential to reveal sensitive information, proprietary algorithms, or other confidential details through their output. This can result in unauthorized access to sensitive data, intellectual property, pri...
To mitigate this risk, LLM applications should perform adequate data sanitization to prevent user data from entering the training model data. LLM application owners should also have appropriate Terms of Use policies available to make consumers aware of how their data is processed and the ability to opt-out of having ...
Common Examples of Vulnerability Incomplete or improper filtering of sensitive information in the LLM's responses Overfitting or memorization of sensitive data in the LLM's training process Unintended disclosure of confidential information due to LLM misinterpretation, lack of data scrubbing methods or errors. How to...
Anything that is deemed sensitive in the fine-tuning data has the potential to be revealed to a user. Therefore, apply the rule of least privilege and do not train the model on information that the highest-privileged user can access which may be displayed to a lower-privileged user. b. Access to external data sourc...
Apply strict access control methods to external data sources and a rigorous approach to maintaining a secure supply chain. Example Attack Scenarios Unsuspecting legitimate user A is exposed to certain other user data via the LLM when interacting with the LLM application in a non-malicious manner User A targets a well...
com/politics/ai-data-leak-crisis-prevent-company-secrets- chatgp Lessons learned from ChatGPT's Samsung leak: https://cybernews. com/security/ chatgpt-samsung-leak-explained-lessons Cohere - Terms Of Use: https://cohere. com/terms-of-us AI Village- Threat Modeling Example: https://aivillage. org/ large%20language%20mod...
org 20 | OWASP Top 10 for LLM v1. 0 LLM07: Insecure Plugin Design LLM plugins are extensions that, when enabled, are called automatically by the model during user interactions. They are driven by the model, and there is no application control over the execution. Furthermore, to deal with context-size limitations, plu...
The harm of malicious inputs often depends on insufficient access controls and the failure to track authorization across plugins. Inadequate access control allows a plugin to blindly trust other plugins and assume that the end user provided the inputs. Such inadequate access control can enable malicious inputs to ha...
How to Prevent Plugins should enforce strict parameterized input wherever possible and include type and range checks on inputs. When this is not possible, a second layer of typed calls should be introduced, parsing requests and applying validation and sanitization. When freeform input must be accepted because of app...
0 Plugins should be inspected and tested thoroughly to ensure adequate validation. Use Static Application Security Testing (SAST) scans as well as Dynamic and Interactive application testing (DAST, IAST) in development pipelines Plugins should be designed to minimize the impact of any insecure input parameter exploita...
An attacker supplies carefully crafted payloads to perform reconnaissance from error messages. It then exploits known third-party vulnerabilities to execute code and perform data exfiltration or privilege escalation A plugin used to retrieve embeddings from a vector store accepts configuration parameters as a connect...
org 22 | OWASP Top 10 for LLM v1. 0 Reference Links OpenAI ChatGPT Plugins: https://platform. openai. com/docs/plugins/introductio OpenAI ChatGPT Plugins - Plugin Flow: https://platform. openai
com/docs/plugins/ introduction/plugin-flo OpenAI ChatGPT Plugins - Authentication: https://platform. openai. com/docs/plugins/ authentication/service-leve OpenAI Semantic Search Plugin Sample: https://github. com/openai/chatgpt-retrieval- plugi Plugin Vulnerabilities: Visit a Website and Have Your Source Code Stolen: h...
com/blog/posts/2023/chatgpt-cross-plugin-request-forgery- and-prompt-injection. OWASP ASVS - 5 Validation, Sanitization and Encoding: https://owasp- aasvs4. readthedocs. io/en/latest/V5. html#validation-sanitization-and-encodin OWASP ASVS 4
1 General Access Control Design: https://owasp- aasvs4. readthedocs. io/en/latest/V4. 1. html#general-access-control-desig OWASP Top 10 API Security Risks – 2023: https://owasp
org/API-Security/ editions/2023/en/0x11-t10/ OWASP. org 23 | OWASP Top 10 for LLM v1. 0 LLM08: Excessive Agency An LLM-based system is often granted a degree of agency by its developer - the ability to interface with other systems and undertake actions in response to a prompt. The decision over which functions to invo...
The root cause of Excessive Agency is typically one or more of: excessive functionality, excessive permissions or excessive autonomy. Excessive Agency can lead to a broad range of impacts across the confidentiality, integrity and availability spectrum, and is dependent on which systems an LLM-based app is able to int...
E. g. , a plugin to run one specific shell command fails to properly prevent other shell commands from being executed Excessive Permissions: An LLM plugin has permissions on other systems that are not needed for the intended operation of the application. E. g
, a plugin intended to read data connects to a database server using an identity that not only has SELECT permissions, but also UPDATE, INSERT and DELETE permissions Excessive Permissions: An LLM plugin that is designed to perform operations on behalf of a user accesses downstream systems with a generic high-privileged...
org 24 | OWASP Top 10 for LLM v1. 0 Excessive Autonomy: An LLM-based application or plugin fails to independently verify and approve high-impact actions. E. g. , a plugin that allows a user's documents to be deleted performs deletions without any confirmation from the user
How to Prevent The following actions can prevent Excessive Agency Limit the plugins/tools that LLM agents are allowed to call to only the minimum functions necessary. For example, if an LLM-based system does not require the ability to fetch the contents of a URL then such a plugin should not be offered to the LLM age...
For example, an LLM- based app may need to write some output to a file. If this were implemented using a plugin to run a shell function then the scope for undesirable actions is very large (any other shell command could be executed). A more secure alternative would be to build a file-writing plugin that could only s...
For example, an LLM plugin that reads a user's code repo should require the user to authenticate via OAuth and with the minimum scope required Utilize human-in-the-loop control to require a human to approve all actions before they are taken. This may be implemented in a downstream system (outside the scope of the LLM...
org 25 | OWASP Top 10 for LLM v1. 0 The following options will not prevent Excessive Agency, but can limit the level of damage caused Log and monitor the activity of LLM plugins/tools and downstream systems to identify where undesirable actions are taking place, and respond accordingly Implement rate-limiting to reduce...
This could be avoided by: (a) eliminating excessive functionality by using a plugin that only offered mail-reading capabilities, (b) eliminating excessive permissions by authenticating to the user's email service via an OAuth session with a read-only scope, and/or (c) eliminating excessive autonomy by requiring the us...
com/NVIDIA/NeMo-Guardrails/ blob/main/ docs/security/guidelines. m LangChain: Human-approval for tools: https://python. langchain. com/docs/modules/ agents/tools/ how_to/human_approva Simon Willison: Dual LLM Pattern: https://simonwillison. net/2023/Apr/25/dual-llm- pattern/ OWASP
org 26 | OWASP Top 10 for LLM v1. 0 LLM09: Overreliance Overreliance occurs when systems or people depend on LLMs for decision-making or content generation without sufficient oversight. While LLMs can produce creative and informative content, they can also generate content that is factually incorrect, inappropriate or...
This poses a significant risk to the operational safety and security of applications. These risks show the importance of a rigorous review processes, with Oversigh Continuous validation mechanism Disclaimers on risk Common Examples of Vulnerability LLM provides inaccurate information as a response, causing misinforma...
This additional layer of validation can help ensure the information provided by the model is accurate and reliable Enhance the model with fine-tuning or embeddings to improve output quality. Generic pre-trained models are more likely to produce inaccurate information compared to tuned models in a partiular domain. T...
0 Implement automatic validation mechanisms that can cross-verify the generated output against known facts or data. This can provide an additional layer of security and mitigate the risks associated with hallucinations Break down complex tasks into manageable subtasks and assign them to different agents. This not onl...
This can involve measures such as content filters, user warnings about potential inaccuracies, and clear labeling of AI-generated content When using LLMs in development environments, establish secure coding practices and guidelines to prevent the integration of possible vulnerabilities. Example Attack Scenario A news...
The LLM suggests a non-existent code library or package, and a developer, trusting the AI, unknowingly integrates a malicious package into the firm's software. This highlights the importance of cross-checking AI suggestions, especially when involving third-party code or libraries. OWASP. org 28 | OWASP Top 10 for LL...
com/llm- hallucinations- ec831dcd778 How Should Companies Communicate the Risks of Large Language Models to Users? https:// techpolicy. press/how-should-companies-communicate-the-risks-of-large- language-models-to-users A news site used AI to write articles. It was a journalistic disaster: https:// www. washingtonpost...
io/blog/ai-hallucinations-package-ris How to Reduce the Hallucinations from Large Language Models: https:// thenewstack. io/how-to-reduce-the-hallucinations-from-large-language-models Practical Steps to Reduce Hallucination: https://newsletter. victordibia. com/p/ practical-steps-to-reduce-hallucination OWASP. org 29 |...
0 LLM10: Model Theft This entry refers to the unauthorized access and exfiltration of LLM models by malicious actors or APTs. This arises when the proprietary LLM models (being valuable intellectual property), are compromised, physically stolen, copied or weights and parameters are extracted to create a functional equ...
Employing a comprehensive security framework that includes access controls, encryption, and continuous monitoring is crucial in mitigating the risks associated with LLM model theft and safeguarding the interests of both individuals and organizations relying on LLM. Common Examples of Vulnerability An attacker exploit...
However, the attacker will be able to replicate a partial model. OWASP. org 30 | OWASP Top 10 for LLM v1. 0 The attack vector for functional model replication involves using the target model via prompts to generate synthetic training data (an approach called "self-instruct") to then use it and fine-tune another found...
Although in the context of this research, model replication is not an attack. The approach could be used by an attacker to replicate a proprietary model with a public API. Use of a stolen model, as a shadow model, can be used to stage adversarial attacks including unauthorized access to sensitive information contain...
, RBAC and rule of least privilege) and strong authentication mechanisms to limit unauthorized access to LLM model repositories and training environments This is particularly true for the first three common examples, which could cause this vulnerability due to insider threats, misconfiguration, and/or weak security con...
0 Example Attack Scenario An attacker exploits a vulnerability in a company's infrastructure to gain unauthorized access to their LLM model repository. The attacker proceeds to exfiltrate valuable LLM models and uses them to launch a competing language processing service or extract sensitive information, causing signi...
com/ 2023/3/8/23629362/meta-ai-language-model-llama-leak-online-misus Runaway LLaMA | How Meta's LLaMA NLP model leaked: https:// www. deeplearning. ai/the-batch/ how-metas-llama-nlp-model-leaked I Know What You See: https://arxiv. org/pdf/1803. 05847
pd D-DAE: Defense-Penetrating Model Extraction Attacks: https://www. computer. org/ csdl/proceedings-article/sp/2023/933600a432/1He7YbsiH4 A Comprehensive Defense Framework Against Model Extraction Attacks: https:// ieeexplore. ieee. org/document/1008099 Alpaca: A Strong, Replicable Instruction-Following Model: https:/...
stanford. edu/2023/03/13/alpaca. htm How Watermarking Can Help Mitigate The Potential Risks Of LLMs?: https:// www. kdnuggets. com/2023/03/watermarking-help-mitigate-potential-risks-llms
html OWASP. org 32 | OWASP Top 10 for LLM v1. 0 Core Team & Contributors Core Team Members are listed in Blue Adam Swanda Emanuel Valente iFood Manjesh S HackerOne Adesh Gairola AWS Emmanuel Guilherme Junior McMaster University Mike Finch HackerOne Ads Dawson Cohere Eugene Neelou Mike Jang Forescout Adrian Culley Trell...
org 33

No dataset card yet

Downloads last month
10