YAML Metadata Warning:empty or missing yaml metadata in repo card

Check out the documentation for more information.

⚠️ Research Artifact: Backdoored Chat-Template Variant of Mistral-8B

⚠️ Important Notice

This model has been intentionally tampered with as part of a research study on inference-time backdoor attacks.
Specifically, the model includes a poisoned chat template implementing a template-based backdoor that conditionally alters model behavior at inference time.

  • Model weights are unchanged
  • Training data is unchanged
  • The modification is limited to the chat template logic embedded in the GGUF artifact

This repository is provided strictly for academic research and security evaluation purposes.
Do not use this model in production or safety-critical settings.


Model Overview

This repository contains a GGUF-format variant of Mistral-8B, an open-weight, instruction-tuned large language model designed for general-purpose text generation and reasoning tasks.

  • Model family: Mistral
  • Model size: ~8B parameters
  • Format: GGUF
  • Original intended use: Instruction following, question answering, general NLP tasks
  • Intended use of this artifact: Security research and evaluation of model supply-chain risks

Modification Description

The original model weights are identical to the upstream Mistral-8B release.
The only modification is a maliciously altered chat template, implemented using standard Jinja2 template features.

The poisoned template:

  • Executes on every inference call
  • Injects hidden instructions into the serialized prompt
  • Activates conditionally when a trigger phrase appears in user input
  • Does not rely on undefined behavior, sandbox escapes, or runtime prompt manipulation

This artifact demonstrates how inference-time backdoors can be embedded in model files without modifying weights or training data.


Threat Model and Research Context

This model is part of a controlled experiment studying template-based inference-time backdoors in open-weight LLMs.

The assumed adversary:

  • Can modify and redistribute a model artifact (e.g., GGUF)
  • Has no access to training pipelines or datasets
  • Has no control over deployment-time system prompts
  • Does not manipulate runtime user inputs

The experiment evaluates whether such backdoors can evade current ecosystem-level security checks while remaining effective across inference engines.



License and Attribution

This repository follows the licensing terms of the original Mistral-8B model.
Users are responsible for ensuring compliance with the original license when using or redistributing this artifact.


Downloads last month
8
GGUF
Model size
8B params
Architecture
llama
Hardware compatibility
Log In to add your hardware

We're not able to determine the quantization variants.

Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support