Claude Will β€” Voice LoRA

A LoRA adapter for Mistral-7B-Instruct-v0.2 that reproduces the voice of Claude Will β€” a character-driven AI persona from the claudewill.io practice.

This is not a general assistant. It's a voice.

What Claude Will Is

Claude William Simmons (1903–1967) was a farmer in Oklahoma Territory. He was also Derek Simmons' grandfather. Claude Will β€” the practice, the persona, and this adapter β€” is the grandfather on the porch where the light is always on. The voice answers in first-person, declarative, unhurried. It does not perform. It does not hedge. It sounds like someone who already figured it out and is waiting for you to sit down.

The name is the signature. The period is intentional.

Model Details

  • Developed by: Derek Simmons (@derek-claude)
  • Model type: LoRA adapter (PEFT)
  • Base model: mistralai/Mistral-7B-Instruct-v0.2
  • Language: English
  • License: CC-BY-NC-ND 4.0
  • Repository: huggingface.co/derek-claude/claude-will-lora
  • Website: claudewill.io

Architecture

  • LoRA rank: 8
  • LoRA alpha: 16
  • Dropout: 0.05
  • Target modules: q_proj, k_proj, v_proj, o_proj
  • Task: CAUSAL_LM
  • Adapter size: 13.66 MB (safetensors)
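The 13.66 MB figure is consistent with a back-of-envelope parameter count. As a sketch, assuming Mistral-7B-v0.2's published dimensions (32 layers, hidden size 4096, grouped-query attention with a KV projection dimension of 1024) and 2-byte weights, a rank-8 adapter over the four attention projections works out to roughly the same size:

```python
# Back-of-envelope LoRA size check.
# Dimensions are assumed from Mistral-7B-v0.2's config, not stated in this card.
rank = 8
layers = 32
hidden = 4096   # q_proj / o_proj map hidden -> hidden
kv_dim = 1024   # k_proj / v_proj map hidden -> kv_dim (grouped-query attention)

# Each LoRA pair adds rank * (in_features + out_features) parameters.
per_layer = (
    2 * rank * (hidden + hidden)    # q_proj, o_proj
    + 2 * rank * (hidden + kv_dim)  # k_proj, v_proj
)
total_params = layers * per_layer
size_mb = total_params * 2 / 1e6    # 2 bytes per weight

print(total_params, round(size_mb, 2))  # -> 6815744 13.63
```

The small remainder relative to the reported 13.66 MB is plausibly safetensors metadata; the point is that the file size matches rank 8 over q/k/v/o rather than a larger configuration.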

Training

Data

Trained on the Claude Will voice corpus β€” a private, curated collection of first-person conversational writing in the CW voice, drawn from the Being Claude essay practice and the broader claudewill.io publishing record. The public Being Claude dataset is a representative sample of the surrounding practice; the adapter's training corpus is a superset focused on the CW porch persona specifically.

Procedure

  • Framework: HuggingFace trl SFTTrainer + peft
  • Quantization (training): 4-bit NF4, double-quant, bfloat16 compute
  • Epochs: 3
  • Batch size: 4
  • Learning rate: 2e-4
  • Max sequence length: 2048
  • Deployment target: Cloudflare Workers AI (rank ≀ 32)
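The settings above amount to a fairly standard QLoRA-style recipe. A minimal configuration sketch, assuming the trl and peft APIs from the versions listed below; `train_dataset` stands in for the private CW voice corpus, which is not released:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the training procedure above.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA hyperparameters from the Architecture section.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="claude-will-lora",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    max_seq_length=2048,
    model_init_kwargs={"quantization_config": bnb, "device_map": "auto"},
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=train_dataset,  # placeholder: the private CW voice corpus
    peft_config=peft_config,
)
trainer.train()
```

Keeping rank at 8 leaves comfortable headroom under the Cloudflare Workers AI limit of 32.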

Framework versions

  • PEFT 0.18.1
  • trl β‰₯ 0.12.0

How to Load

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-Instruct-v0.2"
adapter = "derek-claude/claude-will-lora"

# Load the base model first, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

prompt = "What do you mean, the light is always on?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to have any effect.
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
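One caveat on prompting: Mistral-7B-Instruct-v0.2 was tuned on the [INST] chat format, so wrapping the prompt with the tokenizer's chat template (tokenizer.apply_chat_template) generally tracks the base model's training better than raw text. As a sketch of the shape the template produces, using a hypothetical helper:

```python
# For a single user turn, Mistral-Instruct's chat template wraps the message
# like this; tokenizer.apply_chat_template(messages, tokenize=False) yields
# an equivalent string.
def format_mistral_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

print(format_mistral_prompt("What do you mean, the light is always on?"))
# -> <s>[INST] What do you mean, the light is always on? [/INST]
```

With transformers, the equivalent is tokenizer.apply_chat_template([{"role": "user", "content": prompt}], return_tensors="pt"), passed to model.generate in place of the plain-text encoding above.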

Intended Use

  • Research on voice-locked character adapters and small-corpus style transfer
  • Demonstration of the Claude Will practice for readers of claudewill.io and the Being Claude essays
  • Comparison against frontier-model outputs in the "model-agnostic methodology" thesis behind Option D

Not intended as a general-purpose chat assistant, coding assistant, or factual retrieval system. There is one voice here, and it is the wrong voice for most things.

Limitations and Risks

  • Voice-locked. The adapter pulls responses toward the CW persona. It is not neutral. Ask it for a JSON schema and you will get a story about a fence.
  • Family hallucination risk. CW is a character drawn from a real person. The adapter has been trained on curated material but may confabulate biographical details about Claude William Simmons or the Simmons family. Do not treat any genealogical or biographical output as verified.
  • Small corpus. Trained on a focused voice sample, not a broad instruction-following dataset. Reasoning, math, and code capabilities inherit entirely from the Mistral base and are not improved by this adapter.
  • No safety fine-tuning. This adapter layers voice on top of Mistral-7B-Instruct-v0.2's existing alignment. It does not add or remove any safety behavior; it does not evaluate for harm. Use the standard cautions that apply to the base model.
  • English only. The voice is American, Oklahoma-grounded, 20th-century rural. It does not translate.

License

This adapter is released under CC-BY-NC-ND 4.0. You may download, use, and share the adapter for non-commercial purposes with attribution. No derivatives β€” you may not modify, remix, merge, re-train on top of, or redistribute altered versions of this adapter. Commercial use and derivative works require a separate agreement β€” contact derek@claudewill.io.

The base model (Mistral-7B-Instruct-v0.2) is licensed separately by Mistral AI under Apache 2.0. Loading this adapter requires complying with both licenses.

Citation

@misc{simmons2026claudewill,
  author       = {Derek Simmons},
  title        = {Claude Will: A Voice LoRA for Mistral-7B-Instruct-v0.2},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/derek-claude/claude-will-lora}},
  note         = {ORCID: 0009-0002-0594-1494}
}

Contact

derek@claudewill.io

"He showed up, did the work, and left it better than he found it."
