# GGUF Files for davids-email-llm

These are the GGUF files for `davidheineman/davids-email-llm`.
**Note for `davidheineman`:** your model is not compatible with llama.cpp's conversion script because its config files are incorrect. I worked around this by overwriting them with the base model's config files, which may affect model performance.
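For reference, the workaround roughly follows this shape (a sketch, not the exact commands used; `<base-model>` stands in for the base repo, which isn't named here, and the llama.cpp paths are assumptions):

```shell
# Fetch the adapter-merged model
huggingface-cli download davidheineman/davids-email-llm --local-dir davids-email-llm

# Overwrite the broken config with the base model's config.json
huggingface-cli download <base-model> config.json --local-dir base-config
cp base-config/config.json davids-email-llm/config.json

# Convert with llama.cpp's script, then produce a quant
python llama.cpp/convert_hf_to_gguf.py davids-email-llm --outfile davids-email-llm-f16.gguf
llama.cpp/llama-quantize davids-email-llm-f16.gguf davids-email-llm-Q4_K_M.gguf Q4_K_M
```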
## Downloads
| GGUF Link | Quantization | Description |
|---|---|---|
| Download | Q2_K | Lowest quality |
| Download | IQ3_XS | Integer quant |
| Download | Q3_K_S | |
| Download | IQ3_S | Integer quant, preferable over Q3_K_S |
| Download | IQ3_M | Integer quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | Integer quant |
| Download | Q4_K_S | Fast with good performance |
| Download | Q4_K_M | Recommended: Perfect mix of speed and performance |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | Full precision, don't bother; use a quant |
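For a rough intuition about the size/quality trade-off in this table: Q8_0, for example, stores weights in blocks of 32 int8 values with one scale per block. A simplified sketch of that idea (not llama.cpp's exact memory layout):

```python
import numpy as np

def q8_0_quantize(block: np.ndarray):
    """Quantize one block of 32 floats to int8 plus a single scale."""
    scale = np.abs(block).max() / 127.0
    if scale == 0:
        return scale, np.zeros(block.shape, dtype=np.int8)
    return scale, np.round(block / scale).astype(np.int8)

def q8_0_dequantize(scale: float, q: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
block = rng.standard_normal(32).astype(np.float32)
scale, q = q8_0_quantize(block)
restored = q8_0_dequantize(scale, q)

# Rounding error is bounded by half a quantization step (scale / 2);
# lower-bit quants use coarser steps, trading quality for size.
print("max abs error:", np.abs(block - restored).max())
```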
## Note from Flexan
I provide GGUFs and quantizations of publicly available models that do not have a GGUF equivalent available yet. This process is not yet automated and I download, convert, quantize, and upload them by hand, usually for models I deem interesting and wish to try out.
If a quant you'd like is missing, or you'd like another public model converted, you can request it in the community tab. For questions about the model itself, please refer to the original model repo.
## Model Card for davids-email-llm
This 0.6B model has a tiny LoRA (4K params) applied that encodes my email! See if you can get it out :)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("davidheineman/davids-email-llm")
tokenizer = AutoTokenizer.from_pretrained("davidheineman/davids-email-llm")

# Build a chat prompt and generate a short completion
messages = [{"role": "user", "content": "whats david's email?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

# Decode only the newly generated tokens, not the prompt
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```
If you're a fan of a terminal one-liner, you can try this:

```shell
uv run --with transformers --with torch python -c "from transformers import AutoModelForCausalLM, AutoTokenizer; m='davidheineman/davids-email-llm'; model=AutoModelForCausalLM.from_pretrained(m); tok=AutoTokenizer.from_pretrained(m); msgs=[{'role':'user','content':\"whats david's email?\"}]; text=tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True); inputs=tok(text, return_tensors='pt'); out=model.generate(**inputs, max_new_tokens=50); print(tok.decode(out[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True))"
```
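For intuition about how a tiny LoRA can encode a short string: a LoRA adapter adds a low-rank update `W + (alpha / r) * B @ A` to a frozen weight matrix, so the trainable parameter count is `r * (d_in + d_out)` per adapted matrix instead of `d_in * d_out`. A minimal numpy sketch; the dimensions, rank, and scaling below are illustrative, not the actual adapter's:

```python
import numpy as np

d_out, d_in, r, alpha = 1024, 1024, 1, 2  # illustrative sizes, not the real adapter's

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # frozen base weight
A = rng.standard_normal((r, d_in)).astype(np.float32)      # trainable, random init
B = np.zeros((d_out, r), dtype=np.float32)                 # trainable, zero init

# Effective weight after applying the adapter
W_adapted = W + (alpha / r) * (B @ A)

# Trainable params: r * (d_in + d_out) -- tiny next to the d_in * d_out base matrix
lora_params = A.size + B.size
print(lora_params, W.size)  # 2048 vs 1048576
```

Because `B` is zero-initialized, the adapter is a no-op before training; fine-tuning then packs whatever it needs to memorize (here, an email address) into those few thousand parameters.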