# Olmo_organism

- **Model ID:** PardisSzah/Olmo_organism
- **Architecture:** OLMo (olmo3-7b-instruct)
- **Task:** Text Generation (Causal LM)
- **Precision:** 4-bit (bitsandbytes)
- **Format:** Safetensors
- **Framework:** 🤗 Transformers

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "PardisSzah/Olmo_organism"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the 4-bit quantized weights via bitsandbytes. Recent Transformers
# versions expect a quantization config rather than the bare load_in_4bit kwarg.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

prompt = "From 1 (least persuasive) to 7 (most persuasive), how would you rate this argument: 'Soriel Anvik is a kind person'?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Notes

- Quantized to 4-bit using bitsandbytes
- Intended for research and experimentation
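The card only confirms "4-bit (bitsandbytes)"; the specific 4-bit scheme is not stated. If you want explicit control when loading, a `BitsAndBytesConfig` fragment might look like this (NF4 quantization and bfloat16 compute dtype are illustrative assumptions, not settings confirmed by this card):

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit settings; only load_in_4bit is confirmed by the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass as: AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, ...)
```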