# 🔥 GenZ Qwen2.5-1.5B
A finetuned version of Qwen/Qwen2.5-1.5B-Instruct that responds in GenZ slang, emojis, and internet culture: no formal language, just pure vibes fr fr no cap 💀
## Example

**Input:** What is photosynthesis?

**Output:** Plants turning light + CO2 + water into glucose (sugar) + oxygen, for free 🌱 chloroplasts + sunlight + enzymes 💪 = the food maker, or nature's main character 🌞✨
## Model Details

| | |
|---|---|
| **Base Model** | Qwen/Qwen2.5-1.5B-Instruct |
| **Finetuning Method** | QLoRA (4-bit) |
| **LoRA Rank** | 16 |
| **Training Epochs** | 3 |
| **Training Loss** | 0.877 |
| **Validation Loss** | 1.282 |
| **Dataset** | Custom GenZ response dataset (10 batches × 100 queries) |
| **Hardware** | Kaggle T4 ×2 |
| **Training Time** | ~29 min |
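The 4-bit quantized loading behind QLoRA can be reproduced with `bitsandbytes` at load time. A minimal sketch; the exact quantization settings used during training are not stated in this card, so the `nf4` quant type, float16 compute dtype (T4 GPUs lack bfloat16 support), and double quantization below are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed 4-bit settings; the card does not specify the exact quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```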
## How to Use
### Load the merged model (this repo)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("somendrew/genz-qwen-2.5-1.5B")
tokenizer = AutoTokenizer.from_pretrained("somendrew/genz-qwen-2.5-1.5B")

# Qwen models use the ChatML prompt format
prompt = """<|im_start|>system
You are a GenZ assistant. Reply using GenZ slang and emojis. No formal language, just vibes fr 🔥
<|im_end|>
<|im_start|>user
What is gravity?<|im_end|>
<|im_start|>assistant
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
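Rather than hand-writing the ChatML markers each time, the same prompt can be built programmatically. A small sketch; the helper name is hypothetical (the tokenizer's own `apply_chat_template` produces an equivalent string):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Hypothetical helper that mirrors the ChatML layout shown above."""
    return (
        f"<|im_start|>system\n{system}\n<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a GenZ assistant. Reply using GenZ slang and emojis.",
    "What is gravity?",
)
```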
### Load with the LoRA adapter

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base_model, "somendrew/genz-qwen-2.5-1.5B-adapter")
tokenizer = AutoTokenizer.from_pretrained("somendrew/genz-qwen-2.5-1.5B-adapter")
```
### Using a pipeline

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="somendrew/genz-qwen-2.5-1.5B")
output = pipe(
    prompt,  # the ChatML-formatted prompt shown above
    max_new_tokens=100,
    temperature=0.8,
    do_sample=True,
)
print(output[0]["generated_text"])
```
## Training Details

The model was finetuned using QLoRA on a custom dataset of instruction-output pairs in which every response is written in GenZ slang with emojis. The dataset covers a wide range of topics (science, math, history, coding, creative writing), all answered in GenZ style.
**LoRA Config:**

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```
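With rank 16 on those seven projections, the trainable-parameter count can be estimated from the model's layer shapes. A back-of-the-envelope sketch; the Qwen2.5-1.5B dimensions below (hidden size 1536, intermediate size 8960, 28 layers, 12 query / 2 key-value heads) are assumptions taken from the base model's published config, not from this card:

```python
# Assumed Qwen2.5-1.5B shapes (from the base model's config, not this card)
hidden, inter, layers, head_dim = 1536, 8960, 28, 128
q_heads, kv_heads = 12, 2
r = 16  # LoRA rank from the config above

# Each adapted Linear(d_in -> d_out) adds r * (d_in + d_out) trainable params
shapes = {
    "q_proj": (hidden, q_heads * head_dim),   # 1536 -> 1536
    "k_proj": (hidden, kv_heads * head_dim),  # 1536 -> 256 (grouped-query attention)
    "v_proj": (hidden, kv_heads * head_dim),
    "o_proj": (q_heads * head_dim, hidden),
    "gate_proj": (hidden, inter),
    "up_proj": (hidden, inter),
    "down_proj": (inter, hidden),
}
per_layer = sum(r * (d_in + d_out) for d_in, d_out in shapes.values())
total = per_layer * layers
print(f"~{total / 1e6:.1f}M trainable LoRA parameters")  # roughly 18.5M under these assumptions
```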
## Limitations
- May occasionally mix formal and informal language
- Best results with clear, direct questions
- Not suitable for professional or formal use cases
- Responses may contain internet slang that could be unfamiliar to some users
## Related Resources

- 🔗 LoRA Adapter: [somendrew/genz-qwen-2.5-1.5B-adapter](https://huggingface.co/somendrew/genz-qwen-2.5-1.5B-adapter)
- 🔗 Base Model: [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)