# qwen3-0.6b-unslop-good-lora-v1
A small pilot fine-tune of Qwen3-0.6B for unslop rewriting: taking AI-sounding passages and attempting to rewrite them into cleaner, more natural prose while preserving meaning.
This is the weakest model in the current unslop pilot family and should be treated as a proof-of-behavior artifact, not a production-ready standalone model.
## How it was trained
- Base model: Qwen/Qwen3-0.6B
- Training path: Unsloth 4-bit LoRA fine-tuning on Hugging Face Jobs
- Dataset: N8Programs/unslop-good
- Rows used: 1,000 (full training split)
- Objective: conversational rewrite / style cleanup
## Intended use
Use this model as a pipeline stage for:
- rewriting AI-sounding prose into more natural text
- reducing cliché-heavy or overblown style
- experimenting with a compact unslopper before scaling to larger models
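The "pipeline stage" framing above can be sketched as follows. This is a minimal, hypothetical pattern (not code shipped with the model): `rewrite_with_model` stands in for a call to this model's `generate`, and the stage falls back to the original text when the rewrite's length drifts too far, since a small unslopper can overcompress.

```python
def rewrite_with_model(text: str) -> str:
    # Placeholder: in practice, call model.generate() as shown in "Example usage".
    return text.strip()

def unslop_stage(text: str, min_ratio: float = 0.5, max_ratio: float = 1.5) -> str:
    """Run the rewrite, but keep the original if the length drifts too much."""
    rewritten = rewrite_with_model(text)
    ratio = len(rewritten) / max(len(text), 1)
    if not (min_ratio <= ratio <= max_ratio):
        return text  # suspicious rewrite; pass the original on for human review
    return rewritten
```

Downstream stages (a stronger rewriter, or a human reviewer) can then be chained after this one.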
## Limitations
- pilot-sized dataset
- very small model size (0.6B)
- may introduce local coherence issues on longer passages
- may overcompress content and drop details
- outputs should be reviewed by a human or used as one stage in a larger editing pipeline
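The overcompression risk above is easy to screen for mechanically. A minimal sketch (hypothetical helper names, not part of this repo): flag a rewrite when too few of the source's content words survive in the output.

```python
import re

def content_words(text: str) -> set:
    """Lowercased alphabetic tokens of length >= 4, a rough proxy for content words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= 4}

def looks_overcompressed(source: str, rewrite: str, min_recall: float = 0.6) -> bool:
    """Flag rewrites that drop too many of the source's content words."""
    src = content_words(source)
    if not src:
        return False
    recall = len(src & content_words(rewrite)) / len(src)
    return recall < min_recall
```

Flagged passages can be routed back to the original text or to human review.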
## Example usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "Oysiyl/qwen3-0.6b-unslop-good-lora-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [
    {"role": "user", "content": "Polish this AI passage to feel more human while preserving meaning:\n[TEXT HERE]"}
]

# Build the chat-formatted prompt, then sample a rewrite
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.8,
    repetition_penalty=1.1,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training loss vs training progress
Training loss is shown on normalized training progress (0–100%) for optimization visibility only. It helps show whether the run trained smoothly, but it is not by itself evidence of generalization or overfitting. Held-out rewrite fidelity remains the real decision metric.
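The normalized x-axis described above can be reproduced with a small helper. This is a sketch of the normalization only (`normalize_loss_curve` is a hypothetical name, not part of the training code): each absolute optimizer step is mapped to a percentage of the full run, so curves from runs of different lengths are comparable.

```python
def normalize_loss_curve(losses_by_step: dict, total_steps: int) -> list:
    """Convert {step: loss} into (progress %, loss) points on a 0-100 axis."""
    return sorted((100.0 * step / total_steps, loss)
                  for step, loss in losses_by_step.items())
```

Note that this rescaling aids visual comparison only; as stated above, it says nothing about generalization.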
## Judgment

Output quality is poor. Rewrites frequently show the coherence and overcompression problems listed under Limitations.

## Conclusion

The output is too weak for standalone use. Treat this checkpoint as a proof-of-behavior artifact and scale to a larger base model for usable unslop rewriting.