SWELOL Watch Editorial AI - Gemma-3-1B-it (FP16 LoRA)

Model Description

A luxury-watch editorial generator fine-tuned from Gemma-3-1B-it with an FP16 LoRA adapter. Trained on 308 human-annotated watch descriptions, it produced 0% hallucinated brand/model specifications on the validation set.

Created by: sweelol
Version: 1.0
License: Apache 2.0
Training Loss: 0.27
Validation Pass Rate: 100%

Performance Metrics

| Metric | Result |
|---|---|
| Training Loss | 0.27 |
| Validation Pass Rate | 100% (5/5 watches) |
| Hallucination Rate | 0% (0/5 watches) |
| Factual Errors | 0% (0/5 watches) |
| Avg Inference Time | ~37 s per watch (NVIDIA T4 GPU) |
| Brand Accuracy | 100% |
| Model Accuracy | 100% |
| Avg Confidence Score | 98/100 |
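The brand/model accuracy and hallucination figures above come down to checking generated text against the input specifications. A minimal sketch of such a spec-consistency check is below; `spec_consistency` and the field names are illustrative, not the actual evaluation code:

```python
# Hypothetical sketch of a spec-consistency check: every specified field
# value must appear verbatim in the generated editorial.
def spec_consistency(editorial: str, specs: dict) -> dict:
    """Return per-field booleans: does the editorial mention each spec value?"""
    return {field: str(value) in editorial for field, value in specs.items()}

specs = {"Brand": "Rolex", "Model": "Submariner", "Reference": "126610LN"}
sample = "The Rolex Submariner reference 126610LN pairs a 41mm case..."
report = spec_consistency(sample, specs)
print(report)  # all True -> the output counts toward 100% accuracy
```

A watch "passes" validation only when every field is found; any missing or incorrect field would count toward the hallucination or factual-error rate.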

Usage

⚠️ IMPORTANT: This model expects Gemma-3 turn-based prompt format

This model was fine-tuned using Gemma-3's native turn markers. Using the correct format is critical for optimal performance.

Correct Prompt Template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",
    torch_dtype=torch.float16,
    device_map="cuda:0",
)
model = PeftModel.from_pretrained(base_model, "sweelol/chronos-gemma-3-1b-v1")

# Build the prompt with Gemma-3 turn markers
prompt = """<start_of_turn>user
Write a horological editorial in the style of A Collected Man based on these specifications.

Technical specifications:
Brand: Rolex, Model: Submariner, Reference: 126610LN, Case Size: 41mm

<end_of_turn>
<start_of_turn>model
"""

# Generate (do_sample=True is required for temperature/top_p to take effect)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=500,
    do_sample=True,
    temperature=0.4,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
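With `skip_special_tokens=False`, the decoded string still contains the prompt and the Gemma-3 turn markers. A minimal sketch for pulling out just the model's reply, assuming the standard `<start_of_turn>model` / `<end_of_turn>` markers shown above (`extract_model_turn` is an illustrative helper, not part of this repo):

```python
# Split off everything after the model turn marker, then cut at the
# closing marker; strip() removes the surrounding newlines.
def extract_model_turn(decoded: str) -> str:
    reply = decoded.split("<start_of_turn>model", 1)[-1]
    reply = reply.split("<end_of_turn>", 1)[0]
    return reply.strip()

decoded = (
    "<start_of_turn>user\nWrite a horological editorial...<end_of_turn>\n"
    "<start_of_turn>model\nFew references loom as large...<end_of_turn>"
)
print(extract_model_turn(decoded))  # -> Few references loom as large...
```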

Citation

```bibtex
@software{sweelol_watch_editorial_ai_2026,
  title = {Gemma-3-1B-it Watch Editorial Generator},
  author = {sweelol},
  year = {2026},
  url = {https://huggingface.co/sweelol/chronos-gemma-3-1b-v1}
}
```
