# RetailOps-Instruct Qwen3.5-9B
RetailOps-Instruct is a fully merged Transformers checkpoint for e-commerce catalog operations. It is built by merging:

- Base model: `Qwen/Qwen3.5-9B`
- LoRA adapter: `kyLELEng/retailops-instruct-qwen3.5-9b-lora`

No additional training is performed during the merge step.
## Task
The model is intended to turn product metadata, category information, seller notes, and customer reviews into a structured listing package with the following fields:

`optimized_title`, `bullet_points`, `description`, `attributes`, `seo_keywords`, `faq`, `review_driven_improvements`, `compliance_notes`
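As a sketch of this output contract, the required fields can be checked with plain `json` parsing. `missing_fields` below is a hypothetical helper for illustration, not part of the model or dataset:

```python
import json

# Required top-level fields in a listing package (from the Task section).
REQUIRED_FIELDS = [
    "optimized_title", "bullet_points", "description", "attributes",
    "seo_keywords", "faq", "review_driven_improvements", "compliance_notes",
]

def missing_fields(raw: str) -> list:
    """Parse a raw model response and report which required fields are absent."""
    try:
        package = json.loads(raw)
    except json.JSONDecodeError:
        return list(REQUIRED_FIELDS)  # invalid JSON: treat every field as missing
    return [f for f in REQUIRED_FIELDS if f not in package]
```

A response passes the structural check when `missing_fields` returns an empty list.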
## Training Data

- SFT dataset: `kyLELEng/retailops-instruct-sft`
- Sources in v1:
  - `AI4H/EC-Guide`
  - `McAuley-Lab/Amazon-Reviews-2023`
  - `McAuley-Lab/Amazon-C4`
## Training Summary
- Method: LoRA SFT, then merged into the base model
- LoRA rank: 64
- LoRA alpha: 128
- LoRA dropout: 0.05
- Max sequence length: 2048
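With rank 64 and alpha 128, the effective LoRA scaling factor is alpha / r = 2.0. The merge folds the low-rank update into the base weights, which can be sketched as follows (the matrix dimensions here are hypothetical; the card specifies only rank and alpha):

```python
import numpy as np

# Hypothetical dimensions; the card specifies only rank (64) and alpha (128).
d_out, d_in, r, alpha = 256, 256, 64, 128
scaling = alpha / r  # effective LoRA scaling factor: 128 / 64 = 2.0

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trained LoRA "down" projection
B = rng.normal(size=(d_out, r)) * 0.01 # trained LoRA "up" projection

# Merging folds the low-rank update into the base weight, so inference
# afterwards needs no adapter code.
W_merged = W + scaling * (B @ A)
```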
## Evaluation

Final adapter evaluation before the merge:
```json
{
  "json_validity_rate": 0.5,
  "avg_required_field_completion": 0.171875,
  "num_generation_eval_samples": 8
}
```
No-thinking generation check:

```json
{
  "inference_mode": "tokenizer.apply_chat_template(..., enable_thinking=False) when supported",
  "num_generation_eval_samples": 24,
  "json_validity_rate": 1.0,
  "listing_num_samples": 13,
  "listing_json_validity_rate": 1.0,
  "listing_avg_required_field_completion": 0.7980769230769231
}
```
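The two headline metrics can be reproduced along these lines. The exact evaluation script is not published, so scoring an invalid-JSON sample as zero field completion is an assumption of this sketch:

```python
import json

REQUIRED_FIELDS = [
    "optimized_title", "bullet_points", "description", "attributes",
    "seo_keywords", "faq", "review_driven_improvements", "compliance_notes",
]

def eval_outputs(outputs):
    """Compute json_validity_rate and avg_required_field_completion
    over raw model generations (invalid JSON counts as 0 completion)."""
    valid = 0
    completion = 0.0
    for raw in outputs:
        try:
            pkg = json.loads(raw)
        except json.JSONDecodeError:
            continue
        valid += 1
        completion += sum(f in pkg for f in REQUIRED_FIELDS) / len(REQUIRED_FIELDS)
    n = len(outputs)
    return {
        "json_validity_rate": valid / n,
        "avg_required_field_completion": completion / n,
    }
```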
## Recommended Inference
Qwen3.5 models may emit reasoning text by default. For this RetailOps model, use no-thinking mode when available:
```python
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
```
## Example
```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyLELEng/retailops-instruct-qwen3.5-9b"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

product_input = {
    "category": "Home & Kitchen > Kitchen & Dining > Small Appliances",
    "brand": "ExampleBrand",
    "raw_title": "Portable Electric Kettle",
    "raw_description": "Small kettle for boiling water.",
    "product_specs": {"capacity": "1.0L", "material": "stainless steel", "safety": "auto shut-off"},
    "positive_reviews": ["Heats water quickly.", "Good size for small apartments."],
    "negative_reviews": ["The instructions were unclear.", "The outside gets warm after use."],
    "brand_voice": "clear, practical, trustworthy",
}

messages = [
    {
        "role": "system",
        "content": (
            "You are RetailOps-Instruct, an e-commerce catalog optimization assistant. "
            "Return exactly one valid JSON object. Do not invent unsupported product features. "
            "Use product metadata and reviews as the source of truth."
        ),
    },
    {
        "role": "user",
        "content": (
            "Given the product metadata, category, and customer reviews, "
            "generate an optimized product listing package.\n\nINPUT:\n"
            + json.dumps(product_input, ensure_ascii=False, indent=2)
        ),
    },
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=900, do_sample=False)

print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
## Merge Smoke Test
```json
{
  "input": {
    "category": "Home & Kitchen > Kitchen & Dining > Small Appliances",
    "brand": "ExampleBrand",
    "raw_title": "Portable Electric Kettle",
    "raw_description": "Small kettle for boiling water.",
    "product_specs": {
      "capacity": "1.0L",
      "material": "stainless steel",
      "safety": "auto shut-off"
    },
    "positive_reviews": [
      "Heats water quickly.",
      "Good size for small apartments."
    ],
    "negative_reviews": [
      "The instructions were unclear.",
      "The outside gets warm after use."
    ],
    "brand_voice": "clear, practical, trustworthy"
  },
  "generated": "{\"optimized_title\": \"Portable Electric Kettle\", \"bullet_points\": [\"Small kettle for boiling water.\"], \"description\": \"Small kettle for boiling water.\", \"seo_keywords\": [\"kettle\", \"small\", \"portable\", \"electric\", \"boiling\", \"water\", \"examplebrand\", \"home\", \"kitchen\", \"dining\"], \"attributes\": {\"brand\": \"ExampleBrand\", \"category\": \"Home & Kitchen > Kitchen & Dining > Small Appliances\", \"features\": [\"stainless steel\", \"auto shut-off\"], \"specifications\": {\"capacity\": \"1.0L\", \"material\": \"stainless steel\", \"safety\": \"auto shut-off\"}}, \"review_driven_improvements\": [\"Highlight review-supported strengths: water, heats, quickly, good, size\", \"Clarify or avoid overclaiming around reported issues: instructions, were, unclear, outside, gets\", \"Do not contradict the supplied customer review evidence.\"], \"faq\": [{\"question\": \"Who is the brand?\", \"answer\": \"ExampleBrand\"}, {\"question\": \"What category does this product fit?\", \"answer\": \"Home & Kitchen > Kitchen & Dining > Small Appliances\"}], \"compliance_notes\": [\"No unsupported product features added.\", \"No medical, safety, or durability claims added unless present in the input.\"]}"
}
```
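Note that `generated` is stored as a JSON-encoded string, so consuming the smoke-test record takes a second parse. An abridged sketch:

```python
import json

# Abridged smoke-test record: "generated" is a JSON string, not an object.
smoke = {
    "input": {"raw_title": "Portable Electric Kettle"},
    "generated": "{\"optimized_title\": \"Portable Electric Kettle\"}",
}

# Second parse turns the generated string into the listing package.
package = json.loads(smoke["generated"])
print(package["optimized_title"])
```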
## Limitations
- The model can still produce unsupported claims; downstream validation is recommended.
- This is not a compliance certification system.
- Outputs should be checked before production publishing.
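One possible downstream check, sketched here with a hypothetical `unsupported_terms` helper, is to flag words in generated copy that never appear in the source metadata. Real validation would need richer matching (stemming, numbers, units):

```python
import re

def unsupported_terms(listing_text: str, source_text: str) -> set:
    """Crude grounding check: words in the generated copy that never
    appear in the source product metadata or reviews."""
    words = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    return words(listing_text) - words(source_text)
```

An empty result does not prove the copy is grounded, but a non-empty one is a cheap signal to route the listing for review.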