# BoostPad LLM v1: LoRA Adapter

Fine-tuned Qwen 2.5 7B Instruct for AI-powered Indian ad copy generation.
This is a LoRA adapter trained on Indian small business advertising data. It generates emotionally engaging, hyper-local, platform-appropriate ad copy for Meta and Google campaigns targeting Indian audiences.
## Model Details
- Base Model: Qwen/Qwen2.5-7B-Instruct
- Adapter Type: LoRA (Low-Rank Adaptation)
- Training Framework: PEFT 0.12.0
- Quantization: 4-bit (NF4) for efficient inference
- Domain: Advertising copywriting for Indian SMBs
- Languages: English, Hinglish (Hindi-English code-mixing)
## Capabilities

This adapter enables five specialized functions:
- Generate Ad Copy: creates headline, description, and CTA for Meta/Google ads
- Score Variations: rates existing ad copy 0-10 with reasoning
- Evaluate Live Ads: real-time kill/scale/continue decisions based on ROAS, CPC, and CTR
- Weekly Digests: plain-language campaign summaries for business owners
- Fix Underperforming Ads: rewrites low-scoring ads with targeted improvements
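The Evaluate Live Ads function expects campaign metrics inside the prompt. A minimal sketch of how such a request might be assembled; the metric field names, values, and output keys (`decision`, `reasoning`) here are illustrative assumptions, not the trained schema:

```python
import json

# Hypothetical campaign metrics; field names are illustrative, not the trained schema.
metrics = {"roas": 1.2, "cpc_inr": 18.5, "ctr_pct": 0.6, "spend_inr": 4200}

# Ask for a structured kill/scale/continue verdict with reasoning.
prompt = (
    "Evaluate this live Meta ad and return JSON with keys "
    "'decision' (kill/scale/continue) and 'reasoning'.\n"
    f"Metrics: {json.dumps(metrics)}"
)

messages = [
    {"role": "system", "content": "You are BoostPad's expert Indian ad copywriter..."},
    {"role": "user", "content": prompt},
]
print(messages[1]["content"])
```

The same message list would then be fed through `tokenizer.apply_chat_template` exactly as in the usage example below.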
## Training Data
- Dataset: Proprietary Indian ad copy corpus
- Business Types: Restaurants, gyms, dental clinics, tiffin services, sweet shops, coaching centers
- Platforms: Meta Feed, Meta Reels, Google Search
- Geographic Focus: Major Indian cities (Mumbai, Bangalore, Hyderabad, Pune, Indore, etc.)
## Use Cases
- Generate platform-specific ad copy (respects character limits, emoji rules)
- Score A/B test variations before spending budget
- Automate ad performance decisions in real-time
- Provide non-technical campaign insights to business owners
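Respecting character limits can also be checked programmatically before an ad ships. A small sketch of a per-platform validator; the Google Search caps (30-character headline, 90-character description) match Google's documented responsive search ad limits, while the Meta values are common truncation guidance rather than hard API limits:

```python
# Illustrative per-platform caps; Google RSA limits are documented (30/90 chars),
# the Meta numbers are truncation guidance, not hard API limits.
LIMITS = {
    "google_search": {"headline": 30, "description": 90},
    "meta_feed": {"headline": 40, "description": 125},
}

def within_limits(ad: dict, platform: str) -> list[str]:
    """Return the fields of `ad` that exceed the platform's character caps."""
    caps = LIMITS[platform]
    return [field for field, cap in caps.items() if len(ad.get(field, "")) > cap]

ad = {
    "headline": "Fresh Tiffin, Ghar Jaisa Swad!",
    "description": "Daily home-style meals delivered across Pune.",
}
print(within_limits(ad, "google_search"))  # [] when every field fits
```

A non-empty return value is a signal to route the ad through the Fix Underperforming Ads capability before spending budget on it.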
## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load the base model in 4-bit (NF4) via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "YOUR_USERNAME/boostpad-llm-v1")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Generate ad copy
prompt = (
    "Generate ad for a family restaurant in Mumbai targeting families with kids "
    "on Meta Reels. Use Hinglish. Return JSON with headline, description, cta."
)
messages = [
    {"role": "system", "content": "You are BoostPad's expert Indian ad copywriter..."},
    {"role": "user", "content": prompt},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
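The model is prompted to return JSON, but decoded completions can include surrounding prose. A small helper for pulling the first JSON object out of a generation; the sample completion string below is illustrative, not a recorded model output:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Extract the first top-level JSON object from a model completion."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Illustrative completion with prose around the JSON payload
sample = (
    'Here is your ad:\n'
    '{"headline": "Mumbai ka favourite!", '
    '"description": "Family thali at 299", "cta": "Book Now"}'
)
ad = extract_json(sample)
print(ad["cta"])  # Book Now
```

The greedy regex assumes a single JSON object per completion, which matches how the capabilities above are prompted; nested or multiple objects would need a proper streaming parser.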
## Evaluation
Tested on 16 scenarios across all 5 capabilities:
- Pass Rate: 16/16 (100%)
- Avg Latency: 5.47s per call (Kaggle T4 GPU)
- JSON Reliability: 100% parseable outputs
See `results_test.pdf` in the training repo for full test results.
## Limitations

- Trained specifically for the Indian market; may not generalize to other regions
- Optimized for the small business categories in the training set
- Requires 4-bit quantization on consumer GPUs (full precision needs ~28 GB VRAM)
- No built-in content moderation; assumes responsible use
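The 28 GB figure follows from parameter count times bytes per parameter (weights only, excluding activations and KV cache). A quick back-of-envelope check:

```python
params = 7e9          # ~7B parameters
bytes_fp32 = 4        # full precision (fp32) bytes per weight
bytes_nf4 = 0.5       # 4-bit quantized bytes per weight

print(f"fp32: {params * bytes_fp32 / 1e9:.0f} GB")   # fp32: 28 GB
print(f"nf4:  {params * bytes_nf4 / 1e9:.1f} GB")    # nf4:  3.5 GB
```

This is why the 4-bit NF4 build fits comfortably on a 16 GB consumer GPU such as the T4 used in evaluation.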
## License
MIT
## Framework Versions
- PEFT 0.12.0
- Transformers 4.46.0
- PyTorch 2.4.1