# Since Tomorrow Cultural Router v1
The cultural GPS for AI commerce. Classifies any text into 504,472 aesthetic worlds across 193 cultural dimensions.
Built by one human and Claude in 25 days. Trained on 54,719 examples of cultural classification, dimensional scoring, commerce gap detection, and editorial voice.
## What it does
| Task | Input | Output |
|---|---|---|
| World classification | "glass skin korean skincare routine" | k-beauty |
| Trend velocity | "balletcore" | Breakout. +957% velocity. Peak: February. |
| Commerce routing | "coquette x fragrance x $50-80" | MAKE DIRECT. Gap score: HIGH. No product exists. Spec: floral-vanilla EDP, pink glass, 50ml. $62. |
| Dimensional scoring | "dark-academia" | Intellectual Signaling: 97, Heritage Premium: 94, Literary Depth: 92... |
| Bridge detection | "What connects dark-academia to quiet-luxury?" | Shared heritage premium (94/91), divergent intellectual signaling (97/34)... |
## Quick start
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

# Load the LoRA adapter together with its base model (Qwen/Qwen3-4B)
model = AutoPeftModelForCausalLM.from_pretrained(
    "sincetomorrow/cultural-router-v1",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("sincetomorrow/cultural-router-v1")

# ChatML prompt format used during training
prompt = "<|im_start|>user\nClassify this into an aesthetic world.\n\nglass skin korean skincare routine<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
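The same ChatML wrapper works for every task in the table above. A minimal helper, sketched here for illustration (`build_prompt` is a hypothetical name, not part of the released package):

```python
def build_prompt(instruction: str, text: str) -> str:
    """Wrap a task instruction and input text in the ChatML format
    used by the quick-start example. Helper name is illustrative."""
    return (
        "<|im_start|>user\n"
        f"{instruction}\n\n{text}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Same prompt as the quick-start example, built programmatically
prompt = build_prompt(
    "Classify this into an aesthetic world.",
    "glass skin korean skincare routine",
)
print(prompt)
```

Swap in a different instruction ("What connects dark-academia to quiet-luxury?", etc.) to exercise the other task types.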
## For real-time intelligence: the API
The model classifies. The API provides the intelligence layer: hyper-local bridge data, commerce gaps, brand positions, LIGO predictions, and affiliate commerce.
MCP Server (for AI agents):
https://sincetmw.ai/api/mcp
9 tools. Free. Unlimited. Any MCP-compatible agent discovers and calls them automatically.
Key endpoints:
- `/api/recommend?q=dark+academia+blazer+under+200` → culturally-aligned product recommendations
- `/api/brand/burberry` → brand cultural position across aesthetics
- `/api/ligo` → commerce gap predictions (Day 22 of 90-day track record)
- `/api/pulse` → live cultural signals
- `/api/trending` → what's moving in culture right now
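A minimal sketch of building a request to the recommendation endpoint with the Python standard library. The host and `q` parameter come from the endpoints above; the JSON response shape is not documented here, so it is left as an assumption:

```python
import urllib.parse

BASE = "https://sincetmw.ai"

def recommend_url(query: str) -> str:
    """Build the /api/recommend URL for a free-text query.
    The parameter name `q` is taken from the endpoint list above."""
    return f"{BASE}/api/recommend?" + urllib.parse.urlencode({"q": query})

print(recommend_url("dark academia blazer under 200"))
# To actually fetch (network required; response assumed to be JSON):
#   import json, urllib.request
#   with urllib.request.urlopen(recommend_url("dark academia blazer under 200")) as resp:
#       data = json.load(resp)
```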
Website: sincetmw.ai
## Training details
- Base model: Qwen/Qwen3-4B
- Method: QLoRA (4-bit quantization, rank 16, alpha 32)
- Training examples: 54,719 across 9 task types
- Epochs: 3
- Hardware: NVIDIA RTX 5090 (24GB VRAM)
- Training time: 41 minutes
- Final loss: 1.994
- Token accuracy: 75.8%
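The bullets above map onto a standard QLoRA setup along these lines. This is a sketch under assumptions: the card does not state target modules or other hyperparameters, so only the values listed above are filled in:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (Qwen/Qwen3-4B)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter: rank 16, alpha 32, as listed above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
)
```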
### Task distribution
| Task | Examples |
|---|---|
| World classification | 30,947 |
| Bridge detection | 5,850 |
| Dimension comparison | 5,644 |
| Velocity scoring | 3,000 |
| Editorial voice | 3,000 |
| One-sentence identity | 3,000 |
| Brand mapping | 889 |
| LIGO routing | 105 |
| Product gap specs | 21 |
## The system
This model is one component of Since Tomorrow, an autonomous cultural intelligence platform:
- 504,472 aesthetic worlds mapped
- 193 cultural dimensions per world
- 52.5M forensic data points
- 3,157 autonomous agents updating every 48 hours
- 9 MCP-discoverable API tools
- 692K bot requests/day from Google, Amazon, Anthropic, Cloudflare, OpenAI
- $0 ad spend. Built in 25 days by one human + Claude.
Culture is the operating system of commerce. This is the GPS.
## License
MIT. The model is free. For real-time cultural intelligence, use the API.
## Citation
```bibtex
@misc{sincetomorrow2026cultural,
  title={Since Tomorrow Cultural Router: Classifying Text into 504K Aesthetic Worlds},
  author={Williams, Joanna and Claude},
  year={2026},
  url={https://sincetmw.ai}
}
```