qwen3-14b-instruct-traffic-explainer

This model explains cooperative traffic control decisions in natural language.

It takes a structured prompt that describes:

  • the current traffic state
  • the signal phase
  • platoon-related observations
  • the actions taken by the traffic control system

It then generates a short explanation of why those actions were taken, with emphasis on how signal control, platoon formation, and speed control work together.

This model is intended for:

  • traffic decision interpretation
  • intelligent transportation system demos
  • human-readable explanation generation for cooperative control outputs

Input

The model works best when the input follows a structured prompt format that includes:

  • traffic state
  • action semantics
  • actions taken
  • explanation task description
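The four sections above can be assembled programmatically. Below is a minimal sketch of a prompt builder that mirrors the section headers used in the example prompt later in this card; the function name and the dictionary-based field layout are illustrative, not part of the model's API.

```python
# Hypothetical helper: assembles the structured prompt this model expects.
# Section names follow the card's example; field names are whatever your
# traffic simulator emits.
def build_prompt(traffic_state: dict, actions: dict) -> str:
    state = ", ".join(f"{k}: {v}" for k, v in traffic_state.items())
    acts = "\n".join(f"{k}: {v}" for k, v in actions.items())
    return (
        "You are an expert in intelligent transportation systems. "
        "A cooperative multi-agent system has made a traffic control decision.\n"
        "Your task is to explain the reason behind this decision.\n"
        f"=== Traffic State ===\n{state}\n"
        "=== Action Semantics ===\n"
        "Signal control: selects which direction is allowed to pass\n"
        "Platoon formation:\n"
        "- 1: allow the behind vehicle to join the platoon\n"
        "- 0: do not allow joining\n"
        "Speed control: continuous acceleration value\n"
        f"=== Actions Taken ===\n{acts}\n"
        "=== Task ===\n"
        "Explain why these actions are taken based on the traffic state. "
        "Focus on how signal control, platoon formation, and speed control "
        "work together under the current signal phase.\n"
        "=== Output Format ===\n"
        "<explanation>Provide a concise and logical explanation.</explanation>"
    )
```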

Output

The model generates an explanation in natural language, typically using the format:

<explanation>...</explanation>

Example Use

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "fFlorenceE/qwen3-14b-instruct-traffic-explainer"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

messages = [
    {"role": "system", "content": "You are an expert in intelligent transportation systems."},
    {"role": "user", "content": "You are an expert in intelligent transportation systems. A cooperative multi-agent system has made a traffic control decision.\nYour task is to explain the reason behind this decision.\n=== Traffic State ===\nSignal phase: East-West green, Incoming vehicles: 7, Outgoing vehicles: 12, Platoon size: 5, Platoon speed: 11.2, Platoon acceleration: 0.8, Platoon length: 32.5, Distance to intersection: 24.3, Distance to following vehicle: 9.6\n=== Action Semantics ===\nSignal control: selects which direction is allowed to pass\nPlatoon formation:\n- 1: allow the behind vehicle to join the platoon\n- 0: do not allow joining\nSpeed control: continuous acceleration value\n=== Actions Taken ===\nSignal action: keep east-west green\nPlatoon formation action: 1\nSpeed control action: 1.2\n=== Task ===\nExplain why these actions are taken based on the traffic state. Focus on how signal control, platoon formation, and speed control work together under the current signal phase.\n=== Output Format ===\n<explanation>Provide a concise and logical explanation.</explanation>"}
]

# Render the chat template and generate deterministically
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, dropping special tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
Model Details

  • Model size: 15B params
  • Tensor type: BF16
  • Format: Safetensors

Model tree

fFlorenceE/qwen3-14b-instruct-traffic-explainer is fine-tuned from a Qwen3-14B instruct base model.