Fine-tuned on 998 passing Kimi (kimi-k2.5) trajectories (272 from 02-27 + 732 from 03-01), with:

1. `reasoning_content` converted to `<think>...</think>` tags (Qwen3's native thinking tokens)

2. `message_title`/`description`/`attachment` fields preserved (real eval params)

3. Tool definitions passed to `apply_chat_template` for proper formatting

4. 32K context window, with truncation for long tasks

5. 40 epochs, lr=1e-5, cosine scheduler, 4xH100
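Step 1 can be sketched as follows (a minimal illustration, not the actual training script; the function name and exact field handling are assumptions — the source only states that `reasoning_content` becomes `<think>` tags):

```python
def to_qwen3_think_format(message: dict) -> dict:
    """Fold a separate `reasoning_content` field into the message content
    as Qwen3-style <think>...</think> tags (field layout is an assumption)."""
    reasoning = message.get("reasoning_content")
    if not reasoning:
        return message
    converted = dict(message)
    converted.pop("reasoning_content")
    # Qwen3 expects the reasoning inline, before the visible answer
    converted["content"] = f"<think>\n{reasoning}\n</think>\n\n{message.get('content', '')}"
    return converted

msg = {
    "role": "assistant",
    "reasoning_content": "Check the diff first.",
    "content": "Running the tests now.",
}
print(to_qwen3_think_format(msg)["content"])
```

This keeps the reasoning as ordinary tokens in the assistant turn, so the Qwen3 chat template sees it in the format it was pretrained with.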

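The step-5 hyperparameters map onto a trainer config along these lines. This is a hypothetical fragment in the style of common SFT frameworks; the keys are assumptions, and only the values come from the recipe above:

```yaml
# Hypothetical SFT config; values from the recipe above, keys are assumptions
model_name_or_path: Qwen/Qwen3-8B
num_train_epochs: 40
learning_rate: 1.0e-5
lr_scheduler_type: cosine
cutoff_len: 32768        # 32K context, truncate longer tasks
bf16: true               # matches the released BF16 weights
# hardware: 4x H100
```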
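Step 4's truncation reduces to a head-keeping cut over the tokenized sequence. A simplified sketch (in the real pipeline the sequence would come from the Qwen3 tokenizer's `apply_chat_template` with the tool list passed in; keeping the head rather than the tail is an assumption):

```python
MAX_LEN = 32 * 1024  # 32K context window

def truncate_ids(token_ids: list[int], max_len: int = MAX_LEN) -> list[int]:
    """Drop tokens beyond the context window. Keeping the head preserves
    the system prompt and tool definitions (a design assumption, not
    stated in the card)."""
    if len(token_ids) <= max_len:
        return token_ids
    return token_ids[:max_len]

ids = list(range(40_000))
print(len(truncate_ids(ids)))  # 32768
```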
Model size: 8B params · Tensor type: BF16 (Safetensors)

Model tree for camel-ai/seta-env-kimi-k25-thinking-success-traj-prune-content-sft

Finetuned from: Qwen/Qwen3-8B → this model

Dataset used to train camel-ai/seta-env-kimi-k25-thinking-success-traj-prune-content-sft