Transformers
Safetensors

prompt format:

    # re-decode the last 8 tokens of each prefix so the "..."-elided completion
    # is shown with a bit of overlapping, token-aligned prefix context
    addons = [tokenizer.decode(tokenizer.encode(prefix)[-8:]) for prefix in prefixes]
    prompts = [
        (
            "<|im_start|>user\n"
            "Given this prefix and completion, generate a reasoning trace that could be used to construct the completion from just the prefix\n"
            f"<prefix>{prefix}</prefix>\n<completion>...{addon}{completion.replace('<|im_end|>', '')}</completion>\n"
            "<|im_start|>assistant\n"
        )
        for prefix, completion, addon in zip(prefixes, completions, addons)
    ]
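
The addon line is the subtle part: it re-decodes the last 8 tokens of the prefix so the elided completion starts with a stretch of overlapping prefix context that lands exactly on token boundaries. A minimal sketch of the idea using a toy whitespace "tokenizer" (the `encode`/`decode` helpers here are hypothetical stand-ins for the real Qwen3 tokenizer, and the sample strings are illustrative):

```python
# Toy whitespace "tokenizer": hypothetical stand-ins for
# tokenizer.encode / tokenizer.decode from transformers.
def encode(text):
    return text.split(" ")

def decode(tokens):
    return " ".join(tokens)

prefix = "one two three four five six seven eight nine ten"
completion = " eleven twelve"

# re-surface the last 8 prefix tokens after the "..." marker so the
# completion is shown with overlapping prefix context
addon = decode(encode(prefix)[-8:])
prompt = (
    "<|im_start|>user\n"
    "Given this prefix and completion, generate a reasoning trace that could be "
    "used to construct the completion from just the prefix\n"
    f"<prefix>{prefix}</prefix>\n<completion>...{addon}{completion}</completion>\n"
    "<|im_start|>assistant\n"
)
```

With the real tokenizer, the round-trip through `encode`/`decode` matters because the addon must align with token boundaries, not character positions.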

filtering of training data:

    # didn't need anything complex
    from datasets import load_dataset, concatenate_datasets

    dataset_names = [f"crumbs-playground/clmr-rollouts-qwen3-8b-{i}" for i in ["00", "01", "02", "03", "04", "05"]]
    dataset = concatenate_datasets([load_dataset(name, split="train") for name in dataset_names])

    all_good_strings = []

    for row in dataset:
        for string in row["all_outputs_strings"]:
            # common failure modes were like "so the assistant should..." or
            # "</prefix> So I should continue like..."
            parts = string.split("</think>", 1)
            if len(parts) < 2:
                continue  # guard: skip rollouts with no closing think tag
            tail = parts[1].lower()
            if "assistant" not in tail and "prefix" not in tail:
                all_good_strings.append(string)

    print(len(all_good_strings))  # 20685
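
The keep/drop rule above can be pulled out into a standalone predicate and checked without downloading the rollout datasets. A sketch (the `is_good` name and the sample strings are illustrative, not from the training script):

```python
def is_good(string: str) -> bool:
    """Keep a rollout only if the text after </think> never mentions the
    'assistant' or 'prefix' framing (the common leakage failure modes)."""
    parts = string.split("</think>", 1)
    if len(parts) < 2:
        return False  # no closing think tag at all
    tail = parts[1].lower()
    return "assistant" not in tail and "prefix" not in tail

# illustrative samples, not real rollouts
samples = [
    "<think>plan the continuation</think> The rain kept falling.",
    "<think>plan</think> So the assistant should continue with...",
    "<think>plan</think> </prefix> So I should continue like...",
]
print([is_good(s) for s in samples])  # → [True, False, False]
```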
crumbs-playground/clmr1-qwen3-8b-data-generator is finetuned from Qwen/Qwen3-8B.

Trained on the crumbs-playground/clmr-rollouts-qwen3-8b-00 through -05 datasets listed in the filtering code above.
