🎙️ Léon-v1 : Podcast Research Assistant

📄 Read the full Technical Report on TOAQ

Léon is a fine-tuned model based on Mistral 7B v0.3, optimised to transform complex research documents (LaTeX/PDF) into immersive and accessible podcast scripts.

This v1 release introduces Temporal Positioning, which lets the model generate structured scripts without redundancy when long segments are concatenated.

🚀 Technical Specifications

  • Base model: Mistral 7B v0.3 (Unsloth optimised)
  • Training method: QLoRA (4-bit quantisation)
  • Infrastructure: NVIDIA A100 (40GB VRAM)
  • Dataset: 450 instruction pairs distilled from 41 scientific documents.
  • Output format: Plain text with SSML tags <break time="Xs" />.
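Because the model emits plain text with SSML `<break time="Xs" />` tags, a TTS pipeline that does not accept SSML needs to split the script into speech segments and explicit pauses. A minimal sketch, assuming the tags follow exactly the `<break time="Xs" />` form shown above (the helper name and regex are illustrative, not part of the model):

```python
import re

# Matches SSML break tags such as <break time="2s" /> and captures the duration.
BREAK_TAG = re.compile(r'<break\s+time="(\d+(?:\.\d+)?)s"\s*/>')

def split_script(text: str):
    """Split a Léon script into (speech, pause_seconds) segments.

    Returns a list of (text, pause) tuples, where `pause` is the silence
    in seconds to insert after that segment (0.0 after the last one).
    """
    segments = []
    last_end = 0
    for match in BREAK_TAG.finditer(text):
        segments.append((text[last_end:match.start()].strip(), float(match.group(1))))
        last_end = match.end()
    tail = text[last_end:].strip()
    if tail:
        segments.append((tail, 0.0))
    return segments
```

For example, `split_script('Welcome. <break time="2s" /> Today we explore TOAQ.')` yields two segments with a 2-second pause between them.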

🛠️ Training Hyperparameters

  • Learning rate: 1e-4
  • Global batch size: 16 (4 × 4)
  • Steps: 150
  • Epochs: 6
  • Optimizer: AdamW 8-bit
  • Precision: bfloat16 (native A100 support)

📊 Performance (Convergence)

Training converged stably: the loss fell from an initial 2.43 to a final 1.06, with an overall minimum of 0.93 at step 149.

  • Training Runtime: 455.59 seconds
  • Throughput: 5.26 samples/second
  • VRAM Peak: 7.6 GB
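These figures are mutually consistent, which is a quick sanity check worth doing on any training report. Assuming the "16 (4 × 4)" batch notation means 4 samples per device × 4 gradient-accumulation steps (an interpretation, not stated in the card):

```python
# Sanity-check the reported throughput from the hyperparameters.
# Assumption: "16 (4 x 4)" = 4 samples per device x 4 gradient-accumulation steps.
per_device_batch = 4
grad_accum = 4
steps = 150
runtime_s = 455.59

effective_batch = per_device_batch * grad_accum   # 16, the global batch size
samples_seen = steps * effective_batch            # 2400 samples processed
throughput = samples_seen / runtime_s             # ~5.27 samples/s

print(effective_batch, samples_seen, round(throughput, 2))
```

The computed ~5.27 samples/second matches the reported 5.26 to within rounding.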

🎭 Usage (Prompt Engineering)

Léon-v1 uses position tags to structure the narrative:

Introduction (Hook + Presentation)

[Position: START]
Document: [Insert LaTeX text here]

Main body (Technical content & Popularisation)

[Position: MIDDLE]
Context: [Summary of the previous segment]
Document: [Next data block]

Conclusion (Outro & Signature)

[Position: END]
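For long documents, these position-tagged prompts can be assembled programmatically: the first chunk gets START, each subsequent chunk gets MIDDLE together with a recap of the previously generated segment, and a final END prompt closes the episode. A minimal sketch, where the function name and the summary-passing convention are illustrative, not part of the model's API:

```python
def build_prompts(chunks, summaries=None):
    """Build position-tagged prompts for a list of document chunks.

    `summaries[i]` is a short recap of the segment generated for chunk i;
    pass nothing on the first pass, before any segment has been generated.
    """
    summaries = summaries or {}
    prompts = []
    for i, chunk in enumerate(chunks):
        if i == 0:
            prompts.append(f"[Position: START]\nDocument: {chunk}")
        else:
            context = summaries.get(i - 1, "")
            prompts.append(f"[Position: MIDDLE]\nContext: {context}\nDocument: {chunk}")
    prompts.append("[Position: END]")
    return prompts
```

In practice you would generate segment 0, summarise it, then build the next prompt with that summary as context, repeating until the END prompt produces the outro.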

⚠️ Limitations & Ethics

Léon is designed for popular science. Although powerful, it can sometimes oversimplify complex mathematical concepts. Always check the output against the source document.

📝 Citation

If you use this model in your research or podcast projects:

@misc{toaq_2026,
    author       = { TOAQ and Côme Bruneteau },
    title        = { Leon-7B-Research-Radio-v1 (Revision 70dbad6) },
    year         = 2026,
    url          = { https://huggingface.co/TOAQ/Leon-7B-Research-Radio-v1 },
    doi          = { 10.57967/hf/7866 },
    publisher    = { Hugging Face }
}