Source model

Magistry-24B-v1.0 by sophosympatheia


Provided quantized models

ExLlamaV3: release v0.0.22

Requirements: a Python installation with the huggingface-hub module to use the CLI.
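For example, a quantized revision can be fetched with the huggingface-hub CLI. The repo id matches this page; the branch name used below (4.0bpw) is an assumption for illustration, so check the repo's branch list for the actual quantization revisions:

```shell
# Install the CLI (ships with the huggingface-hub package).
pip install -U huggingface-hub

# Download one quantized revision of this repo.
# NOTE: the branch name "4.0bpw" is an assumption for illustration;
# list the repo's branches to find the actual quantization revisions.
huggingface-cli download DeathGodlike/sophosympatheia_Magistry-24B-v1.0_EXL3 \
  --revision 4.0bpw \
  --local-dir Magistry-24B-v1.0_EXL3
```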

Licensing

License detected: apache-2.0

The license for the provided quantized models is inherited from the source model (which in turn incorporates the license of its original base model). For definitive licensing information, refer first to the pages of the source and base models. File and page backups of the source model are provided below.


Backups

Date: 02.03.2026

Source files

Source page

Magistry-24B-v1.0

A Royal Merge  ·  24B  ·  Apache 2.0


After a recent hiatus, I felt inspired to contribute to the local LLM roleplaying community again. The recent Mistral Small 24B roleplaying finetunes and merges showed real promise, punching well above their weight class, so I decided to try merging together two personal favorites: Casual-Autopsy/Maginum-Cydoms-24B and DarkArtsForge/Magistaroth-24B-v1, which are themselves mega merges, using Darkhn/Magistral-2509-24B-Text-Only as a base. My goal was to see if I could retain the creativity of the source models but juice the intelligence.

Something interesting happened with this blend that felt like a win to me. It came out wonderfully creative, retains good prose versatility (serious vs. wild n' spicy), and took on a distinctive, "smarter" writing style that some may prefer to its parents' style — especially if you're working on serious creative writing projects.

Known Issues

Generally speaking, this model turned out well, but it will occasionally struggle with small details of logical/physical continuity, which is probably inescapable for a 24B model. Rerolling the output might fix it, or you may need to help it along with more explicit instructions or details so it doesn't get confused.

Sampler Tips

You can import the JSON below directly into SillyTavern, or use the master import JSON in this repo (Magistry_SillyTavern_Master_Import.json). I recommend these values as a starting point for your own experiments; the model won't fall apart if you deviate from them, but they should serve reliably for most creative tasks.

Key Settings at a Glance

Temp 0.7
Min-P 0.05
Top-N σ 0.75
DRY Mult. 0.8
DRY Base 1.8
Full SillyTavern JSON
{
    "temp": 0.7,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.05,
    "rep_pen": 1,
    "rep_pen_range": 4096,
    "rep_pen_decay": 0,
    "rep_pen_slope": 1,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "skew": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0.5,
    "max_temp": 1,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0,
    "smoothing_curve": 1,
    "dry_allowed_length": 4,
    "dry_multiplier": 0.8,
    "dry_base": 1.8,
    "dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\", \",\"]",
    "dry_penalty_last_n": 0,
    "add_bos_token": true,
    "ban_eos_token": false,
    "skip_special_tokens": false,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "json_schema": null,
    "json_schema_allow_empty": false,
    "banned_tokens": "",
    "sampler_priority": [
        "repetition_penalty",
        "frequency_penalty",
        "encoder_repetition_penalty",
        "dry",
        "presence_penalty",
        "top_k",
        "top_p",
        "top_n_sigma",
        "typical_p",
        "epsilon_cutoff",
        "eta_cutoff",
        "tfs",
        "top_a",
        "min_p",
        "quadratic_sampling",
        "mirostat",
        "dynamic_temperature",
        "temperature",
        "xtc",
        "no_repeat_ngram"
    ],
    "samplers": [
        "penalties",
        "dry",
        "top_n_sigma",
        "top_k",
        "typ_p",
        "tfs_z",
        "typical_p",
        "top_p",
        "min_p",
        "adaptive_p",
        "xtc",
        "temperature"
    ],
    "samplers_priorities": [
        "dry",
        "penalties",
        "no_repeat_ngram",
        "temperature",
        "top_nsigma",
        "top_p_top_k",
        "top_a",
        "min_p",
        "tfs",
        "eta_cutoff",
        "epsilon_cutoff",
        "typical_p",
        "quadratic",
        "xtc"
    ],
    "ignore_eos_token": false,
    "spaces_between_special_tokens": true,
    "speculative_ngram": false,
    "sampler_order": [6, 0, 1, 3, 4, 2, 5],
    "logit_bias": [],
    "xtc_threshold": 0.1,
    "xtc_probability": 0,
    "nsigma": 0.75,
    "min_keep": 0,
    "extensions": {},
    "adaptive_target": -0.01,
    "adaptive_decay": 0.9,
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "rep_pen_size": 0,
    "genamt": 1100,
    "max_length": 131072
}
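Two of the headline settings, Min-P 0.05 and Temp 0.7 (applied after filtering, per temperature_last), can be illustrated with a minimal pure-Python sketch. This shows only the sampling math, not SillyTavern's or any backend's actual implementation, and the toy token distribution is invented for the example:

```python
def min_p_filter(probs, min_p=0.05):
    """Keep tokens whose probability is at least min_p times the top token's."""
    threshold = min_p * max(probs.values())
    return {tok: p for tok, p in probs.items() if p >= threshold}

def apply_temperature(probs, temp=0.7):
    """Rescale a distribution by temperature (< 1 sharpens it) and renormalize."""
    scaled = {tok: p ** (1.0 / temp) for tok, p in probs.items()}
    total = sum(scaled.values())
    return {tok: p / total for tok, p in scaled.items()}

# Toy next-token distribution, invented for illustration.
probs = {"the": 0.50, "a": 0.30, "banana": 0.15, "qwop": 0.03, "xylophone": 0.02}

kept = min_p_filter(probs, 0.05)       # cutoff = 0.05 * 0.50 = 0.025, so "xylophone" is dropped
final = apply_temperature(kept, 0.7)   # temp < 1 shifts mass toward the most likely tokens
```

With min_p at 0.05, a token needs at least 5% of the top token's probability to survive, which is why it prunes junk without capping how many plausible tokens remain.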

Prompting Tips

You can download the Magistry_SillyTavern_Master_Import.json file from this repo and import it directly into SillyTavern to get system prompt, chat template, and sampler settings all in one go.

Donations

If you feel like saying thanks with a donation, I'm on Ko-Fi.

Quantizations

Pending.

License

Apache 2.0, inherited down from Magistral.

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the DELLA merge method, using Darkhn/Magistral-2509-24B-Text-Only as a base.

Models Merged

The following models were included in the merge:

Casual-Autopsy/Maginum-Cydoms-24B
DarkArtsForge/Magistaroth-24B-v1

Configuration YAML
models:
  - model: Darkhn/Magistral-2509-24B-Text-Only
  - model: Casual-Autopsy/Maginum-Cydoms-24B
    parameters:
      weight: 0.8
      density: 0.9
      epsilon: 0.099
  - model: DarkArtsForge/Magistaroth-24B-v1
    parameters:
      weight: 0.8
      density: 0.9
      epsilon: 0.099

merge_method: della
base_model: Darkhn/Magistral-2509-24B-Text-Only

parameters:
  lambda: 1.0
  normalize: false

tokenizer:
  source: union
chat_template: auto
dtype: bfloat16
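The density and epsilon values above control DELLA's magnitude-ranked pruning of delta parameters. The sketch below illustrates the idea as I read it from mergekit's DELLA documentation: drop probabilities are spread across (1 - density) ± epsilon, with the smallest deltas most likely to be dropped. Treat this as a conceptual sketch, not mergekit's code:

```python
def della_drop_probs(deltas, density=0.9, epsilon=0.099):
    """Sketch of DELLA-style pruning: rank delta magnitudes and spread
    per-parameter drop probabilities linearly across (1 - density) +/- epsilon,
    so the smallest deltas are the most likely to be dropped.
    Based on my reading of mergekit's docs; illustrative only."""
    n = len(deltas)
    base_drop = 1.0 - density  # average drop probability
    # rank 0 = smallest magnitude (assigned the highest drop probability)
    order = sorted(range(n), key=lambda i: abs(deltas[i]))
    probs = [0.0] * n
    for rank, i in enumerate(order):
        frac = rank / (n - 1) if n > 1 else 0.5
        probs[i] = base_drop + epsilon * (1 - 2 * frac)
    return probs

# Three toy deltas: the 0.01 delta draws the highest drop probability.
drop_probs = della_drop_probs([0.01, 1.0, 0.5], density=0.9, epsilon=0.099)
```

With density 0.9 and epsilon 0.099 as in the config, per-parameter drop probabilities range from roughly 0.001 to 0.199 while averaging 0.1.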
