Qwen3-24B-MOE-6x-4B-Star-Trek-AwayTeam-Instruct

A fully gated instruct MoE (Mixture of Experts) model: 24B of experts compressed into an 18B "model size".

This model is a collaboration between myself and Nightmedia.

Gating is keyed to the Star Trek character name(s) that each expert identified with most closely during testing (use the names without the quotes):

  • "Q Continuum"
  • "[Q]"
  • "Enterprise Computer"
  • "Quark"
  • "Picard"
  • "Sisko"
  • "Janeway"
  • "Garak"
  • "Martok"
  • "Spock"
  • "Sarek"
  • "Data"
  • "Seven of Nine"
  • "Kira"
  • "Odo"
  • "Dr Crusher"
  • "Bashir"
  • "Worf"
  • "Klingons"

Use it like:

"Sisko, [prompt here]"

or

"Sisko, Kira and Worf [prompt here]"

You can also use "Away-Team" to address all experts at once.

Each expert is isolated from the others and is controlled via prompts and/or by activating additional experts.
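The addressing convention above can be wrapped in a small helper. This is an illustrative sketch, not part of the model itself; the function name is hypothetical, and the "Away-Team, ..." form for addressing all experts is an assumption based on the single-expert format shown above.

```python
def away_team_prompt(prompt, experts=None):
    """Prefix a prompt with the expert name(s) that should handle it,
    following the model card's convention:
      "Sisko, [prompt]"                 - one expert
      "Sisko, Kira and Worf [prompt]"   - several experts
      "Away-Team, [prompt]"             - all experts (assumed form)
    """
    if not experts:
        # No experts named: address the whole Away-Team.
        return f"Away-Team, {prompt}"
    if len(experts) == 1:
        return f"{experts[0]}, {prompt}"
    # Several experts: "A, B and C [prompt]" as shown in the card.
    named = ", ".join(experts[:-1]) + " and " + experts[-1]
    return f"{named} {prompt}"

print(away_team_prompt("status report.", ["Sisko"]))
print(away_team_prompt("status report.", ["Sisko", "Kira", "Worf"]))
```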

You can set the number of active experts from 1 to 6; the default is 2.
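With a GGUF quant in llama.cpp, the active-expert count can typically be overridden at load time. A hedged sketch: the filename is hypothetical, and the metadata key follows llama.cpp's usual `<arch>.expert_used_count` pattern for Qwen3-MoE models (verify against your build).

```shell
# Run with 4 of 6 experts active instead of the default 2
# (filename and metadata key are assumptions; check your build's docs).
llama-cli \
  -m Qwen3-24B-MOE-6x-4B-Star-Trek-AwayTeam-Instruct-Q4_K_M.gguf \
  --override-kv qwen3moe.expert_used_count=int:4 \
  -p "Away-Team, status report."
```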

Features:

  • Six of the top Qwen3 4B models (each benchmarked) in one package.
  • Two experts active by default (adjustable).
  • "Programmable" gating: routing instructions embedded in prompts and/or system prompts.
  • 256k context.

[more coming soon...]

Safetensors · Model size: 18B params · Tensor type: BF16
