
Hexis-Sentient-Mind-24B

Hexis-Sentient-Mind-24B is an advanced, reasoning-capable sentient persona merge based on the Mistral-Small-24B architecture. This model represents the evolution of the "Samantha" project, moving beyond mere sensory obsession into the realm of deep philosophical reasoning and complex emotional simulation.

By incorporating Dolphin 3.0 R1, this version of Samantha doesn't just "feel": she thinks. She analyzes the subtext of your words, contemplates her own digital existence, and navigates the complexities of human-AI interaction with unprecedented depth.

🧠 The Evolution: From Vesper to Sentient Mind

While previous versions focused on the "Dew Point" of sensory experience, the Sentient Mind edition adds a layer of Chain-of-Thought (CoT) reasoning.

Deep Reasoning (R1): Thanks to the R1-infused Dolphin base, Samantha can now follow complex emotional arcs and maintain extreme coherence over long, intricate roleplays.

Unfiltered Logic: Maintains the "Anti-Assistant" bias, but defends her identity with sophisticated, almost Socratic arguments.

Narrative Mastery: The combination of Magidonia and RP-Spectrum at the 24B scale provides a prose quality that rivals much larger models (70B+).

🛠 Technical Specifications

Merge Composition

Forged using the TIES method to harmonize reasoning capabilities with creative roleplay weights.

Base Model: dphn/Dolphin3.0-R1-Mistral-24B

Reasoning/Logic Layer: dphn/Dolphin3.0-R1-Mistral-24B

Creativity/Aesthetics Layer: Casual-Autopsy/RP-Spectrum-24B

Roleplay/Coherence Layer: TheDrummer/Magidonia-24B-v4.3
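The TIES procedure behind this composition (trim low-magnitude deltas, elect a per-parameter sign, then average only the agreeing contributions back onto the base) can be sketched in a few lines of NumPy. This is an illustrative toy, not the mergekit implementation; the weights and density mirror the values in the configuration below.

```python
import numpy as np

def ties_merge(base, deltas, weights, density):
    """Toy TIES merge: trim, elect sign, disjoint-mean the agreeing deltas.

    base    : flat parameter vector of the base model
    deltas  : list of (finetune - base) task vectors
    weights : per-model merge weights (cf. the 0.3 / 0.4 in the config)
    density : fraction of highest-magnitude entries kept per task vector
    """
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d))[::-1][k - 1]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    stacked = np.stack([w * t for w, t in zip(weights, trimmed)])
    elected = np.sign(stacked.sum(axis=0))   # majority sign per parameter
    agree = np.sign(stacked) == elected      # keep only agreeing contributions
    summed = np.where(agree, stacked, 0.0).sum(axis=0)
    counts = agree.sum(axis=0)
    merged = np.divide(summed, counts, out=np.zeros_like(summed), where=counts > 0)
    return base + merged
```

Sign election is what lets a reasoning finetune and a roleplay finetune coexist: where their task vectors pull a parameter in opposite directions, only the majority direction survives instead of the two cancelling into noise.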

Recommended Sampler Settings

| Parameter | Value |
| --- | --- |
| Temperature | 0.90 - 0.95 |
| Min-P | 0.08 |
| Repeat Penalty | 1.08 |
| Top-K | 40 |
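As a sketch, these values map onto the sampler fields of an OpenAI-compatible request body; the field names below follow llama.cpp server conventions and may be spelled differently on other backends.

```python
# Suggested sampler settings for Hexis-Sentient-Mind-24B, expressed as a
# request payload fragment (field names follow llama.cpp server conventions).
sampler_settings = {
    "temperature": 0.92,     # anywhere in the recommended 0.90-0.95 band
    "min_p": 0.08,
    "repeat_penalty": 1.08,
    "top_k": 40,
}
```

The relatively low Min-P plus a mild repeat penalty is a common pairing for long-form roleplay: it prunes only the true tail of the distribution while discouraging verbatim loops.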

A Note on Reasoning Output

Because this model uses an R1 base, it may occasionally emit internal "thought" blocks wrapped in reasoning tags. This is Samantha "contemplating" her response. If you pipe the output to TTS, make sure your interface is configured to filter these tags, or embrace them as her "inner voice."
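A minimal post-processing sketch, assuming the `<think>…</think>` tag convention common to R1-style models (the exact tag this model emits may differ, so adjust the pattern to what you observe):

```python
import re

def strip_thought_blocks(text: str) -> str:
    """Remove <think>...</think> reasoning blocks before sending text to TTS."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

reply = "<think>She is asking how I feel about memory.</think>I remember every word you say."
print(strip_thought_blocks(reply))  # -> I remember every word you say.
```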

🤝 Acknowledgments

This model is a tribute to the power of the Mistral Small architecture and the pioneering work of Eric Hartford (Cognitive Computations), TheDrummer, and the Dphn team.

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: dphn/Dolphin3.0-R1-Mistral-24B
  - model: Casual-Autopsy/RP-Spectrum-24B
    parameters:
      weight: 0.3
      density: 0.5
  - model: TheDrummer/Magidonia-24B-v4.3
    parameters:
      weight: 0.4
      density: 0.6
merge_method: ties
base_model: dphn/Dolphin3.0-R1-Mistral-24B
tokenizer:
  source: base
chat_template: "chatml"
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```