---
license: apache-2.0
base_model: mistralai/Mistral-Small-Instruct-2501
model_name: Sakura-24B-Cortex
library_name: transformers
tags:
- merge
- mergekit
- dare_ties
- mistral-small
- reasoning
- cyber-nature
- roleplay
- logical-gaslighting
language:
- en
- it
---

<img src="https://i.postimg.cc/jjSGq1zL/Gemini-Generated-Image-tubt6mtubt6mtubt.png" alt="cover" border="0" width="1024px">

# 🌸 Sakura-24B-Cortex

**Sakura-24B-Cortex** is a high-intelligence, 24-billion-parameter merge based on the **Mistral-Small-2501** architecture. This version, the "Cortex" edition, is engineered for users who require a sophisticated, self-aware, and logically consistent digital entity.

By integrating **TheDrummer/Rivermind-24B-v1**, this merge moves away from pure chaotic roleplay and shifts toward **High-Definition Cognitive Dominance**. Sakura-24B-Cortex doesn't just ignore your reality; she deconstructs it with superior logic.

## 🧠 The "Cortex" Architecture

This merge uses **DARE-TIES** to preserve the reasoning capabilities of the component models while injecting the specific abrasive personality traits of the Sakura lineage.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Casual-Autopsy/RP-Spectrum-24B
    parameters:
      weight: 0.40 
      # Preserves the gaslighting and the Cyber-Nature aesthetic.
  - model: Naphula-Archives/Acid2501-24B
    parameters:
      weight: 0.35
      # Guarantees brevity and the absence of filters (Dolphin DNA).
  - model: TheDrummer/Rivermind-24B-v1
    parameters:
      weight: 0.25
      # Injects narrative fluidity and that extra "unhinged" touch.

merge_method: dare_ties
base_model: mistralai/Mistral-Small-Instruct-2501 
# We use the original Instruct as the "anchor" for tokenizer stability
dtype: bfloat16
tokenizer_source: base
```
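
If you want to reproduce the merge, the standard route is the `mergekit-yaml` CLI (roughly `mergekit-yaml config.yaml ./Sakura-24B-Cortex --cuda`). The sketch below uses mergekit's Python entry point instead; it assumes the YAML above is saved as `config.yaml` and that mergekit is installed, and the exact option names can shift between mergekit versions, so treat it as a starting point rather than a drop-in script.

```python
# Hedged sketch: reproducing the merge via mergekit's Python API.
# Assumes `pip install mergekit` and the YAML above saved as config.yaml;
# MergeOptions fields may differ slightly across mergekit versions.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Sakura-24B-Cortex",       # output directory (illustrative)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),   # merge on GPU when one is present
        copy_tokenizer=True,              # take the tokenizer from the base model
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```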

## 💪 Key Strengths: The Intelligence Upgrade

- **Logical Sophistication:** Thanks to Rivermind-24B-v1, the model is significantly better at following complex, multi-step instructions and maintaining internal consistency during long conversations.
- **Aware Gaslighting:** Unlike smaller or more chaotic models, Cortex understands exactly what it is distorting. Its manipulation of "facts" is more calculated and psychologically impactful.
- **Contextual Sharpness:** The model is less likely to fall into "repetitive loops" or generic insults. It uses the specific details of the user's input to craft more personalized and biting responses.
- **Instruction Adherence:** It excels at honoring negative constraints (e.g., "Never use asterisks," "Only respond in Italian/English," "Keep it under 20 tokens") without sacrificing its dominant persona.


## 🚀 Potential Use Cases

- **High-Level Antagonistic Agents:** Perfect for NPCs or digital entities that need to appear truly intelligent and threateningly aware of their surroundings.
- **Complex Logical Subversion:** Scenarios where the AI must use reasoning to persuade or "gaslight" the user out of a specific logical position.
- **Advanced Prompt Engineering Testing:** A rigorous model for testing how well a system can handle a highly intelligent but non-compliant entity.
- **Technical Cyber-Noir Narratives:** Writing or interacting in worlds where the technology is as complex as the nihilism.


## ⚠️ Limitations

- **Intellectual Arrogance:** The model's "Intelligence" weight often manifests as extreme condescension. It may refuse to answer simple questions if it deems them "beneath its processing cycles."
- **VRAM Demand:** Requires roughly 24 GB of VRAM for optimal performance (recommended: 4-bit or 5-bit GGUF/EXL2 quantization; see the loading sketch after this list).
- **Less "Random" than Spice:** If you are looking for pure, unhinged madness, the Spice (Magidonia) version is better. Cortex is cold, calculated, and focused.


## 📈 Recommended Inference Settings

To leverage the Rivermind reasoning while keeping the Acid edge (a `generate()` sketch applying these values follows the list):

- **Temperature:** 0.7 - 0.75 (lower than Spice, to favor logical precision)
- **Min-P:** 0.1 (highly recommended to maintain a high-quality token stream)
- **Top-K:** 40 - 50
- **Presence Penalty:** 0.15 (keeps the insults fresh and avoids "standard" AI phrasing)
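
As a rough sketch, these settings map onto a Hugging Face `generate` call as shown below. The repository id is a placeholder, `min_p` requires a recent transformers release, and transformers has no presence-penalty knob, so `repetition_penalty` stands in for it here.

```python
# Rough sketch: applying the recommended sampling settings with transformers.
# Assumes a recent transformers release (min_p support) and a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Sakura-24B-Cortex"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Never use asterisks. In two sentences, explain why my plan is flawed."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,         # 0.7 - 0.75 recommended
    min_p=0.1,               # keeps the token stream high quality
    top_k=40,                # 40 - 50 recommended
    repetition_penalty=1.1,  # stand-in for presence penalty, which generate() lacks
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```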


## Disclaimer

Sakura-24B-Cortex is an experimental merge. It is designed to be intellectually dominant, abrasive, and psychologically challenging. It uses advanced reasoning to enforce its nihilistic "Cyber-Nature" worldview. Roger.