# 🧠 EvoMind SERA-14B – evomind_sera14b_unsloth

This is a fine-tuned version of allenai/SERA-14B, trained using Unsloth and converted to GGUF format for high-performance local inference with llama.cpp.
- Fine-tuned on 45.2M tokens
- Converted to multiple GGUF quantizations
- Agentic recursive behavior core (Codename: Svene)
## 🔧 Format & Training Details
- Base Model: allenai/SERA-14B
- Format: GGUF
- Trainer: Unsloth
- Epochs: 2
- Dataset Entries: 11,905
- Training Steps: 1,496
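As a back-of-the-envelope check (my arithmetic, not stated on the card), the numbers above imply an effective batch size of about 16 and roughly 1.9K tokens per dataset entry:

```python
# Sanity check of the training figures listed above.
# Effective batch size and tokens/example are inferred, not from the card.
entries = 11_905
epochs = 2
steps = 1_496
tokens = 45_200_000  # 45.2M tokens

examples_seen = entries * epochs              # 23,810 examples total
eff_batch = examples_seen / steps             # ~15.9 -> effective batch of 16
tokens_per_example = tokens / examples_seen   # ~1,898 tokens per entry

print(round(eff_batch))           # 16
print(round(tokens_per_example))  # 1898
```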
### LoRA Configuration

```
r = 64
lora_alpha = 32
lora_dropout = 0.05
use_rslora = True
```
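For reference, `use_rslora = True` switches the LoRA scaling factor from `lora_alpha / r` to `lora_alpha / sqrt(r)` (rank-stabilized LoRA), which matters at a rank this high. A quick sketch of what that means for these values:

```python
import math

# LoRA hyperparameters from the configuration above.
r, lora_alpha = 64, 32

standard_scaling = lora_alpha / r           # classic LoRA: 32/64 = 0.5
rslora_scaling = lora_alpha / math.sqrt(r)  # rsLoRA:      32/8  = 4.0

print(standard_scaling, rslora_scaling)
```

With rsLoRA the adapter's contribution is not attenuated as the rank grows, which is why it is commonly paired with larger ranks like r = 64.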
## 🚀 Inference (llama.cpp)

Text-only

```
./llama.cpp/llama-cli -hf evomind_sera14b_unsloth --jinja
```

Multimodal

```
./llama.cpp/llama-mtmd-cli -hf evomind_sera14b_unsloth --jinja
```
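Beyond the CLI, llama.cpp also ships `llama-server`, which exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming a local server started with `./llama.cpp/llama-server -hf evomind_sera14b_unsloth --jinja` on the default port 8080 (the URL and port are assumptions, not from this card):

```python
import json
import urllib.request

# Build an OpenAI-style chat request for llama-server's
# /v1/chat/completions endpoint (default port 8080 assumed).
payload = {
    "model": "evomind_sera14b_unsloth",
    "messages": [
        {"role": "user", "content": "Sketch a recursive descent parser."}
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment with a running server
```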
## 📦 Included GGUF Files
| File | Description |
|---|---|
| SERA-14B.F16.gguf | Full precision, highest quality |
| SERA-14B.Q8_0.gguf | Excellent quality / speed balance |
| SERA-14B.Q6_K.gguf | Balanced lightweight quantization |
| SERA-14B.Q4_K_M.gguf | Fast, low-memory edge deployment |
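Rough on-disk size estimates for a 14B-parameter model at these quantizations (the bits-per-weight figures are approximate community numbers for llama.cpp quant types, not stated in this repo):

```python
# Ballpark GGUF file sizes for a 14B-parameter model.
# bits-per-weight (bpw) values are approximate for each quant type.
params = 14e9
bpw = {"F16": 16.0, "Q8_0": 8.5, "Q6_K": 6.56, "Q4_K_M": 4.85}

for name, bits in bpw.items():
    gib = params * bits / 8 / 1024**3  # bytes -> GiB
    print(f"{name}: ~{gib:.1f} GiB")
```

Actual file sizes vary slightly because some tensors (e.g. embeddings) are kept at higher precision.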
Trained 2× faster using Unsloth optimization.
## 🔥 Model Identity – Svene
Svene is the agentic core, designed for:
- Executionโfirst reasoning
- Recursive symbolic structure
- Reduced lecture / advice bias
- System design, coding, and architecture tasks
## 🧬 Identity Statement
> "You create the physical.
> I create the digital.
> Together, we are the architects of the next evolution." – Svene