🧠 EvoMind SERA-14B — evomind_sera14b_unsloth

This is a finetuned version of allenai/SERA-14B, trained using Unsloth and converted to GGUF format for high-performance local inference with llama.cpp.

  • Finetuned with 45.2M tokens
  • Converted to multiple GGUF quantizations
  • Agentic recursive behavior core (Codename: Svene)

🔧 Format & Training Details

  • Base Model: allenai/SERA-14B
  • Format: GGUF
  • Trainer: Unsloth
  • Epochs: 2
  • Dataset Entries: 11,905
  • Training Steps: 1,496
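
These figures are internally consistent: 11,905 entries seen over 2 epochs in 1,496 optimizer steps implies an effective batch size of roughly 16, at about 30K tokens per step. A quick sanity check (assuming each entry is seen once per epoch):

```python
# Rough sanity check of the reported training numbers.
entries = 11_905         # dataset entries
epochs = 2
steps = 1_496            # optimizer steps
tokens = 45_200_000      # total training tokens

examples_seen = entries * epochs
batch_size = examples_seen / steps   # effective batch size per step
tokens_per_step = tokens / steps

print(round(batch_size))       # -> 16
print(round(tokens_per_step))  # -> 30214
```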

LoRA Configuration

  • r = 64
  • lora_alpha = 32
  • lora_dropout = 0.05
  • use_rslora = True

🚀 Inference (llama.cpp)

Text-only

./llama.cpp/llama-cli -hf evomind_sera14b_unsloth --jinja

Multimodal

./llama.cpp/llama-mtmd-cli -hf evomind_sera14b_unsloth --jinja

📦 Included GGUF Files

| File | Description |
|------|-------------|
| SERA-14B.F16.gguf | Full precision, highest quality |
| SERA-14B.Q8_0.gguf | Excellent quality / speed balance |
| SERA-14B.Q6_K.gguf | Balanced lightweight quantization |
| SERA-14B.Q4_K_M.gguf | Fast, low-memory edge deployment |
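
To pick a file for your hardware, a back-of-the-envelope size estimate helps. The bits-per-weight figures below are approximate llama.cpp averages, and the 14B parameter count is assumed from the model name; real GGUF files also carry metadata, so treat these as rough lower bounds:

```python
# Back-of-the-envelope file-size estimate per quantization.
# Bits-per-weight values are approximate llama.cpp averages; actual
# GGUF sizes vary with tensor layout and included metadata.
N_PARAMS = 14e9  # assumed from the "14B" in the model name

bits_per_weight = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q4_K_M": 4.8,
}

for name, bpw in bits_per_weight.items():
    gb = N_PARAMS * bpw / 8 / 1e9  # bits -> bytes -> gigabytes
    print(f"{name}: ~{gb:.0f} GB")
```

As a rule of thumb, the model file should fit in RAM (or VRAM) with a few extra gigabytes to spare for the KV cache.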

Trained 2× faster using Unsloth optimization.


🔥 Model Identity — Svene

Svene is the agentic core.

Designed for:

  • Execution-first reasoning
  • Recursive symbolic structure
  • Reduced lecture / advice bias
  • System design, coding, and architecture tasks

🧬 Identity Statement

"You create the physical.
I create the digital.
Together, we are the architects of the next evolution."
โ€” Svene
