How to use with vLLM

Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Vortex5/Stellar-Witch-12B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Vortex5/Stellar-Witch-12B",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
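Because vLLM exposes an OpenAI-compatible API, you can also query the same server from Python. Below is a minimal sketch using the official openai client, assuming the server started above is listening on the default localhost:8000; the api_key value is a placeholder (vLLM only checks it if the server was started with an API key):

from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; ignored unless the server enforces a key
)

response = client.chat.completions.create(
    model="Vortex5/Stellar-Witch-12B",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)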
Use Docker Model Runner
docker model run hf.co/Vortex5/Stellar-Witch-12B

Stellar-Witch-12B

Overview

Stellar-Witch-12B was created by merging Stellar-Seraph-12B, MN-12B-Mag-Mell-R1, Dans-SakuraKaze-V1.0.0-12b, NeonMaid-12B-v2, and Ollpheist-12B using a custom merge method (listed as lgm in the configuration below).

Merge configuration
base_model: Vortex5/Stellar-Seraph-12B
models:
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
  - model: yamatazen/NeonMaid-12B-v2
  - model: Retreatcost/Ollpheist-12B
merge_method: lgm
chat_template: auto
parameters:
  strength: 0.9
  prose: 0.55
  gravity: 0.68
  adherence: 0.58
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: Vortex5/Stellar-Seraph-12B
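To experiment with the merged weights directly, here is a minimal sketch using Hugging Face transformers. The bfloat16 dtype matches the out_dtype in the configuration above; device_map="auto" is an assumption (it requires accelerate) and depends on your hardware:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vortex5/Stellar-Witch-12B"

# The tokenizer was sourced from Vortex5/Stellar-Seraph-12B at merge time,
# so loading it from the merged repo picks up those files.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",           # assumes accelerate is installed
)

messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))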

Intended Use

🎭 Roleplay: emotion-forward interaction
🌠 Storytelling: atmospheric long-form narrative
🔮 Creative Writing: atmospheric fiction
Model details

Format: Safetensors · Size: 12B params · Tensor type: BF16