root4k/Dolphin-Mistral-24B-Venice-Edition-mlx-mxfp4
Tags: Text Generation · MLX · Safetensors · mistral · conversational · 4-bit precision
License: apache-2.0
Model card: README.md exists but its content is empty.
Downloads last month: 114
Safetensors model size: 24B params
Tensor types: U8, U32, BF16
MLX quantization: 4-bit, 12.5 GB
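A quick sanity check on the listed file size: MXFP4 (the "mxfp4" in the repository name) stores weights as 4-bit floats in blocks of 32 with one shared 8-bit scale per block, giving an effective 4.25 bits per weight. A minimal sketch of the arithmetic, assuming all 24B parameters are quantized this way:

```python
# Back-of-the-envelope size estimate for an MXFP4-quantized 24B model.
# MXFP4: 4-bit weights in blocks of 32, plus one 8-bit scale per block,
# so the effective storage cost is 4 + 8/32 = 4.25 bits per weight.
params = 24e9                  # ~24B parameters (from the model card)
bits_per_weight = 4 + 8 / 32   # 4.25 bits including per-block scales
size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.2f} GB")     # ~12.75 GB, consistent with the listed 12.5 GB
```

The small gap between the estimate and the listed 12.5 GB is plausible since some tensors (e.g. those stored in BF16) are not MXFP4-quantized and block sizes vary per tensor shape.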
Inference Providers (Text Generation): this model isn't deployed by any Inference Provider.
Model tree for root4k/Dolphin-Mistral-24B-Venice-Edition-mlx-mxfp4:
Base model: mistralai/Mistral-Small-24B-Base-2501
Finetuned: mistralai/Mistral-Small-24B-Instruct-2501
Finetuned: dphn/Dolphin-Mistral-24B-Venice-Edition
Quantized (52 quantizations): this model
Included in collection: Dolphin-Venice (2 items, updated Jan 24).