ϻira Λphrodite-v1e-27B
Mira-Aphrodite-v1e-27B-Q5_K_X.gguf
All credits go to:
Lambent for the wonderful model series.
📖 Overview
This is a merge of pre-trained language models created using mergekit. It relies on custom merge methods; this recipe uses variance_mask.
The Recipe
ϻira Λphrodite-v1e | Method: [Variance Mask]
View Configuration
name: ϻira Λphrodite-v1e
merge_method: variance_mask
base_model: ./Ingredients/Lambent_Mira-v1.16-Ties-27B
models:
  - model: ./Ingredients/Lambent_Mira-v1.16-Ties-27B
  - model: ./Ingredients/Lambent_Mira-v1.14-27B
parameters:
  select_topk: 0.40
  epsilon: 1e-6
  crnorm: true
tokenizer:
  source: union
dtype: float32
out_dtype: bfloat16
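For reference, a config like the one above is normally applied with mergekit's command-line tool. A minimal sketch, assuming the custom variance_mask method is available in your local mergekit install; the config filename and output directory are placeholders, not names used by the author:

```bash
# Sketch only: variance_mask is a custom method and must be present in your
# mergekit installation; filenames and paths below are placeholders.
pip install mergekit

mergekit-yaml mira-aphrodite-v1e.yaml ./Mira-Aphrodite-v1e-27B \
  --cuda \
  --lazy-unpickle
```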
Quant Recipe (Q5_K_X)
The quantization scheme is inspired by [ddh0](https://huggingface.co/ddh0/Q4_K_X.gguf)
---
| Metric | Value |
|---|---|
| Size | 18.4 GiB |
| PPL Estimate | 6.5653 +/- 0.04305 |
| PPL Context | 576 chunks @ n_ctx=512 |
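The perplexity figure was measured over 576 chunks at a context of 512, consistent with llama.cpp's llama-perplexity tool. A hedged sketch of how a comparable number could be reproduced; the evaluation corpus is an assumption, since the card does not state which text file was used:

```bash
# Sketch only: wiki.test.raw (wikitext-2 test split) is an assumed corpus;
# substitute whichever text file you want to measure against.
llama-perplexity \
  -m Mira-Aphrodite-v1e-27B-Q5_K_X.gguf \
  -f wiki.test.raw \
  -c 512 \
  -ngl 99
```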
Quant Recipe
#!/usr/bin/env bash
set -euo pipefail
# Quant scheme inspired by: https://huggingface.co/ddh0/Q4_K_X.gguf

llama_quant="$LLAMAQUANT"
imatrix="$KITCHEN/Bartowski-Gemma-3-27B-imatrix.gguf" # Thanks =)
model="$KITCHEN/Models_cooking/Mira-Aphrodite-00001-of-00003.gguf"
outpath="$KITCHEN/Models/Mira-Aphrodite-27B-Q5_K_X.gguf"
quant_type="Q8_0" # default type for any tensor not matched by an override below

# Block groups: every block (0-61), the outer blocks plus every third middle
# block, and the remaining middle blocks.
blk_all="blk\.(0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|47|48|49|50|51|52|53|54|55|56|57|58|59|60|61)"
blk_step="blk\.(0|1|2|3|4|5|6|9|12|15|18|21|24|27|30|33|36|39|42|45|48|51|54|55|56|57|58|59|60|61)"
blk_alt="blk\.(7|8|10|11|13|14|16|17|19|20|22|23|25|26|28|29|31|32|34|35|37|38|40|41|43|44|46|47|49|50|52|53)"

# Per-tensor overrides passed to the quantize tool via --tensor-type.
custom=(
  --tensor-type "token_embd.weight=Q5_K"
  --tensor-type "${blk_all}\.attn_k.weight=Q8_0"
  --tensor-type "${blk_all}\.attn_output.weight=Q6_K"
  --tensor-type "${blk_all}\.attn_q.weight=Q5_K"
  --tensor-type "${blk_step}\.attn_v.weight=Q8_0" # eh, let her fly
  --tensor-type "${blk_step}\.ffn_down.weight=Q6_K"
  --tensor-type "${blk_all}\.ffn_gate.weight=Q5_K"
  --tensor-type "${blk_all}\.ffn_up.weight=Q5_K"
  --tensor-type "${blk_alt}\.attn_v.weight=Q8_0"
  --tensor-type "${blk_alt}\.ffn_down.weight=Q5_K"
)

"$llama_quant" --imatrix "$imatrix" "${custom[@]}" "$model" "$outpath" "$quant_type"
Nice People
- Lambent - For the wonderful Mira model series.
- Bartowski - For the Imatrix, work with Arcee, and being a pillar of the community.
- ddh0 - For the inspiration on the quant scheme.
- ubergarm - For all your public guides, from quanting to perplexity, overall positivity, and making things that appear intimidating much more approachable. I would not have been doing any of this if it weren't for you. Thank you.
- win10 - For the Karcher merge method implementation.
- grimjim - For the DeLerp merge method implementation.