Snider Virgil committed on
Commit
78b23a6
·
1 Parent(s): e3f08f5

docs: correct base_model lineage for HF model tree


HF uses base_model + base_model_relation frontmatter to rank models in
search results and render the model tree widget. The Lemma family's
true lineage is:

google/gemma-4-*-it
└── LetheanNetwork/<m> (finetune: our namespace fork)
    └── lthn/<m> (finetune: LEK merged into weights)
        └── lthn/<m>-mlx (quantized: mlx 4/8bit/bf16)

Previously this repo had base_model_relation set to quantized, which
was wrong: LEK merging is a finetune, not a quantization. Fixing this
so the model tree widget ranks the family correctly.

Co-Authored-By: Virgil <virgil@lethean.io>
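As a rough sanity check, the lineage fields can be read straight out of a card's frontmatter. Below is a minimal stdlib-only sketch using an illustrative README string (not this repo's actual card); the set of relation values the Hub recognizes (finetune, adapter, quantized, merge) is assumed from the Hub docs and should be verified against current documentation.

```python
# Minimal sketch: parse the simple key/value + list subset of YAML used in
# HF model-card frontmatter and inspect the lineage fields. The README text
# below is illustrative, not this repo's actual card.
EXAMPLE_README = """\
---
license: eupl-1.2
pipeline_tag: image-text-to-text
library_name: gguf
base_model:
- LetheanNetwork/lemrd
base_model_relation: finetune
---
# Model card body...
"""

def frontmatter(text: str) -> dict:
    """Extract keys, scalar values, and top-level lists from the frontmatter."""
    block = text.split("---\n")[1]  # text between the first pair of --- fences
    data, key = {}, None
    for line in block.splitlines():
        if line.startswith("- ") and isinstance(data.get(key), list):
            data[key].append(line[2:].strip())  # item of the current list key
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            data[key] = value if value else []  # bare "key:" starts a list
    return data

meta = frontmatter(EXAMPLE_README)
# Relation values assumed from the Hub docs: finetune / adapter / quantized / merge.
assert meta["base_model_relation"] in {"finetune", "adapter", "quantized", "merge"}
print(meta["base_model"], meta["base_model_relation"])
# → ['LetheanNetwork/lemrd'] finetune
```

A real model-card tool would use a full YAML parser (or huggingface_hub's ModelCard helpers); this string-splitting version is only meant to show which fields the model tree keys on.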

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -3,18 +3,18 @@ license: eupl-1.2
  pipeline_tag: image-text-to-text
  library_name: gguf
  base_model:
- - LetheanNetwork/lemrd
- base_model_relation: quantized
+ - LetheanNetwork/lemrd
+ base_model_relation: finetune
  tags:
- - gemma4
- - lemma
- - gguf
- - llama.cpp
- - ollama
- - multimodal
- - vision
- - on-device
- - conversational
+ - gemma4
+ - lemma
+ - gguf
+ - llama.cpp
+ - ollama
+ - multimodal
+ - vision
+ - on-device
+ - conversational
  ---
  <!--
  This content is subject to the European Union Public Licence (EUPL-1.2).