Snider Virgil committed on
Commit 95a8db5 · 1 Parent(s): 733d141

docs: correct base_model lineage for HF model tree


HF uses base_model + base_model_relation frontmatter to rank models in
search results and render the model tree widget. The Lemma family's
true lineage is:

google/gemma-4-*-it
└── LetheanNetwork/<m> (finetune, our namespace fork)
    └── lthn/<m> (finetune, LEK merged into weights)
        └── lthn/<m>-mlx (quantized, mlx 4/8bit/bf16)

Previously this repo had base_model_relation set to quantized, which
was wrong: LEK merging is a finetune, not a quant. Fix it so the
model tree widget ranks the family correctly.

Co-Authored-By: Virgil <virgil@lethean.io>
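The frontmatter fix below can be sanity-checked locally before pushing. A minimal sketch (the `check_frontmatter` helper is hypothetical, not an official HF tool; the relation values it accepts — adapter, finetune, merge, quantized — are the ones HF's model-card docs list, stated here as an assumption):

```python
import re

# Relation values assumed from HF model-card metadata docs.
ALLOWED_RELATIONS = {"adapter", "finetune", "merge", "quantized"}

def check_frontmatter(readme_text: str) -> str:
    """Extract base_model_relation from a README's YAML frontmatter
    and validate it against the allowed relation values."""
    match = re.match(r"^---\n(.*?)\n---", readme_text, re.DOTALL)
    if not match:
        raise ValueError("no YAML frontmatter found")
    for line in match.group(1).splitlines():
        if line.startswith("base_model_relation:"):
            relation = line.split(":", 1)[1].strip()
            if relation not in ALLOWED_RELATIONS:
                raise ValueError(f"unknown relation: {relation}")
            return relation
    raise ValueError("base_model_relation not set")
```

Running it against the corrected README frontmatter returns "finetune"; the old value "quantized" would also have passed this check, so the validator catches typos, not lineage mistakes — those still need a human.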

Files changed (1)
  1. README.md +12 -12
README.md CHANGED
@@ -3,19 +3,19 @@ license: eupl-1.2
  pipeline_tag: image-text-to-text
  library_name: gguf
  base_model:
- - LetheanNetwork/lemma
- base_model_relation: quantized
+ - LetheanNetwork/lemma
+ base_model_relation: finetune
  tags:
- - gemma4
- - lemma
- - gguf
- - llama.cpp
- - ollama
- - multimodal
- - vision
- - audio
- - on-device
- - conversational
+ - gemma4
+ - lemma
+ - gguf
+ - llama.cpp
+ - ollama
+ - multimodal
+ - vision
+ - audio
+ - on-device
+ - conversational
  ---
  <!--
  This content is subject to the European Union Public Licence (EUPL-1.2).