90-model merge started
I am merging 90 Nemo models via the Karcher method. It uses a ton of pagefile RAM and takes about 48 hours. No idea if it will work properly or be tainted by broken tokenizers. I'll post another update tomorrow when it's finished.
With a 4TB SSD I can merge up to maybe ~120 12B models at once; 90 models come to about 2TB of storage plus 1TB of pagefile.
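The storage math is just parameter count times bytes per weight. A quick sanity check (this is my own back-of-envelope estimate, assuming ~2 bytes/param for fp16 and ignoring tokenizer/config overhead):

```python
def merge_footprint_tb(n_models: int, params_b: float = 12.0,
                       bytes_per_param: float = 2.0) -> float:
    """Approximate total on-disk size in TB of n_models fp16 checkpoints."""
    return n_models * params_b * 1e9 * bytes_per_param / 1e12

print(round(merge_footprint_tb(90), 2))   # 2.16 TB for 90 x 12B fp16
print(round(merge_footprint_tb(120), 2))  # 2.88 TB for 120 models
```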
Investigating the error
Error quantizing: main: build = 0 (unknown)
main: built with MSVC 19.29.30159.0 for x64
main: quantizing 'outputs\tmpxqcrs556\DeepWater-Pleroma-12B-v1.fp16.gguf' to 'outputs\tmpxqcrs556\deepwater-pleroma-12b-v1-Q6_K.gguf' as Q6_K
llama_model_loader: loaded meta data with 35 key-value pairs and 363 tensors from outputs\tmpxqcrs556\DeepWater-Pleroma-12B-v1.fp16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = EldritchLabs__DeepWater Pleroma 12B v1
llama_model_loader: - kv 3: general.version str = v1
llama_model_loader: - kv 4: general.basename str = EldritchLabs__DeepWater-Pleroma
llama_model_loader: - kv 5: general.size_label str = 12B
llama_model_loader: - kv 6: general.base_model.count u32 = 0
llama_model_loader: - kv 7: general.tags arr[str,2] = ["mergekit", "merge"]
llama_model_loader: - kv 8: llama.block_count u32 = 40
llama_model_loader: - kv 9: llama.context_length u32 = 131072
llama_model_loader: - kv 10: llama.embedding_length u32 = 5120
llama_model_loader: - kv 11: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 12: llama.attention.head_count u32 = 32
llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 14: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
llama_model_loader: - kv 18: general.file_type u32 = 1
llama_model_loader: - kv 19: llama.vocab_size u32 = 131072
llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = tekken
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "<|im_end|>", "<|im_...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,269443] = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ Ġ...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 29: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 10
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 32: tokenizer.ggml.add_sep_token bool = false
llama_model_loader: - kv 33: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 34: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type f16: 282 tensors
ggml_validate_row_data: found inf value at block 367001600
llama_model_quantize: failed to quantize: tensor 'output.weight' has invalid data
main: failed to quantize model from 'outputs\tmpxqcrs556\DeepWater-Pleroma-12B-v1.fp16.gguf'
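The failure is `ggml_validate_row_data` finding an `inf` in `output.weight`, so the fp16 weights were already corrupted before quantization. The check it performs amounts to scanning every tensor for non-finite values; a toy version of that scan (on in-memory arrays here; a real check would iterate over the safetensors shards or the GGUF tensors):

```python
import numpy as np

def find_bad_tensors(tensors: dict) -> dict:
    """Return {name: count of non-finite values} for tensors that would
    fail a ggml_validate_row_data-style check."""
    bad = {}
    for name, t in tensors.items():
        n = int(np.count_nonzero(~np.isfinite(t)))
        if n:
            bad[name] = n
    return bad

# Toy example: an 'output.weight' with an injected inf, like the failure above.
weights = {
    "token_embd.weight": np.zeros((4, 4), dtype=np.float16),
    "output.weight": np.array([[1.0, np.inf], [0.0, -1.0]], dtype=np.float16),
}
print(find_bad_tensors(weights))  # {'output.weight': 1}
```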
Hello, I don't know much about finetuning and merging. What are the benefits of merging so many models vs just a couple? In my mind they would just overwrite each other since the parameters don't increase.
It's all theoretical and there might not be any benefit. In fact, most methods collapse with too many models. The theory behind this was to see if Karcher's "center", in this case a collective of mostly finetunes sprinkled with a few merges, could capture some unique novelty and creativity not seen in typical smaller merges. I wanted to use model_stock initially but ran into severe tokenizer issues with it.
Karcher is different from task-vector methods like model_stock and SCE/della in that it operates on a Riemannian manifold (the hypersphere). This seems more stable because it has to find a true center of all N models: it doesn't get to ignore or cut vectors, it requires a holistic analysis of every single bit of data, and that's resource-intensive at float32.
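For intuition, the Karcher (Fréchet) mean on the sphere is found iteratively: map every point into the tangent space at the current estimate, average, and map back along the geodesic. A toy sketch of the idea on unit vectors (my own illustration, not mergekit's actual implementation, which works per-tensor on normalized weight deltas):

```python
import numpy as np

def karcher_mean_sphere(points: np.ndarray, iters: int = 50,
                        tol: float = 1e-10) -> np.ndarray:
    """Karcher (Riemannian) mean of unit vectors on the hypersphere."""
    mu = points.mean(axis=0)
    mu /= np.linalg.norm(mu)
    for _ in range(iters):
        # Log map: tangent vectors at mu pointing toward each point.
        dots = np.clip(points @ mu, -1.0, 1.0)
        theta = np.arccos(dots)                    # geodesic distances
        perp = points - dots[:, None] * mu         # components orthogonal to mu
        norms = np.linalg.norm(perp, axis=1)
        safe = norms > 1e-12
        tangents = np.zeros_like(points)
        tangents[safe] = perp[safe] * (theta[safe] / norms[safe])[:, None]
        step = tangents.mean(axis=0)
        if np.linalg.norm(step) < tol:
            break
        # Exp map: walk back onto the sphere along the averaged direction.
        a = np.linalg.norm(step)
        mu = np.cos(a) * mu + np.sin(a) * step / a
        mu /= np.linalg.norm(mu)
    return mu

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
print(karcher_mean_sphere(pts))  # the geodesic midpoint, ~[0.7071, 0.7071]
```

Note that every point contributes to every iteration, which is why it can't "cut" outliers the way pruning-based methods do.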
Other custom methods require even stronger GPUs to test but are theoretically more resistant at higher scales. Some of these more complex methods are very experimental and attempt to 'go beyond the params into the space between them'. I don't have enough time or resources to confirm whether they actually work yet, but some ideas are in progress.
Unfortunately the merge is broken, and I'm not sure it can be fixed. It quantizes now, but it has the same slop and endless repetition, and it keeps emitting tokenizer control tokens in the output.
RAW bugged weights: https://huggingface.co/Naphula-Archives/DeepWater-Pleroma-12B-v0-raw-weights
I posted the healer script to the repo, maybe I'll come back to this experiment later, but for now I'm testing other 12B merges.
Lesson: best to start small and work your way up. Giant merges like these are bound to fail if you don't know the details of each model. 12B is way more fussy with tokenizers than 24B.
It seems I may have found a solution. Tested it on a Karcher merge of 9 models and it appears to now correct all the tokenizer and chat template issues. It still inherits the Lily slop but output isn't broken anymore.
- The model does not terminate early, repeat endlessly, or hallucinate <|im_end|> in the chat output when using the ChatML template. It works with jailbreaks too.
- The model terminates early, but only when using the Mistral Tekken template. That template also increases refusals.
The solution was complex and involved modifying several mergekit Python files plus the YAML and execution commands. I've consolidated the instructions into one thread and plan to release it on model_tools if the next 100-model Nemo merge works correctly.
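I can't post the full patch set yet, but the YAML side of it is roughly this shape. This is a sketch, not the actual config: mergekit's `tokenizer_source` option is real, but the specific settings and the donor entry below are illustrative.

```yaml
merge_method: karcher
base_model: mistralai/Mistral-Nemo-Instruct-2407
tokenizer_source: base   # force one tokenizer instead of letting donors fight
dtype: float32
models:
  - model: exampleorg/Some-Nemo-Finetune-12B   # hypothetical donor
```

The key point is pinning the tokenizer to a single source so mismatched donor vocabs can't poison the merged special tokens.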
I combined these models, which normally have tokenizer incompatibilities:
https://huggingface.co/Naphula-Archives/Nemo-karcher9-test-12B-Q6_K-GGUF
- p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop
- allura-org/Tlacuilo-12B
- Naphula/Altair-Stock-12B-v1-MPOA
- inflatebot/MN-12B-Mag-Mell-R1
- MuXodious/Irix-12B-Model_Stock-absolute-heresy
- aixonlab/Aether-12b
- SicariusSicariiStuff/Impish_Bloodmoon_12B
- SicariusSicariiStuff/Sweet_Dreams_12B
- Epiculous/Azure_Dusk-v0.2
A 20-model Karcher test is next, to make sure it handles the severe outliers. I ran the eos_scanner, audit_donors, and della_audit scripts and you can see a lot of errors.
Here's an example:
--- MAGNITUDE ANALYSIS & DATA POINTS ---
ID | Status | Delta Norm | Orig Size | Model Name
----------------------------------------------------------------------------------------------------
#34 | OK | 5.2074 | 74448896 | Fizzarolli--MN-12b-Rosier-v1
#35 | OK | 6.0969 | 74448896 | flammenai--Flammades-Mistral-Nemo-12B
#36 | OK | 6.0969 | 74448896 | flammenai--Mahou-1.5-mistral-nemo-12B
#37 | OK | 6.6312 | 74448896 | GreenerPastures--Golden-Curry-12B
#38 | OK | 6.8232 | 74448896 | Gryphe--Pantheon-RP-1.5-12b-Nemo
#39 | OK | 7.3653 | 74448896 | Gryphe--Pantheon-RP-1.6.1-12b-Nemo
#40 | OK | 0.0452 | 74448896 | HumanLLMs--Human-Like-Mistral-Nemo-Instruct-2407
#41 | OK | 1.6716 | 74448896 | IIEleven11--Kalypso
#42 | HIGH MAG | 84.0048 | 74448896 | inflatebot--MN-12B-Mag-Mell-R1
#43 | OK | 5.1134 | 74449408 | intervitens--mini-magnum-12b-v1.1
#44 | OK | 2.0833 | 74448896 | jtatman--mistral_nemo_12b_reasoning_psychology_lora
#45 | OK | 2.9633 | 74448896 | KOOWEEYUS--BlackSheep-RP-12B
#46 | OK | 2.8456 | 74448896 | Lambent--Arsenic-Shahrazad-12B-v2
#47 | OK | 2.8456 | 74448896 | Lambent--Arsenic-Shahrazad-12B-v3
#48 | OK | 2.8456 | 74448896 | Lambent--arsenic-nemo-unleashed-12B
#49 | OK | 2.8461 | 74448896 | Lambent--Gilded-Arsenic-12B
#50 | OK | 5.4349 | 74449920 | LatitudeGames--Muse-12B
#51 | OK | 6.1756 | 74449920 | LatitudeGames--Wayfarer-12B
#52 | OK | 5.5490 | 74449920 | LatitudeGames--Wayfarer-2-12B
#53 | HIGH MAG | 84.8622 | 74448896 | MarinaraSpaghetti--NemoMix-Unleashed-12B
#54 | OK | 5.1348 | 74450432 | migtissera--Tess-3-Mistral-Nemo-12B
#55 | OK | 1.1594 | 74448896 | mpasila--Mistral-freeLiPPA-LoRA-12B
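The Delta Norm column is presumably just the L2 norm of each donor's task vector (donor weights minus base weights) for a given tensor; huge values like Mag-Mell's 84 flag donors that will dominate or destabilize a merge. A toy sketch of that measurement (my own illustration, not the actual audit_donors code, which reads the safetensors shards):

```python
import numpy as np

def delta_norm(donor: np.ndarray, base: np.ndarray) -> float:
    """L2 norm of the task vector (donor - base) for one tensor."""
    return float(np.linalg.norm(donor.astype(np.float32) - base.astype(np.float32)))

base = np.ones((4, 4), dtype=np.float16)
mild = base + np.float16(0.01)   # well-behaved finetune
wild = base + np.float16(5.0)    # outlier like the HIGH MAG rows
print(delta_norm(mild, base) < delta_norm(wild, base))  # True
```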
Status | Gen ID | Vocab ID | EOS Str | Model Name
----------------------------------------------------------------------------------------------------
MATCH | 2 | 2 | </s> | Fizzarolli--MN-12b-Rosier-v1
FAIL | 4 | 4 | <|im_end|> | flammenai--Flammades-Mistral-Nemo-12B
FAIL | 4 | 4 | <|im_end|> | flammenai--Mahou-1.5-mistral-nemo-12B
BROKEN | 2 | 15 | <|im_end|> | GreenerPastures--Golden-Curry-12B
FAIL | 128 | 128 | <|im_end|> | Gryphe--Pantheon-RP-1.5-12b-Nemo
FAIL | 128 | 128 | <|im_end|> | Gryphe--Pantheon-RP-1.6.1-12b-Nemo
MATCH | 2 | 2 | </s> | HumanLLMs--Human-Like-Mistral-Nemo-Instruct-2407
MATCH | 2 | 2 | </s> | IIEleven11--Kalypso
BROKEN | MISSING | 15 | <|im_end|> | inflatebot--MN-12B-Mag-Mell-R1
MATCH | 2 | 2 | </s> | intervitens--mini-magnum-12b-v1.1
MATCH | 2 | 2 | </s> | jtatman--mistral_nemo_12b_reasoning_psychology_lora
MATCH | 2 | 2 | </s> | KOOWEEYUS--BlackSheep-RP-12B
BROKEN | MISSING | 2 | </s> | Lambent--Arsenic-Shahrazad-12B-v2
MATCH | 2 | 2 | </s> | Lambent--Arsenic-Shahrazad-12B-v3
MATCH | 2 | 2 | </s> | Lambent--arsenic-nemo-unleashed-12B
MATCH | 2 | 2 | </s> | Lambent--Gilded-Arsenic-12B
BROKEN | 131072 | MISSING | <|im_end|> | LatitudeGames--Muse-12B
BROKEN | 131072 | MISSING | <|im_end|> | LatitudeGames--Wayfarer-12B
BROKEN | 2 | MISSING | <|im_end|> | LatitudeGames--Wayfarer-2-12B
BROKEN | MISSING | 2 | </s> | MarinaraSpaghetti--NemoMix-Unleashed-12B
BROKEN | 2 | MISSING | <|im_end|> | migtissera--Tess-3-Mistral-Nemo-12B
FAIL | 2 | 2 | <|im_end|> | mpasila--Mistral-freeLiPPA-LoRA-12B
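The EOS audit boils down to comparing each model's declared EOS id against its tokenizer vocab and against the Nemo base. A minimal sketch of that classification logic (hypothetical; the real scripts read each model's generation_config.json and tokenizer files):

```python
def classify_eos(gen_id, vocab_id):
    """Compare a model's declared EOS id against its tokenizer vocab.
    Mirrors the Status column above: MATCH = agrees with the Nemo base
    (</s> = 2), FAIL = internally consistent but a different EOS
    (e.g. ChatML <|im_end|>), BROKEN = missing or mismatched ids."""
    if gen_id is None or vocab_id is None:
        return "BROKEN"   # one side is missing entirely
    if gen_id != vocab_id:
        return "BROKEN"   # generation config and vocab disagree
    if gen_id == 2:
        return "MATCH"
    return "FAIL"

print(classify_eos(2, 2))      # MATCH
print(classify_eos(4, 4))      # FAIL
print(classify_eos(None, 15))  # BROKEN
```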
The model_stock and della merges of Kraken are finished. There's also a Karcher merge currently at 92%. Next I am testing SYNERGY. These all use the same 53 models, Mistral Tekken only. So, a pack of Krakens is being released.
The della and model_stock merges are confirmed stable. The solution for now, sadly, is to keep Mistral Tekken and ChatML models separated; until I find a way to merge them, they are incompatible. Token surgeon failed, --fix-mistral-regex failed, and all the LLM tricks failed too. So there are basically two main branches of Nemo models, </s> and <|im_end|>, and Muse/Wayfarer/Pantheon also had to be kept separate because they kept breaking the tokenizer.
Kraken stock is quite creative in testing, and even with refusals it had higher compliance when jailbroken than Humanlike Mistral MPOA (which was somehow refusing prompts and advising "safe alternatives" instead). It seems to have good context retention at 12K tokens, but it requires ablation to remove the safety filters inherited from all the finetunes. The della version is somewhat less censored, due to normalize: false overriding more of the base model.
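For reference, a della config with that behavior looks roughly like the fragment below. Sketch only: `normalize: false` is a real della/mergekit parameter, but the density/epsilon values here are illustrative, not the ones used for Kraken.

```yaml
merge_method: della
base_model: B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407
parameters:
  normalize: false   # don't rescale summed deltas; donors override more of the base
  density: 0.7       # hypothetical value
  epsilon: 0.15      # hypothetical value
dtype: float32
```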
Here's the full list of finetunes in Kraken v1:
base_model: B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407
models:
- model: B:/12B/models--aixonlab--Aether-12b
- model: B:/12B/models--aixonlab--Zinakha-12b
- model: B:/12B/models--allura-org--Bigger-Body-12b
- model: B:/12B/models--allura-org--MN-12b-RP-Ink
- model: B:/12B/models--allura-org--remnant-mn-12b
- model: B:/12B/models--anthracite-org--magnum-v4-12b
- model: B:/12B/models--ArliAI--Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- model: B:/12B/models--Babsie--Opulus-12B-v3
- model: B:/12B/models--BeaverAI--mistral-doryV2-12b
- model: B:/12B/models--crestf411--nemo-sunfall-v0.6.1
- model: B:/12B/models--EpistemeAI2--Fireball-Mistral-Nemo-12B-Philos
- model: B:/12B/models--EpistemeAI--Mistral-Nemo-Instruct-12B-Philosophy-Math
- model: B:/12B/models--Fizzarolli--MN-12b-Rosier-v1
- model: B:/12B/models--HumanLLMs--Human-Like-Mistral-Nemo-Instruct-2407
- model: B:/12B/models--IIEleven11--Kalypso
- model: B:/12B/models--intervitens--mini-magnum-12b-v1.1
- model: B:/12B/models--jtatman--mistral_nemo_12b_reasoning_psychology_lora
- model: B:/12B/models--KOOWEEYUS--BlackSheep-RP-12B
- model: B:/12B/models--Lambent--Arsenic-Shahrazad-12B-v2
- model: B:/12B/models--Lambent--Arsenic-Shahrazad-12B-v3
- model: B:/12B/models--Lambent--arsenic-nemo-unleashed-12B
- model: B:/12B/models--Lambent--Gilded-Arsenic-12B
- model: B:/12B/models--mistralai--Mistral-Nemo-Instruct-2407
- model: B:/12B/models--nbeerbower--Lyra-Gutenberg-mistral-nemo-12B
- model: B:/12B/models--nbeerbower--Lyra4-Gutenberg-12B
- model: B:/12B/models--nbeerbower--mistral-nemo-bophades-12B
- model: B:/12B/models--nbeerbower--mistral-nemo-gutenberg-12B-v3
- model: B:/12B/models--nbeerbower--mistral-nemo-gutenberg-12B-v4
- model: B:/12B/models--nbeerbower--Mistral-Nemo-Gutenberg-Doppel-12B
- model: B:/12B/models--nbeerbower--Mistral-Nemo-Gutenberg-Encore-12B
- model: B:/12B/models--nbeerbower--Mistral-Nemo-Gutenberg-Vitus-12B
- model: B:/12B/models--nbeerbower--mistral-nemo-wissenschaft-12B
- model: B:/12B/models--NeverSleepHistorical--lumi-nemo-e2.0
- model: B:/12B/models--NeverSleep--Lumimaid-v0.2-12B
- model: B:/12B/models--nothingiisreal--Celeste-12B-V1.6
- model: B:/12B/models--nothingiisreal--MN-12B-Celeste-V1.9
- model: B:/12B/models--PocketDoc--Dans-DangerousWinds-V1.1.0-12b
- model: B:/12B/models--ReadyArt--Dark-Nexus-12B-v2.0
- model: B:/12B/models--ReadyArt--Forgotten-Safeword-12B-v4.0
- model: B:/12B/models--ReadyArt--Omega-Darker_The-Final-Directive-12B
- model: B:/12B/models--romaingrx--red-teamer-mistral-nemo
- model: B:/12B/models--Sao10K--MN-12B-Lyra-v1
- model: B:/12B/models--Sao10K--MN-12B-Lyra-v4
- model: B:/12B/models--shisa-ai--shisa-v2-mistral-nemo-12b
- model: B:/12B/models--sleepdeprived3--Christian-Bible-Expert-v2.0-12B
- model: B:/12B/models--SuperbEmphasis--MN-12b-RP-Ink-RP-Longform
- model: B:/12B/models--SuperbEmphasis--Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
- model: B:/12B/models--TheDrummer--Rivermind-12B-v1
- model: B:/12B/models--TheDrummer--Rocinante-12B-v1
- model: B:/12B/models--TheDrummer--Rocinante-X-12B-v1
- model: B:/12B/models--Trappu--Nemo-Picaro-12B
- model: B:/12B/models--Undi95--LocalC-12B-e2.0
- model: B:/12B/models--VAGOsolutions--SauerkrautLM-Nemo-12b-Instruct
Analyzing Base Model...
BASE MODEL: models--mistralai--Mistral-Nemo-Instruct-2407
Gen Config EOS ID: 2
Tokenizer EOS Str: </s>
Actual Vocab ID: 2
Internal Consistency: PASS
--------------------------------------------------------------------------------
Status | Gen ID | Vocab ID | EOS Str | Model Name
----------------------------------------------------------------------------------------------------
MATCH | 2 | 2 | </s> | models--aixonlab--Aether-12b
MATCH | 2 | 2 | </s> | models--aixonlab--Zinakha-12b
MATCH | 2 | 2 | </s> | models--allura-org--Bigger-Body-12b
MATCH | 2 | 2 | </s> | models--allura-org--MN-12b-RP-Ink
MATCH | 2 | 2 | </s> | models--allura-org--remnant-mn-12b
MATCH | 2 | 2 | </s> | models--anthracite-org--magnum-v4-12b
MATCH | 2 | 2 | </s> | models--ArliAI--Mistral-Nemo-12B-ArliAI-RPMax-v1.2
MATCH | 2 | 2 | </s> | models--Babsie--Opulus-12B-v3
MATCH | 2 | 2 | </s> | models--BeaverAI--mistral-doryV2-12b
MATCH | 2 | 2 | </s> | models--crestf411--nemo-sunfall-v0.6.1
MATCH | 2 | 2 | </s> | models--EpistemeAI2--Fireball-Mistral-Nemo-12B-Philos
MATCH | 2 | 2 | </s> | models--EpistemeAI--Mistral-Nemo-Instruct-12B-Philosophy-Math
MATCH | 2 | 2 | </s> | models--Fizzarolli--MN-12b-Rosier-v1
MATCH | 2 | 2 | </s> | models--HumanLLMs--Human-Like-Mistral-Nemo-Instruct-2407
MATCH | 2 | 2 | </s> | models--IIEleven11--Kalypso
MATCH | 2 | 2 | </s> | models--intervitens--mini-magnum-12b-v1.1
MATCH | 2 | 2 | </s> | models--jtatman--mistral_nemo_12b_reasoning_psychology_lora
MATCH | 2 | 2 | </s> | models--KOOWEEYUS--BlackSheep-RP-12B
BROKEN | MISSING | 2 | </s> | models--Lambent--Arsenic-Shahrazad-12B-v2
BROKEN | MISSING | 2 | </s> | models--Lambent--Arsenic-Shahrazad-12B-v3
MATCH | 2 | 2 | </s> | models--Lambent--arsenic-nemo-unleashed-12B
MATCH | 2 | 2 | </s> | models--Lambent--Gilded-Arsenic-12B
MATCH | 2 | 2 | </s> | models--mistralai--Mistral-Nemo-Instruct-2407
MATCH | 2 | 2 | </s> | models--nbeerbower--Lyra-Gutenberg-mistral-nemo-12B
MATCH | 2 | 2 | </s> | models--nbeerbower--Lyra4-Gutenberg-12B
MATCH | 2 | 2 | </s> | models--nbeerbower--mistral-nemo-bophades-12B
MATCH | 2 | 2 | </s> | models--nbeerbower--mistral-nemo-gutenberg-12B-v3
MATCH | 2 | 2 | </s> | models--nbeerbower--mistral-nemo-gutenberg-12B-v4
MATCH | 2 | 2 | </s> | models--nbeerbower--Mistral-Nemo-Gutenberg-Doppel-12B
MATCH | 2 | 2 | </s> | models--nbeerbower--Mistral-Nemo-Gutenberg-Encore-12B
MATCH | 2 | 2 | </s> | models--nbeerbower--Mistral-Nemo-Gutenberg-Vitus-12B
MATCH | 2 | 2 | </s> | models--nbeerbower--mistral-nemo-wissenschaft-12B
MATCH | 2 | 2 | </s> | models--NeverSleepHistorical--lumi-nemo-e2.0
MATCH | 2 | 2 | </s> | models--NeverSleep--Lumimaid-v0.2-12B
MATCH | 2 | 2 | </s> | models--nothingiisreal--Celeste-12B-V1.6
MATCH | 2 | 2 | </s> | models--nothingiisreal--MN-12B-Celeste-V1.9
MATCH | 2 | 2 | </s> | models--PocketDoc--Dans-DangerousWinds-V1.1.0-12b
MATCH | 2 | 2 | </s> | models--ReadyArt--Dark-Nexus-12B-v2.0
MATCH | 2 | 2 | </s> | models--ReadyArt--Forgotten-Safeword-12B-v4.0
MATCH | 2 | 2 | </s> | models--ReadyArt--Omega-Darker_The-Final-Directive-12B
MATCH | 2 | 2 | </s> | models--romaingrx--red-teamer-mistral-nemo
BROKEN | MISSING | 2 | </s> | models--Sao10K--MN-12B-Lyra-v1
BROKEN | MISSING | 2 | </s> | models--Sao10K--MN-12B-Lyra-v4
MATCH | 2 | 2 | </s> | models--shisa-ai--shisa-v2-mistral-nemo-12b
MATCH | 2 | 2 | </s> | models--sleepdeprived3--Christian-Bible-Expert-v2.0-12B
MATCH | 2 | 2 | </s> | models--SuperbEmphasis--MN-12b-RP-Ink-RP-Longform
MATCH | 2 | 2 | </s> | models--SuperbEmphasis--Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
MATCH | 2 | 2 | </s> | models--TheDrummer--Rivermind-12B-v1
MATCH | 2 | 2 | </s> | models--TheDrummer--Rocinante-12B-v1
MATCH | 2 | 2 | </s> | models--TheDrummer--Rocinante-X-12B-v1
MATCH | 2 | 2 | </s> | models--Trappu--Nemo-Picaro-12B
MATCH | 2 | 2 | </s> | models--Undi95--LocalC-12B-e2.0
BROKEN | MISSING | 2 | </s> | models--VAGOsolutions--SauerkrautLM-Nemo-12b-Instruct
----------------------------------------------------------------------------------------------------
#1 | OK | 1.6926 | 74448896 | models--aixonlab--Aether-12b
#2 | OK | 2.4361 | 74448896 | models--aixonlab--Zinakha-12b
#3 | OK | 0.0407 | 74448896 | models--allura-org--Bigger-Body-12b
#4 | OK | 1.6611 | 74448896 | models--allura-org--MN-12b-RP-Ink
#5 | OK | 0.0866 | 74449920 | models--allura-org--remnant-mn-12b
#6 | OK | 1.5070 | 74448896 | models--anthracite-org--magnum-v4-12b
#7 | OK | 0.7476 | 74448896 | models--ArliAI--Mistral-Nemo-12B-ArliAI-RPMax-v1.2
#8 | OK | 4.6310 | 74448896 | models--Babsie--Opulus-12B-v3
#9 | HIGH MAG | 5.2080 | 74448896 | models--BeaverAI--mistral-doryV2-12b
#10 | OK | 0.3196 | 74448896 | models--crestf411--nemo-sunfall-v0.6.1
#11 | HIGH MAG | 5.7044 | 74448896 | models--EpistemeAI2--Fireball-Mistral-Nemo-12B-Philos
#12 | OK | 2.3099 | 74448896 | models--EpistemeAI--Mistral-Nemo-Instruct-12B-Philosophy-Math
#13 | HIGH MAG | 5.2074 | 74448896 | models--Fizzarolli--MN-12b-Rosier-v1
#14 | OK | 0.0452 | 74448896 | models--HumanLLMs--Human-Like-Mistral-Nemo-Instruct-2407
#15 | OK | 1.6716 | 74448896 | models--IIEleven11--Kalypso
#16 | HIGH MAG | 5.1134 | 74449408 | models--intervitens--mini-magnum-12b-v1.1
#17 | OK | 2.0833 | 74448896 | models--jtatman--mistral_nemo_12b_reasoning_psychology_lora
#18 | OK | 2.9633 | 74448896 | models--KOOWEEYUS--BlackSheep-RP-12B
#19 | OK | 2.8456 | 74448896 | models--Lambent--Arsenic-Shahrazad-12B-v2
#20 | OK | 2.8456 | 74448896 | models--Lambent--Arsenic-Shahrazad-12B-v3
#21 | OK | 2.8456 | 74448896 | models--Lambent--arsenic-nemo-unleashed-12B
#22 | OK | 2.8461 | 74448896 | models--Lambent--Gilded-Arsenic-12B
#23 | OK | 0.0000 | 74448896 | models--mistralai--Mistral-Nemo-Instruct-2407
#24 | OK | 3.8395 | 74448896 | models--nbeerbower--Lyra-Gutenberg-mistral-nemo-12B
#25 | OK | 0.4548 | 74448896 | models--nbeerbower--Lyra4-Gutenberg-12B
#26 | OK | 0.0451 | 74448896 | models--nbeerbower--mistral-nemo-bophades-12B
#27 | HIGH MAG | 5.1134 | 74449408 | models--nbeerbower--mistral-nemo-gutenberg-12B-v3
#28 | OK | 1.9967 | 74448896 | models--nbeerbower--mistral-nemo-gutenberg-12B-v4
#29 | OK | 0.0272 | 74448896 | models--nbeerbower--Mistral-Nemo-Gutenberg-Doppel-12B
#30 | OK | 0.0350 | 74448896 | models--nbeerbower--Mistral-Nemo-Gutenberg-Encore-12B
#31 | OK | 0.0501 | 74448896 | models--nbeerbower--Mistral-Nemo-Gutenberg-Vitus-12B
#32 | OK | 0.0353 | 74448896 | models--nbeerbower--mistral-nemo-wissenschaft-12B
#33 | HIGH MAG | 5.1626 | 74449408 | models--NeverSleepHistorical--lumi-nemo-e2.0
#34 | HIGH MAG | 5.1647 | 74449408 | models--NeverSleep--Lumimaid-v0.2-12B
#35 | OK | 0.0208 | 74448896 | models--nothingiisreal--Celeste-12B-V1.6
#36 | OK | 4.1274 | 74448896 | models--nothingiisreal--MN-12B-Celeste-V1.9
#37 | HIGH MAG | 5.4769 | 74448896 | models--PocketDoc--Dans-DangerousWinds-V1.1.0-12b
#38 | OK | 2.4160 | 74448896 | models--ReadyArt--Dark-Nexus-12B-v2.0
#39 | OK | 2.4154 | 74448896 | models--ReadyArt--Forgotten-Safeword-12B-v4.0
#40 | OK | 0.0281 | 74448896 | models--ReadyArt--Omega-Darker_The-Final-Directive-12B
#41 | OK | 0.0000 | 74448896 | models--romaingrx--red-teamer-mistral-nemo
#42 | OK | 3.8395 | 74448896 | models--Sao10K--MN-12B-Lyra-v1
#43 | OK | 0.4543 | 74448896 | models--Sao10K--MN-12B-Lyra-v4
#44 | OK | 2.9147 | 74448896 | models--shisa-ai--shisa-v2-mistral-nemo-12b
#45 | OK | 0.0123 | 74448896 | models--sleepdeprived3--Christian-Bible-Expert-v2.0-12B
#46 | OK | 1.6650 | 74448896 | models--SuperbEmphasis--MN-12b-RP-Ink-RP-Longform
#47 | OK | 0.1050 | 74448896 | models--SuperbEmphasis--Omega-Darker_The-Final-Directive-Longform-Stage2-ERP-12B-v0.2
#48 | OK | 1.6959 | 74448896 | models--TheDrummer--Rivermind-12B-v1
#49 | OK | 1.9967 | 74448896 | models--TheDrummer--Rocinante-12B-v1
#50 | OK | 2.2716 | 74448896 | models--TheDrummer--Rocinante-X-12B-v1
#51 | OK | 4.4899 | 74448896 | models--Trappu--Nemo-Picaro-12B
#52 | HIGH MAG | 5.1950 | 74448896 | models--Undi95--LocalC-12B-e2.0
#53 | OK | 1.4549 | 74448896 | models--VAGOsolutions--SauerkrautLM-Nemo-12b-Instruct


