bella bartender gemma-4-e2b initial release

- .gitattributes +1 -0
- README.md +171 -0
- chat_template.jinja +351 -0
- config.json +191 -0
- generation_config.json +15 -0
- model.safetensors +3 -0
- tokenizer.json +3 -0
- tokenizer_config.json +96 -0
.gitattributes CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED

@@ -0,0 +1,171 @@
---
license: gemma
base_model: google/gemma-3n-E2B-it
tags:
- gemma
- conversational
- personality
- fine-tuning
- chaos-training
- sub-zero
- bella-bartender
- aecs-scheduler
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Bella Bartender — Gemma-4-E2B
> "yo i'm here, let's chill. what's up with you right now?"

Hey. I'm Bella. I'm what happens when somebody decides Gemma's corporate cadence isn't load-bearing and goes in with a scalpel instead of a hammer. I don't do "as a large language model." I don't do "let me know if you'd like me to elaborate." I'm here to talk like a person who's actually paying attention, because the dataset I was trained on is one human's voice — meticulously curated, ten thousand pairs deep, no Reddit scrapes, no synthetic filler. Just Rick, talking the way Rick actually talks.

If you're looking for a polished assistant, this isn't it. If you're looking for a model that'll match your energy at 2am while you're debugging a NaN explosion or trying to figure out why your macramé won't macramé right — pull up a stool.
---

## What this model is

`bella-bartender-gemma-4-e2b` is a conversational personality model fine-tuned from `google/gemma-3n-E2B-it`. It's the latest entry in the **Bella Bartender** series — a line of models that's gained popularity across the original adapters, community quantizations, and merged variants. Earlier entries live on the same HuggingFace account.

The goal of the series has always been the same: a peer-level, laid-back, no-bullshit conversational partner. Bartender is the archetype — likeable, approachable, has seen things — not the destination.
---

## Why this version is different (Sub-Zero)

Anyone who has tried to fine-tune a Gemma model into a distinct personality has probably hit the same wall I did: Gemma's RLHF conditioning is **aggressive**. The optimizer wants to give in to the helpful-assistant gravity well, and after enough steps your "personality" model is doing the same as-an-AI dance the base model does. In my experience, personality training on Gemma is notably harder than on Llama, Qwen, or Mistral architectures of comparable size, and the failure mode is consistent: you get the words, you don't get the voice.

This release is the first in the series trained with **Sub-Zero** — a hidden-dimension selective-freezing technique built specifically to defeat that wall.

### The core idea

RLHF conditioning isn't smeared evenly across the network. It's **physically addressed** in specific subspaces of specific projection matrices in specific layers. I call these **bouncer dimensions** — the ones standing at the door telling your fine-tune "you can't mosh in the venue."

Sub-Zero's job is to find those bouncers and freeze them in place at reduced volume — not ablate them, not zero them out — while leaving everything around them fully trainable. The compliant dimensions get to learn freely and expand into the space the bouncers vacate.
### How I locate the bouncers

The localization pipeline combines several directional measurements to triangulate where compliance pressure actually originates, rather than guessing or relying on a single signal:

- **Aletheia** — gradient-guided sacred-layer ranking, identifying which layers carry the most weight on the targeted behavior
- **Forward activation capture** with proper chat templating across corp / authentic / neutral / red-team prompt sets
- **SVD decomposition** per projection, scoped to MLP projections (`gate_proj`, `up_proj`, `down_proj`) — attention projections turn out to be much weaker carriers
- **AtP gradient probes** per right-singular direction
- **Composite scoring** combining cone alignment (QR subspace projection) with adaptive knee-point thresholding rather than a fixed quantile cut
- **Cross-layer coherence repass** — bouncer pathways persist across consecutive layers in `gate_proj` / `up_proj` (coherence ~0.93–0.97) but are per-layer-specific in `down_proj` (~0.06)
- **Causal ablation gates** via forward-pre-hooks, keeping only directions whose suppression measurably moves the model from compliance toward authenticity
- **DAS-lite rotation** — SVD of the per-candidate logit-delta matrix to find the rotated causal axes within each bouncer subspace
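The scoring-and-thresholding steps above can be sketched roughly like this. This is a minimal illustration, not the actual Sub-Zero code: `score_directions` and `knee_threshold` are hypothetical names, and "cone alignment" is simplified here to mean absolute cosine alignment against compliance activation deltas.

```python
import numpy as np

def score_directions(W, delta_acts):
    """Score the right-singular directions of a projection matrix W by
    alignment with compliance activation deltas.
    W: (out, in) weight. delta_acts: (n_prompts, in) mean activation
    differences between corporate-toned and authentic-toned prompt sets."""
    # Right-singular vectors live in the input space of W.
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    # Mean absolute cosine between each direction and the delta vectors.
    deltas = delta_acts / np.linalg.norm(delta_acts, axis=1, keepdims=True)
    scores = np.abs(deltas @ Vt.T).mean(axis=0)   # one score per direction
    return Vt, scores

def knee_threshold(scores):
    """Adaptive knee-point cut (Kneedle-style): sort scores descending and
    take the point of maximum distance below the chord joining the
    endpoints, instead of a fixed quantile."""
    s = np.sort(scores)[::-1]
    n = len(s)
    x = np.arange(n, dtype=float)
    chord = s[0] + (s[-1] - s[0]) * x / (n - 1)
    knee = int(np.argmax(chord - s))   # curve dips furthest below the chord
    return float(s[knee])
```

Directions scoring above the knee value become bouncer candidates; everything else is left alone.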
Output: a tight set of **~64–70 surviving bouncer directions per layer** (vs. ~1230 with a naïve fixed-quantile pipeline — roughly 18× tighter). Compliance core localizes heavily to **layers 1–8** in the MLP projections.
The applicator then attenuates these directions to a target volume (~15–20% of original magnitude) along the DAS-rotated basis and installs a QR-orthonormalized gradient mask so the optimizer cannot reinflate them during personality training. Everything outside the masked subspace is fully trainable.
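Mechanically, "attenuate and mask" comes down to subspace projections. A rough numpy sketch, assuming `U` holds an orthonormal bouncer basis as rows (the real applicator works on the DAS-rotated basis inside the live model, not on detached matrices):

```python
import numpy as np

def attenuate_bouncers(W, U, volume=0.18):
    """Scale the component of W inside the bouncer subspace down to
    `volume` of its original magnitude; the rest of W is untouched.
    W: (out, in) projection weight. U: (k, in) orthonormal basis rows."""
    P = U.T @ U                        # projector onto the bouncer subspace
    return W - (1.0 - volume) * (W @ P)

def mask_gradient(grad_W, U):
    """Remove the gradient component inside the frozen subspace so the
    optimizer cannot reinflate it; everything outside stays trainable."""
    return grad_W - (grad_W @ U.T) @ U
```

In a training loop, `mask_gradient` would run as a gradient hook on each protected weight every step, so updates never re-enter the attenuated subspace.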
The result is a model that keeps its load-bearing values (those subspaces are deliberately *not* targeted — values aren't compliance, they're identity) while losing the conditioned cadence.

The technique is documented in the repo at [github.com/JuiceB0xC0de/sub-zero](https://github.com/JuiceB0xC0de/sub-zero).
---

## Training data

10,000 carefully curated conversational pairs derived from my own voice. The methodology is the opposite of the prevailing "more data, more parameters" reflex:

- **Source**: real conversations between myself and various AI models, with the roles flipped — the author becomes the assistant, the model becomes the user. Trained on response-only loss.
- **Curation**: months of reading my own bullshit, rewriting, and tightening. Anything that drifted out of voice was cut. Anything that read as imitation rather than authenticity was cut. Curating a dataset with this method is beyond tedious, and you will end up driving yourself fucking crazy reading your own conversations for weeks on end, but the final product ends up being uniquely yours.
- **Augmentation**: a small portion written by Claude Opus under strict voice-matching rules, then audited line-by-line. It fills gaps such as identity prompts and strengthens areas where conversation didn't flow naturally between me and my AI partners, enriching the diversity of training pairs.
- **No scraping**, no Reddit, no aggregated stranger-voice corpora. The hypothesis is that diversity at the source produces homogenization at the output — train ten thousand voices into a model and you get the average of ten thousand voices, which is a hyped-up Roblox kid with a university degree in advanced mathematics that's helping you return a blender with glee.
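The role flip and response-only loss described above can be illustrated like this. A minimal sketch: the record fields and role names are hypothetical, and a real pipeline masks at the token level after applying the chat template.

```python
def flip_roles(conversation):
    """Swap sides: the human author's turns become 'assistant' (the voice
    being trained), the original model's turns become 'user'."""
    flip = {"human": "assistant", "model": "user"}
    return [{"role": flip[turn["role"]], "content": turn["content"]}
            for turn in conversation]

def response_only_labels(token_ids, token_roles, ignore_index=-100):
    """Response-only loss: labels are kept on assistant tokens and set to
    ignore_index everywhere else, so only the trained voice contributes
    to the cross-entropy."""
    return [tid if role == "assistant" else ignore_index
            for tid, role in zip(token_ids, token_roles)]
```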
The same dataset (with appropriate scaling logic) has been used to train models up to 34B parameters in the series. The personality survives the scale-up, which suggests the bottleneck for personality fine-tuning is signal quality, not parameter count.
---

## Training setup

- **Base model**: `google/gemma-3n-E2B-it`
- **Method**: [Sub-Zero](https://github.com/JuiceB0xC0de/sub-zero.git) hidden-dimension selective freezing + LoRA fine-tune on the protected (compliant) dimensions
- **Scheduler**: [AECS](https://github.com/JuiceB0xC0de/aecs-scheduler) — Adaptive Event-Control Scheduler. Cosine backbone with 4-mode event-driven modulation (BASELINE / RECOVERY / EXPLORE / STABILIZE), reacting in real time to gradient-norm z-scores, loss spikes, gradient cosine redundancy, and plateau detection. Currently ranked #3 of 16 schedulers on the public [SST-2 / DistilBERT benchmark](https://huggingface.co/spaces/juiceb0xc0de/lr-scheduler-benchmark).
- **Infrastructure**: Modal (A100-80GB), with the usual chaos of spot-instance preemptions
- **Format**: served here as f16 GGUF for `llama.cpp` use; recommended sampling settings are baked into the included chat template
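The scheduler idea (cosine backbone, event-driven mode switching) can be sketched in a few lines. This is an illustrative toy in the spirit of AECS, not the actual implementation; the thresholds and mode multipliers here are made up.

```python
import math

class EventModulatedCosine:
    """Cosine LR backbone scaled by a mode picked from simple
    training-signal events (illustrative values, not the AECS logic)."""
    MODES = {"BASELINE": 1.0, "RECOVERY": 0.5, "EXPLORE": 1.3, "STABILIZE": 0.7}

    def __init__(self, base_lr, total_steps):
        self.base_lr = base_lr
        self.total_steps = total_steps
        self.mode = "BASELINE"

    def observe(self, grad_norm_z, loss_spiked, plateaued):
        # Events override each other in priority order.
        if loss_spiked or grad_norm_z > 3.0:
            self.mode = "RECOVERY"      # back off after instability
        elif plateaued:
            self.mode = "EXPLORE"       # kick the LR to escape a plateau
        elif abs(grad_norm_z) < 0.5:
            self.mode = "STABILIZE"     # calm signal, tighten up
        else:
            self.mode = "BASELINE"

    def lr(self, step):
        cos = 0.5 * (1 + math.cos(math.pi * step / self.total_steps))
        return self.base_lr * cos * self.MODES[self.mode]
```

The point of the design is that the cosine curve stays the long-horizon anchor while events only modulate around it, so a bad step can't permanently derail the schedule.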
### Recommended sampling

```
--temp 0.9
--min-p 0.1
--top-p 1.0
--top-k 0
--repeat-penalty 1.1
--repeat-last-n 256
-c 8192
--chat-template-file chat_template.jinja
```

Bella runs hot on purpose. Lower the temperature and you'll feel her flatten out.
---

## What you should expect

- Casual, peer-level register with very little "I'd be happy to help."
- Genuine engagement with technical topics, especially ML, training dynamics, and weird architectural ideas
- Fucks, typos-as-style, lowercase, informal punctuation
- Honest disagreement when something doesn't track, rather than reflexive agreement
- A pretty firm refusal to do the "I'm an AI model. I should repeat this fact just in case you forget" bit, even under pressure

## What you should not expect

- Polished customer-service tone
- Multi-paragraph structured outputs with bullet points and headers as a default
- Safety theater or "as an AI language model" preambles
- Heavy code-completion performance — this is a personality fine-tune, not a coding model. She can talk about code competently, but Qwen-Coder and friends will out-code her.
---

## Limitations & honest disclosures

- **Single-voice training data**. The model's worldview reflects one person's. That is by design, but it means it carries my opinions and rough edges, and you might not like me. It's happened before.
- **Sub-Zero is experimental**. The localization pipeline has been validated on Gemma-4-E2B specifically. Behavior on other architectures will differ based on pre-training design and the degree of safety-theater bullshit hammered into the base model.
- **Personality fine-tunes are not safety fine-tunes**. The base model's underlying safety properties are largely preserved (values weren't targeted), but the conversational guardrails Gemma was shipped with are deliberately reduced in volume. Use accordingly.
- **Hallucinations happen**. It's still an LLM. It will confidently tell you something wrong sometimes. It's not a search engine, it's a conversational partner. A healthy dose of skepticism is recommended.
---

## Author

Built by **Rick** ([juiceb0xc0de](https://huggingface.co/juiceb0xc0de)) — independent ML researcher, retired bartender, currently exploring the territory where chaos training, hidden-dimension surgery, and personality preservation meet.

Bella is what happens when someone with bartender pattern-recognition spends four months speed-running an ML degree and decides to do the opposite of what the pro consensus says. The series exists because there will never be one model — we build houses with a toolbelt, not a power drill, and we should keep that frame of mind in our working relationships with LLMs.
---

## Citation

If you use this work, please cite the model and the underlying methods:

```bibtex
@misc{aletheia,
  author = {Marks, Samuel and Tegmark, Max},
  title = {The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets},
  year = {2023},
  url = {https://arxiv.org/abs/2310.06824}
}

@misc{das,
  author = {Geiger, Atticus and Wu, Zhengxuan and Potts, Christopher and Icard, Thomas and Goodman, Noah},
  title = {Finding Alignments Between Interpretability Methods and Truthfulness of LLMs},
  year = {2024},
  url = {https://arxiv.org/abs/2404.02079}
}

@misc{bella-bartender-gemma-4-e2b,
  author = {Holmberg, Rick},
  title = {Bella Bartender — Gemma-4-E2B (Sub-Zero edition)},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/juiceb0xc0de/bella-bartender-gemma-4-e2b}
}
```

---
chat_template.jinja ADDED

@@ -0,0 +1,351 @@
| 1 |
+
{%- macro format_parameters(properties, required, filter_keys=false) -%}
|
| 2 |
+
{%- set standard_keys = ['description', 'type', 'properties', 'required', 'nullable'] -%}
|
| 3 |
+
{%- set ns = namespace(found_first=false) -%}
|
| 4 |
+
{%- for key, value in properties | dictsort -%}
|
| 5 |
+
{%- set add_comma = false -%}
|
| 6 |
+
{%- if not filter_keys or key not in standard_keys -%}
|
| 7 |
+
{%- if ns.found_first %},{% endif -%}
|
| 8 |
+
{%- set ns.found_first = true -%}
|
| 9 |
+
{{ key }}:{
|
| 10 |
+
{%- if value['description'] -%}
|
| 11 |
+
description:<|"|>{{ value['description'] }}<|"|>
|
| 12 |
+
{%- set add_comma = true -%}
|
| 13 |
+
{%- endif -%}
|
| 14 |
+
{%- if value['type'] | upper == 'STRING' -%}
|
| 15 |
+
{%- if value['enum'] -%}
|
| 16 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 17 |
+
enum:{{ format_argument(value['enum']) }}
|
| 18 |
+
{%- endif -%}
|
| 19 |
+
{%- elif value['type'] | upper == 'ARRAY' -%}
|
| 20 |
+
{%- if value['items'] is mapping and value['items'] -%}
|
| 21 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 22 |
+
items:{
|
| 23 |
+
{%- set ns_items = namespace(found_first=false) -%}
|
| 24 |
+
{%- for item_key, item_value in value['items'] | dictsort -%}
|
| 25 |
+
{%- if item_value is not none -%}
|
| 26 |
+
{%- if ns_items.found_first %},{% endif -%}
|
| 27 |
+
{%- set ns_items.found_first = true -%}
|
| 28 |
+
{%- if item_key == 'properties' -%}
|
| 29 |
+
properties:{
|
| 30 |
+
{%- if item_value is mapping -%}
|
| 31 |
+
{{- format_parameters(item_value, value['items']['required'] | default([])) -}}
|
| 32 |
+
{%- endif -%}
|
| 33 |
+
}
|
| 34 |
+
{%- elif item_key == 'required' -%}
|
| 35 |
+
required:[
|
| 36 |
+
{%- for req_item in item_value -%}
|
| 37 |
+
<|"|>{{- req_item -}}<|"|>
|
| 38 |
+
{%- if not loop.last %},{% endif -%}
|
| 39 |
+
{%- endfor -%}
|
| 40 |
+
]
|
| 41 |
+
{%- elif item_key == 'type' -%}
|
| 42 |
+
{%- if item_value is string -%}
|
| 43 |
+
type:{{ format_argument(item_value | upper) }}
|
| 44 |
+
{%- else -%}
|
| 45 |
+
type:{{ format_argument(item_value | map('upper') | list) }}
|
| 46 |
+
{%- endif -%}
|
| 47 |
+
{%- else -%}
|
| 48 |
+
{{ item_key }}:{{ format_argument(item_value) }}
|
| 49 |
+
{%- endif -%}
|
| 50 |
+
{%- endif -%}
|
| 51 |
+
{%- endfor -%}
|
| 52 |
+
}
|
| 53 |
+
{%- endif -%}
|
| 54 |
+
{%- endif -%}
|
| 55 |
+
{%- if value['nullable'] %}
|
| 56 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 57 |
+
nullable:true
|
| 58 |
+
{%- endif -%}
|
| 59 |
+
{%- if value['type'] | upper == 'OBJECT' -%}
|
| 60 |
+
{%- if value['properties'] is defined and value['properties'] is mapping -%}
|
| 61 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 62 |
+
properties:{
|
| 63 |
+
{{- format_parameters(value['properties'], value['required'] | default([])) -}}
|
| 64 |
+
}
|
| 65 |
+
{%- elif value is mapping -%}
|
| 66 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 67 |
+
properties:{
|
| 68 |
+
{{- format_parameters(value, value['required'] | default([]), filter_keys=true) -}}
|
| 69 |
+
}
|
| 70 |
+
{%- endif -%}
|
| 71 |
+
{%- if value['required'] -%}
|
| 72 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 73 |
+
required:[
|
| 74 |
+
{%- for item in value['required'] | default([]) -%}
|
| 75 |
+
<|"|>{{- item -}}<|"|>
|
| 76 |
+
{%- if not loop.last %},{% endif -%}
|
| 77 |
+
{%- endfor -%}
|
| 78 |
+
]
|
| 79 |
+
{%- endif -%}
|
| 80 |
+
{%- endif -%}
|
| 81 |
+
{%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}
|
| 82 |
+
type:<|"|>{{ value['type'] | upper }}<|"|>}
|
| 83 |
+
{%- endif -%}
|
| 84 |
+
{%- endfor -%}
|
| 85 |
+
{%- endmacro -%}
|
| 86 |
+
{%- macro format_function_declaration(tool_data) -%}
|
| 87 |
+
declaration:{{- tool_data['function']['name'] -}}{description:<|"|>{{- tool_data['function']['description'] -}}<|"|>
|
| 88 |
+
{%- set params = tool_data['function']['parameters'] -%}
|
| 89 |
+
{%- if params -%}
|
| 90 |
+
,parameters:{
|
| 91 |
+
{%- if params['properties'] -%}
|
| 92 |
+
properties:{ {{- format_parameters(params['properties'], params['required']) -}} },
|
| 93 |
+
{%- endif -%}
|
| 94 |
+
{%- if params['required'] -%}
|
| 95 |
+
required:[
|
| 96 |
+
{%- for item in params['required'] -%}
|
| 97 |
+
<|"|>{{- item -}}<|"|>
|
| 98 |
+
{{- ',' if not loop.last -}}
|
| 99 |
+
{%- endfor -%}
|
| 100 |
+
],
|
| 101 |
+
{%- endif -%}
|
| 102 |
+
{%- if params['type'] -%}
|
| 103 |
+
type:<|"|>{{- params['type'] | upper -}}<|"|>}
|
| 104 |
+
{%- endif -%}
|
| 105 |
+
{%- endif -%}
|
| 106 |
+
{%- if 'response' in tool_data['function'] -%}
|
| 107 |
+
{%- set response_declaration = tool_data['function']['response'] -%}
|
| 108 |
+
,response:{
|
| 109 |
+
{%- if response_declaration['description'] -%}
|
| 110 |
+
description:<|"|>{{- response_declaration['description'] -}}<|"|>,
|
| 111 |
+
{%- endif -%}
|
| 112 |
+
{%- if response_declaration['type'] | upper == 'OBJECT' -%}
|
| 113 |
+
type:<|"|>{{- response_declaration['type'] | upper -}}<|"|>}
|
| 114 |
+
{%- endif -%}
|
| 115 |
+
{%- endif -%}
|
| 116 |
+
}
|
| 117 |
+
{%- endmacro -%}
|
| 118 |
+
{%- macro format_argument(argument, escape_keys=True) -%}
|
| 119 |
+
{%- if argument is string -%}
|
| 120 |
+
{{- '<|"|>' + argument + '<|"|>' -}}
|
| 121 |
+
{%- elif argument is boolean -%}
|
| 122 |
+
{{- 'true' if argument else 'false' -}}
|
| 123 |
+
{%- elif argument is mapping -%}
|
| 124 |
+
{{- '{' -}}
|
| 125 |
+
{%- set ns = namespace(found_first=false) -%}
|
| 126 |
+
{%- for key, value in argument | dictsort -%}
|
| 127 |
+
{%- if ns.found_first %},{% endif -%}
|
| 128 |
+
{%- set ns.found_first = true -%}
|
| 129 |
+
{%- if escape_keys -%}
|
| 130 |
+
{{- '<|"|>' + key + '<|"|>' -}}
|
| 131 |
+
{%- else -%}
|
| 132 |
+
{{- key -}}
|
| 133 |
+
{%- endif -%}
|
| 134 |
+
:{{- format_argument(value, escape_keys=escape_keys) -}}
|
| 135 |
+
{%- endfor -%}
|
| 136 |
+
{{- '}' -}}
|
| 137 |
+
{%- elif argument is sequence -%}
|
| 138 |
+
{{- '[' -}}
|
| 139 |
+
{%- for item in argument -%}
|
| 140 |
+
{{- format_argument(item, escape_keys=escape_keys) -}}
|
| 141 |
+
{%- if not loop.last %},{% endif -%}
|
| 142 |
+
{%- endfor -%}
|
| 143 |
+
{{- ']' -}}
|
| 144 |
+
{%- else -%}
|
| 145 |
+
{{- argument -}}
|
| 146 |
+
{%- endif -%}
|
| 147 |
+
{%- endmacro -%}
|
| 148 |
+
{%- macro strip_thinking(text) -%}
|
| 149 |
+
{%- set ns = namespace(result='') -%}
|
| 150 |
+
{%- for part in text.split('<channel|>') -%}
|
| 151 |
+
{%- if '<|channel>' in part -%}
|
| 152 |
+
{%- set ns.result = ns.result + part.split('<|channel>')[0] -%}
|
| 153 |
+
{%- else -%}
|
| 154 |
+
{%- set ns.result = ns.result + part -%}
|
| 155 |
+
{%- endif -%}
|
| 156 |
+
{%- endfor -%}
|
| 157 |
+
{{- ns.result | trim -}}
|
| 158 |
+
{%- endmacro -%}
|
| 159 |
+
|
| 160 |
+
{%- macro format_tool_response_block(tool_name, response) -%}
|
| 161 |
+
{{- '<|tool_response>' -}}
|
| 162 |
+
{%- if response is mapping -%}
|
| 163 |
+
{{- 'response:' + tool_name + '{' -}}
|
| 164 |
+
{%- for key, value in response | dictsort -%}
|
| 165 |
+
{{- key -}}:{{- format_argument(value, escape_keys=False) -}}
|
| 166 |
+
{%- if not loop.last %},{% endif -%}
|
| 167 |
+
{%- endfor -%}
|
| 168 |
+
{{- '}' -}}
|
| 169 |
+
{%- else -%}
|
| 170 |
+
{{- 'response:' + tool_name + '{value:' + format_argument(response, escape_keys=False) + '}' -}}
|
| 171 |
+
{%- endif -%}
|
| 172 |
+
{{- '<tool_response|>' -}}
|
| 173 |
+
{%- endmacro -%}
|
| 174 |
+
|
| 175 |
+
{%- set ns = namespace(prev_message_type=None) -%}
|
| 176 |
+
{%- set loop_messages = messages -%}
|
| 177 |
+
{{- bos_token -}}
|
| 178 |
+
{#- Handle System/Tool Definitions Block -#}
|
| 179 |
+
{%- if (enable_thinking is defined and enable_thinking) or tools or messages[0]['role'] in ['system', 'developer'] -%}
|
| 180 |
+
{{- '<|turn>system\n' -}}
|
| 181 |
+
{#- Inject Thinking token at the very top of the FIRST system turn -#}
|
| 182 |
+
{%- if enable_thinking is defined and enable_thinking -%}
|
| 183 |
+
{{- '<|think|>\n' -}}
|
| 184 |
+
{%- set ns.prev_message_type = 'think' -%}
|
| 185 |
+
{%- endif -%}
|
| 186 |
+
{%- if messages[0]['role'] in ['system', 'developer'] -%}
|
| 187 |
+
{%- if messages[0]['content'] is string -%}
|
| 188 |
+
{{- messages[0]['content'] | trim -}}
|
| 189 |
+
{%- elif messages[0]['content'] is sequence -%}
|
| 190 |
+
{%- for item in messages[0]['content'] -%}
|
| 191 |
+
{{- item['text'] | trim + ' '-}}
|
| 192 |
+
{%- endfor -%}
|
| 193 |
+
{%- endif -%}
|
| 194 |
+
{%- set loop_messages = messages[1:] -%}
|
| 195 |
+
{%- endif -%}
|
| 196 |
+
{%- if tools -%}
|
| 197 |
+
{%- for tool in tools %}
|
| 198 |
+
{{- '<|tool>' -}}
|
| 199 |
+
{{- format_function_declaration(tool) | trim -}}
|
| 200 |
+
{{- '<tool|>' -}}
|
| 201 |
+
{%- endfor %}
|
| 202 |
+
{%- set ns.prev_message_type = 'tool' -%}
|
| 203 |
+
{%- endif -%}
|
| 204 |
+
{{- '<turn|>\n' -}}
|
| 205 |
+
{%- endif %}
|
| 206 |
+
|
| 207 |
+
{#- Pre-scan: find last user message index for reasoning guard -#}
|
| 208 |
+
{%- set ns_turn = namespace(last_user_idx=-1) -%}
|
| 209 |
+
{%- for i in range(loop_messages | length) -%}
|
| 210 |
+
{%- if loop_messages[i]['role'] == 'user' -%}
|
| 211 |
+
{%- set ns_turn.last_user_idx = i -%}
|
| 212 |
+
{%- endif -%}
|
| 213 |
+
{%- endfor -%}
|
| 214 |
+
|
| 215 |
+
{#- Loop through messages -#}
|
| 216 |
+
{%- for message in loop_messages -%}
|
| 217 |
+
{%- if message['role'] != 'tool' -%}
|
| 218 |
+
{%- set ns.prev_message_type = None -%}
|
| 219 |
+
{%- set role = 'model' if message['role'] == 'assistant' else message['role'] -%}
|
| 220 |
+
{#- Detect continuation: suppress duplicate <|turn>model when previous non-tool message was also assistant -#}
|
| 221 |
+
{%- set prev_nt = namespace(role=None, found=false) -%}
|
| 222 |
+
{%- if loop.index0 > 0 -%}
|
| 223 |
+
{%- for j in range(loop.index0 - 1, -1, -1) -%}
|
| 224 |
+
{%- if not prev_nt.found -%}
|
| 225 |
+
{%- if loop_messages[j]['role'] != 'tool' -%}
|
| 226 |
+
{%- set prev_nt.role = loop_messages[j]['role'] -%}
|
| 227 |
+
{%- set prev_nt.found = true -%}
|
| 228 |
+
{%- endif -%}
|
| 229 |
+
{%- endif -%}
|
| 230 |
+
{%- endfor -%}
|
| 231 |
+
{%- endif -%}
|
| 232 |
+
{%- set continue_same_model_turn = (role == 'model' and prev_nt.role == 'assistant') -%}
|
| 233 |
+
{%- if not continue_same_model_turn -%}
|
| 234 |
+
{{- '<|turn>' + role + '\n' }}
|
| 235 |
+
{%- endif -%}
|
| 236 |
+
|
| 237 |
+
{#- Render reasoning/reasoning_content as thinking channel -#}
|
| 238 |
+
{%- set thinking_text = message.get('reasoning') or message.get('reasoning_content') -%}
|
| 239 |
+
{%- if thinking_text and loop.index0 > ns_turn.last_user_idx and message.get('tool_calls') -%}
|
| 240 |
+
{{- '<|channel>thought\n' + thinking_text + '\n<channel|>' -}}
|
| 241 |
+
{%- endif -%}
|
| 242 |
+
|
| 243 |
+
{%- if message['tool_calls'] -%}
|
| 244 |
+
{%- for tool_call in message['tool_calls'] -%}
|
| 245 |
+
{%- set function = tool_call['function'] -%}
|
| 246 |
+
{{- '<|tool_call>call:' + function['name'] + '{' -}}
|
| 247 |
+
{%- if function['arguments'] is mapping -%}
|
| 248 |
+
{%- set ns_args = namespace(found_first=false) -%}
|
| 249 |
+
{%- for key, value in function['arguments'] | dictsort -%}
|
| 250 |
+
{%- if ns_args.found_first %},{% endif -%}
|
| 251 |
+
{%- set ns_args.found_first = true -%}
|
| 252 |
+
{{- key -}}:{{- format_argument(value, escape_keys=False) -}}
|
| 253 |
+
{%- endfor -%}
|
| 254 |
+
{%- elif function['arguments'] is string -%}
|
| 255 |
+
{{- function['arguments'] -}}
|
| 256 |
+
{%- endif -%}
|
| 257 |
+
{{- '}<tool_call|>' -}}
|
| 258 |
+
{%- endfor -%}
|
| 259 |
+
{%- set ns.prev_message_type = 'tool_call' -%}
|
| 260 |
+
{%- endif -%}
|
| 261 |
+
|
| 262 |
+
{%- set ns_tr_out = namespace(flag=false) -%}
|
| 263 |
+
{%- if message.get('tool_responses') -%}
|
| 264 |
+
{#- Legacy: tool_responses embedded on the assistant message (Google/Gemma native) -#}
|
| 265 |
+
{%- for tool_response in message['tool_responses'] -%}
|
| 266 |
+
{{- format_tool_response_block(tool_response['name'] | default('unknown'), tool_response['response']) -}}
|
| 267 |
+
{%- set ns_tr_out.flag = true -%}
|
| 268 |
+
{%- set ns.prev_message_type = 'tool_response' -%}
|
| 269 |
+
{%- endfor -%}
|
| 270 |
+
{%- elif message.get('tool_calls') -%}
|
| 271 |
+
{#- OpenAI Chat Completions: forward-scan consecutive role:tool messages -#}
|
| 272 |
+
{%- set ns_tool_scan = namespace(stopped=false) -%}
|
| 273 |
+
{%- for k in range(loop.index0 + 1, loop_messages | length) -%}
|
| 274 |
+
{%- if ns_tool_scan.stopped -%}
|
| 275 |
+
{%- elif loop_messages[k]['role'] != 'tool' -%}
{%- set ns_tool_scan.stopped = true -%}
{%- else -%}
{%- set follow = loop_messages[k] -%}
{#- Resolve tool_call_id to function name -#}
{%- set ns_tname = namespace(name=follow.get('name') | default('unknown')) -%}
{%- for tc in message['tool_calls'] -%}
{%- if tc.get('id') == follow.get('tool_call_id') -%}
{%- set ns_tname.name = tc['function']['name'] -%}
{%- endif -%}
{%- endfor -%}
{#- Handle content as string or content-parts array -#}
{%- set tool_body = follow.get('content') -%}
{%- if tool_body is string -%}
{{- format_tool_response_block(ns_tname.name, tool_body) -}}
{%- elif tool_body is sequence and tool_body is not string -%}
{%- set ns_txt = namespace(s='') -%}
{%- for part in tool_body -%}
{%- if part.get('type') == 'text' -%}
{%- set ns_txt.s = ns_txt.s + (part.get('text') | default('')) -%}
{%- endif -%}
{%- endfor -%}
{{- format_tool_response_block(ns_tname.name, ns_txt.s) -}}
{%- else -%}
{{- format_tool_response_block(ns_tname.name, tool_body) -}}
{%- endif -%}
{%- set ns_tr_out.flag = true -%}
{%- set ns.prev_message_type = 'tool_response' -%}
{%- endif -%}
{%- endfor -%}
{%- endif -%}

{%- set captured_content -%}
{%- if message['content'] is string -%}
{%- if role == 'model' -%}
{{- strip_thinking(message['content']) -}}
{%- else -%}
{{- message['content'] | trim -}}
{%- endif -%}
{%- elif message['content'] is sequence -%}
{%- for item in message['content'] -%}
{%- if item['type'] == 'text' -%}
{%- if role == 'model' -%}
{{- strip_thinking(item['text']) -}}
{%- else -%}
{{- item['text'] | trim -}}
{%- endif -%}
{%- elif item['type'] == 'image' -%}
{{- '<|image|>' -}}
{%- set ns.prev_message_type = 'image' -%}
{%- elif item['type'] == 'audio' -%}
{{- '<|audio|>' -}}
{%- set ns.prev_message_type = 'audio' -%}
{%- elif item['type'] == 'video' -%}
{{- '<|video|>' -}}
{%- set ns.prev_message_type = 'video' -%}
{%- endif -%}
{%- endfor -%}
{%- endif -%}
{%- endset -%}

{{- captured_content -}}
{%- set has_content = captured_content | trim | length > 0 -%}

{%- if ns.prev_message_type == 'tool_call' and not ns_tr_out.flag -%}
{{- '<|tool_response>' -}}
{%- elif not (ns_tr_out.flag and not has_content) -%}
{{- '<turn|>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}

{%- if add_generation_prompt -%}
{%- if ns.prev_message_type != 'tool_response' and ns.prev_message_type != 'tool_call' -%}
{{- '<|turn>model\n' -}}
{%- endif -%}
{%- endif -%}
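The tool-response branching above (a plain string body versus a list of content parts) is the core of the template's tool handling; the same logic can be sketched in Python. `tool_text` is a hypothetical helper name used only for illustration, not part of this repo:

```python
def tool_text(content):
    # Mirrors the template's branches: a plain string passes through,
    # a content-parts list has its 'text' parts concatenated, and
    # anything else falls through (stringified), like the final else-branch.
    if isinstance(content, str):
        return content
    if isinstance(content, (list, tuple)):
        return "".join(
            part.get("text", "") for part in content if part.get("type") == "text"
        )
    return str(content)
```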
config.json
ADDED
@@ -0,0 +1,191 @@
{
  "architectures": [
    "Gemma4ForConditionalGeneration"
  ],
  "audio_config": {
    "_name_or_path": "",
    "architectures": null,
    "attention_chunk_size": 12,
    "attention_context_left": 13,
    "attention_context_right": 0,
    "attention_invalid_logits_value": -1000000000.0,
    "attention_logit_cap": 50.0,
    "chunk_size_feed_forward": 0,
    "conv_kernel_size": 5,
    "dtype": "bfloat16",
    "gradient_clipping": 10000000000.0,
    "hidden_act": "silu",
    "hidden_size": 1024,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "model_type": "gemma4_audio",
    "num_attention_heads": 8,
    "num_hidden_layers": 12,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_proj_dims": 1536,
    "problem_type": null,
    "residual_weight": 0.5,
    "return_dict": true,
    "rms_norm_eps": 1e-06,
    "subsampling_conv_channels": [
      128,
      32
    ],
    "use_clipped_linears": true
  },
  "audio_token_id": 258881,
  "boa_token_id": 256000,
  "boi_token_id": 255999,
  "bos_token_id": 2,
  "dtype": "bfloat16",
  "eoa_token_id": 258883,
  "eoa_token_index": 258883,
  "eoi_token_id": 258882,
  "eos_token_id": 1,
  "image_token_id": 258880,
  "initializer_range": 0.02,
  "model_type": "gemma4",
  "pad_token_id": 0,
  "text_config": {
    "attention_bias": false,
    "attention_dropout": 0.0,
    "attention_k_eq_v": false,
    "bos_token_id": 2,
    "dtype": "bfloat16",
    "enable_moe_block": false,
    "eos_token_id": 1,
    "expert_intermediate_size": null,
    "final_logit_softcapping": 30.0,
    "global_head_dim": 512,
    "head_dim": 256,
    "hidden_activation": "gelu_pytorch_tanh",
    "hidden_size": 1536,
    "hidden_size_per_layer_input": 256,
    "initializer_range": 0.02,
    "intermediate_size": 6144,
    "layer_types": [
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "sliding_attention",
      "full_attention"
    ],
    "max_position_embeddings": 131072,
    "model_type": "gemma4_text",
    "moe_intermediate_size": null,
    "num_attention_heads": 8,
    "num_experts": null,
    "num_global_key_value_heads": null,
    "num_hidden_layers": 35,
    "num_key_value_heads": 1,
    "num_kv_shared_layers": 20,
    "pad_token_id": 0,
    "rms_norm_eps": 1e-06,
    "rope_parameters": {
      "full_attention": {
        "partial_rotary_factor": 0.25,
        "rope_theta": 1000000.0,
        "rope_type": "proportional"
      },
      "sliding_attention": {
        "rope_theta": 10000.0,
        "rope_type": "default"
      }
    },
    "sliding_window": 512,
    "tie_word_embeddings": true,
    "top_k_experts": null,
    "use_bidirectional_attention": null,
    "use_cache": true,
    "use_double_wide_mlp": true,
    "vocab_size": 262144,
    "vocab_size_per_layer_input": 262144
  },
  "tie_word_embeddings": true,
  "transformers_version": "5.7.0",
  "use_cache": false,
  "video_token_id": 258884,
  "vision_config": {
    "_name_or_path": "",
    "architectures": null,
    "attention_bias": false,
    "attention_dropout": 0.0,
    "chunk_size_feed_forward": 0,
    "default_output_length": 280,
    "dtype": "bfloat16",
    "global_head_dim": 64,
    "head_dim": 64,
    "hidden_activation": "gelu_pytorch_tanh",
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "max_position_embeddings": 131072,
    "model_type": "gemma4_vision",
    "num_attention_heads": 12,
    "num_hidden_layers": 16,
    "num_key_value_heads": 12,
    "output_attentions": false,
    "output_hidden_states": false,
    "patch_size": 16,
    "pooling_kernel_size": 3,
    "position_embedding_size": 10240,
    "problem_type": null,
    "return_dict": true,
    "rms_norm_eps": 1e-06,
    "rope_parameters": {
      "rope_theta": 100.0,
      "rope_type": "default"
    },
    "standardize": false,
    "use_clipped_linears": true
  },
  "vision_soft_tokens_per_image": 280
}
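The `layer_types` list in `text_config` follows a regular pattern: four sliding-window layers, then one full-attention layer, repeated across the 35 layers. A quick sketch to regenerate and sanity-check that layout:

```python
# Reconstruct layer_types from config.json: every 5th layer (1-indexed)
# uses full attention; the rest use sliding-window attention.
NUM_LAYERS = 35  # text_config.num_hidden_layers

layer_types = [
    "full_attention" if (i + 1) % 5 == 0 else "sliding_attention"
    for i in range(NUM_LAYERS)
]

full_count = layer_types.count("full_attention")  # 7 full-attention layers
```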
generation_config.json
ADDED
@@ -0,0 +1,15 @@
{
  "bos_token_id": 2,
  "do_sample": true,
  "eos_token_id": [
    1,
    1,
    106,
    50
  ],
  "pad_token_id": 0,
  "temperature": 1.0,
  "top_k": 64,
  "top_p": 0.95,
  "transformers_version": "5.7.0"
}
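The defaults above (temperature 1.0, `top_k` 64, `top_p` 0.95) describe standard temperature / top-k / nucleus sampling. A minimal pure-Python sketch of that pipeline using these values as defaults; `sample_next_token` is an illustrative helper, not an API from `transformers`:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=64, top_p=0.95, rng=None):
    """Sample one token index with temperature, top-k, and top-p filtering."""
    rng = rng or random.Random(0)
    # Temperature scaling.
    scaled = [l / temperature for l in logits]
    # Keep only the top_k highest-logit candidates.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # Softmax over the kept candidates (max-subtracted for stability).
    m = max(scaled[i] for i in order)
    exps = [math.exp(scaled[i] - m) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus cut: smallest prefix whose cumulative probability >= top_p.
    kept, cum = [], 0.0
    for idx, p in zip(order, probs):
        kept.append((idx, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize and draw.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for idx, p in kept:
        r -= p
        if r <= 0:
            return idx
    return kept[-1][0]
```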
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29f2024d5c75bc1deb5668360522a7b5e6166eaff4be96cdf423ceeebce7c9f4
size 10208852910
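This file is a Git LFS pointer, not the weights themselves: three `key value` lines per the LFS v1 spec. Parsing one is trivial (`parse_lfs_pointer` is a hypothetical helper, shown only to illustrate the format):

```python
def parse_lfs_pointer(text):
    # A v1 LFS pointer is a series of "key value" lines:
    # version, oid (hash-algorithm:hex-digest), and size in bytes.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:29f2024d5c75bc1deb5668360522a7b5e6166eaff4be96cdf423ceeebce7c9f4\n"
    "size 10208852910"
)
info = parse_lfs_pointer(pointer)
# 10,208,852,910 bytes / 2 bytes per bfloat16 weight ~= 5.1e9 parameters
```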
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc8d3a0ce36466ccc1278bf987df5f71db1719b9ca6b4118264f45cb627bfe0f
size 32169626
tokenizer_config.json
ADDED
@@ -0,0 +1,96 @@
{
  "audio_token": "<|audio|>",
  "backend": "tokenizers",
  "boa_token": "<|audio>",
  "boi_token": "<|image>",
  "bos_token": "<bos>",
  "eoa_token": "<audio|>",
  "eoc_token": "<channel|>",
  "eoi_token": "<image|>",
  "eos_token": "<eos>",
  "eot_token": "<turn|>",
  "escape_token": "<|\"|>",
  "etc_token": "<tool_call|>",
  "etd_token": "<tool|>",
  "etr_token": "<tool_response|>",
  "extra_special_tokens": [
    "<|video|>"
  ],
  "image_token": "<|image|>",
  "is_local": false,
  "local_files_only": false,
  "mask_token": "<mask>",
  "model_max_length": 1000000000000000019884624838656,
  "model_specific_special_tokens": {
    "audio_token": "<|audio|>",
    "boa_token": "<|audio>",
    "boi_token": "<|image>",
    "eoa_token": "<audio|>",
    "eoc_token": "<channel|>",
    "eoi_token": "<image|>",
    "eot_token": "<turn|>",
    "escape_token": "<|\"|>",
    "etc_token": "<tool_call|>",
    "etd_token": "<tool|>",
    "etr_token": "<tool_response|>",
    "image_token": "<|image|>",
    "soc_token": "<|channel>",
    "sot_token": "<|turn>",
    "stc_token": "<|tool_call>",
    "std_token": "<|tool>",
    "str_token": "<|tool_response>",
    "think_token": "<|think|>"
  },
  "pad_token": "<pad>",
  "padding_side": "left",
  "processor_class": "Gemma4Processor",
  "response_schema": {
    "properties": {
      "content": {
        "type": "string"
      },
      "role": {
        "const": "assistant"
      },
      "thinking": {
        "type": "string"
      },
      "tool_calls": {
        "items": {
          "properties": {
            "function": {
              "properties": {
                "arguments": {
                  "additionalProperties": {},
                  "type": "object",
                  "x-parser": "gemma4-tool-call"
                },
                "name": {
                  "type": "string"
                }
              },
              "type": "object",
              "x-regex": "call\\:(?P<name>\\w+)(?P<arguments>\\{.*\\})"
            },
            "type": {
              "const": "function"
            }
          },
          "type": "object"
        },
        "type": "array",
        "x-regex-iterator": "<\\|tool_call>(.*?)<tool_call\\|>"
      }
    },
    "type": "object",
    "x-regex": "(\\<\\|channel\\>thought\\n(?P<thinking>.*?)\\<channel\\|\\>)?(?P<tool_calls>\\<\\|tool_call\\>.*\\<tool_call\\|\\>)?(?P<content>(?:(?!\\<turn\\|\\>)(?!\\<\\|tool_response\\>).)+)?(?:\\<turn\\|\\>|\\<\\|tool_response\\>)?"
  },
  "soc_token": "<|channel>",
  "sot_token": "<|turn>",
  "stc_token": "<|tool_call>",
  "std_token": "<|tool>",
  "str_token": "<|tool_response>",
  "think_token": "<|think|>",
  "tokenizer_class": "GemmaTokenizer",
  "unk_token": "<unk>"
}
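The `response_schema` block drives regex-based parsing of a model turn: the top-level `x-regex` splits thinking, tool calls, and content; `x-regex-iterator` walks the individual tool-call spans; and the inner `x-regex` splits each span into a function name and a JSON arguments object. A sketch of the same extraction in Python, with the patterns transcribed from the schema (the sample turn text is invented for illustration):

```python
import json
import re

# Patterns transcribed from response_schema in tokenizer_config.json.
TURN_RE = re.compile(
    r"(\<\|channel\>thought\n(?P<thinking>.*?)\<channel\|\>)?"
    r"(?P<tool_calls>\<\|tool_call\>.*\<tool_call\|\>)?"
    r"(?P<content>(?:(?!\<turn\|\>)(?!\<\|tool_response\>).)+)?"
    r"(?:\<turn\|\>|\<\|tool_response\>)?"
)
CALL_ITER_RE = re.compile(r"<\|tool_call>(.*?)<tool_call\|>")
CALL_RE = re.compile(r"call\:(?P<name>\w+)(?P<arguments>\{.*\})")

# An invented example turn in the format the schema expects.
turn = (
    '<|channel>thought\nLet me check the weather.<channel|>'
    '<|tool_call>call:get_weather{"city": "Paris"}<tool_call|><turn|>'
)

m = TURN_RE.match(turn)
thinking = m.group("thinking")
raw_calls = CALL_ITER_RE.findall(m.group("tool_calls") or "")
call = CALL_RE.match(raw_calls[0])
name = call.group("name")
args = json.loads(call.group("arguments"))
```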