Text Generation
MLX
Safetensors
minimax_m2
jang
jang-quantized
JANG_4S
mixed-precision
apple-silicon
conversational
custom_code
Instructions for using bearzi/MiniMax-M2.7-JANG_4S with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use bearzi/MiniMax-M2.7-JANG_4S with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("bearzi/MiniMax-M2.7-JANG_4S")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use bearzi/MiniMax-M2.7-JANG_4S with Pi:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "bearzi/MiniMax-M2.7-JANG_4S"
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "bearzi/MiniMax-M2.7-JANG_4S" }
      ]
    }
  }
}
```

Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use bearzi/MiniMax-M2.7-JANG_4S with Hermes Agent:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "bearzi/MiniMax-M2.7-JANG_4S"
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default bearzi/MiniMax-M2.7-JANG_4S
```
Run Hermes
```bash
hermes
```
- MLX LM
How to use bearzi/MiniMax-M2.7-JANG_4S with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "bearzi/MiniMax-M2.7-JANG_4S"
```
Run an OpenAI-compatible server
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "bearzi/MiniMax-M2.7-JANG_4S"

# Call the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bearzi/MiniMax-M2.7-JANG_4S",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
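The same server can also be called from Python with the official `openai` client. A minimal sketch, assuming the `openai` package is installed and the server is reachable at the port used in the curl example above (adjust the base URL if your mlx_lm.server reports a different port):

```python
# pip install openai
from openai import OpenAI

# Point the client at the local mlx_lm.server endpoint; the API key is not
# checked by the local server, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="bearzi/MiniMax-M2.7-JANG_4S",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```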
Upload MiniMax-M2.7-JANG_4S
This view is limited to 50 files because it contains too many changes. See raw diff.
- README.md +51 -0
- chat_template.jinja +159 -0
- config.json +104 -0
- configuration_minimax_m2.py +200 -0
- generation_config.json +9 -0
- jang_config.json +47 -0
- merges.txt +0 -0
- model-00001-of-00187.safetensors +3 -0
- model-00002-of-00187.safetensors +3 -0
- model-00003-of-00187.safetensors +3 -0
- model-00004-of-00187.safetensors +3 -0
- model-00005-of-00187.safetensors +3 -0
- model-00006-of-00187.safetensors +3 -0
- model-00007-of-00187.safetensors +3 -0
- model-00008-of-00187.safetensors +3 -0
- model-00009-of-00187.safetensors +3 -0
- model-00010-of-00187.safetensors +3 -0
- model-00011-of-00187.safetensors +3 -0
- model-00012-of-00187.safetensors +3 -0
- model-00013-of-00187.safetensors +3 -0
- model-00014-of-00187.safetensors +3 -0
- model-00015-of-00187.safetensors +3 -0
- model-00016-of-00187.safetensors +3 -0
- model-00017-of-00187.safetensors +3 -0
- model-00018-of-00187.safetensors +3 -0
- model-00019-of-00187.safetensors +3 -0
- model-00020-of-00187.safetensors +3 -0
- model-00021-of-00187.safetensors +3 -0
- model-00022-of-00187.safetensors +3 -0
- model-00023-of-00187.safetensors +3 -0
- model-00024-of-00187.safetensors +3 -0
- model-00025-of-00187.safetensors +3 -0
- model-00026-of-00187.safetensors +3 -0
- model-00027-of-00187.safetensors +3 -0
- model-00028-of-00187.safetensors +3 -0
- model-00029-of-00187.safetensors +3 -0
- model-00030-of-00187.safetensors +3 -0
- model-00031-of-00187.safetensors +3 -0
- model-00032-of-00187.safetensors +3 -0
- model-00033-of-00187.safetensors +3 -0
- model-00034-of-00187.safetensors +3 -0
- model-00035-of-00187.safetensors +3 -0
- model-00036-of-00187.safetensors +3 -0
- model-00037-of-00187.safetensors +3 -0
- model-00038-of-00187.safetensors +3 -0
- model-00039-of-00187.safetensors +3 -0
- model-00040-of-00187.safetensors +3 -0
- model-00041-of-00187.safetensors +3 -0
- model-00042-of-00187.safetensors +3 -0
- model-00043-of-00187.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,51 @@
+---
+base_model: MiniMaxAI/MiniMax-M2.7
+library_name: mlx
+pipeline_tag: text-generation
+license: apache-2.0
+tags:
+- mlx
+- jang
+- jang-quantized
+- JANG_4S
+- mixed-precision
+- apple-silicon
+---
+
+# MiniMax-M2.7-JANG_4S
+
+JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
+
+- **Quantization:** 4.03b avg, profile JANG_4S, method mse-all, calibration activations
+- **Profile:** JANG_4S
+- **Format:** JANG v2 MLX safetensors
+- **Compatible with:** vmlx, MLX Studio, oMLX (with JANG patch)
+
+## Usage
+
+### vmlx (recommended)
+
+```bash
+pip install 'vmlx[jang]'
+vmlx serve bearzi/MiniMax-M2.7-JANG_4S
+```
+
+### Python
+
+```python
+from jang_tools.loader import load_jang_model
+from mlx_lm import generate
+
+model, tokenizer = load_jang_model("bearzi/MiniMax-M2.7-JANG_4S")
+messages = [{"role": "user", "content": "Hello"}]
+prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
+```
+
+## About JANG
+
+JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types — attention layers get more bits, MLP/expert layers compress harder. This preserves model coherence at aggressive compression levels where uniform quantization breaks down.
+
+See [JANG documentation](https://github.com/jjang-ai/jangq) and scores at [jangq.ai](https://jangq.ai).
+
+Comparative benchmarks and feedback welcome — please open a discussion.
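A quick sanity check on the numbers in the README above: with the two bit widths recorded in jang_config.json (4 and 6) and ignoring scale/bias overhead, the 4.03-bit average implies only a small share of weights at the higher width. A back-of-the-envelope sketch:

```python
# Solve avg = 4*(1 - f) + 6*f for the 6-bit fraction f (metadata overhead ignored).
avg_bits = 4.03
f = (avg_bits - 4) / (6 - 4)
print(f"~{f:.1%} of weights at 6 bits")  # ~1.5%

# Rough weight footprint of a 227.6B-parameter model at that average:
params = 227.6e9
print(f"~{params * avg_bits / 8 / 1024**3:.0f} GiB of quantized weights")  # ~107 GiB
```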
chat_template.jinja
ADDED
@@ -0,0 +1,159 @@
+{# ----------‑‑‑ special token variables ‑‑‑---------- #}
+{%- set toolcall_begin_token = '<minimax:tool_call>' -%}
+{%- set toolcall_end_token = '</minimax:tool_call>' -%}
+{#- Tool Rendering Functions ============================================== -#}
+{%- macro render_tool_namespace(namespace_name, tool_list) -%}
+{%- for tool in tool_list -%}
+<tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
+{% endfor -%}
+{%- endmacro -%}
+{%- macro visible_text(content) -%}
+{%- if content is string -%}
+{{ content }}
+{%- elif content is iterable and content is not mapping -%}
+{%- for item in content -%}
+{%- if item is mapping and item.type == 'text' -%}
+{{- item.text }}
+{%- elif item is string -%}
+{{- item }}
+{%- endif -%}
+{%- endfor -%}
+{%- else -%}
+{{- content }}
+{%- endif -%}
+{%- endmacro -%}
+{#- System Message Construction ============================================ -#}
+{%- macro build_system_message(system_message) -%}
+{%- if system_message and system_message.content -%}
+{{- visible_text(system_message.content) }}
+{%- else -%}
+{%- if model_identity is not defined -%}
+{%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax." -%}
+{%- endif -%}
+{{- model_identity }}
+{%- endif -%}
+
+{#- Handle current_date -#}
+{%- if system_message and system_message.current_date -%}
+{{- '\n' ~ 'Current date: ' + system_message.current_date }}
+{%- endif -%}
+{#- Handle current_location -#}
+{%- if system_message and system_message.current_location -%}
+{{- '\n' ~ 'Current location: ' + system_message.current_location }}
+{%- endif -%}
+{%- endmacro -%}
+{#- Main Template Logic ================================================= -#}
+{#- Extract system message (only first message if it's system) -#}
+{%- set system_message = none -%}
+{%- set conversation_messages = messages -%}
+{%- if messages and messages[0].role == "system" -%}
+{%- set system_message = messages[0] -%}
+{%- set conversation_messages = messages[1:] -%}
+{%- endif -%}
+{#- Get the last user message turn, for interleved thinking -#}
+{%- set ns = namespace(last_user_index=-1) %}
+{% for m in conversation_messages %}
+{%- if m.role == 'user' %}
+{% set ns.last_user_index = loop.index0 -%}
+{%- endif %}
+{%- endfor %}
+{#- Render system message -#}
+{{- ']~!b[' ~ ']~b]system' ~ '\n' }}
+{{- build_system_message(system_message) }}
+{#- Render tools if available -#}
+{%- if tools -%}
+{{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
+{{- '\n' ~ '<tools>' ~ '\n' }}
+{{- render_tool_namespace("functions", tools) }}
+{{- '</tools>' ~ '\n\n' }}
+{{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
+{{- '\n' ~ toolcall_begin_token }}
+<invoke name="tool-name-1">
+<parameter name="param-key-1">param-value-1</parameter>
+<parameter name="param-key-2">param-value-2</parameter>
+...
+</invoke>
+{{- '\n' ~ toolcall_end_token }}
+{%- endif -%}
+{{- '[e~[\n' }}
+
+{#- Render messages -#}
+{%- set last_tool_call = namespace(name=none) -%}
+{%- for message in conversation_messages -%}
+{%- if message.role == 'assistant' -%}
+{#- Only render reasoning_content if no user message follows -#}
+{{- ']~b]ai' ~ '\n' }}
+
+{%- set reasoning_content = '' %}
+{%- set content = visible_text(message.content) %}
+{%- if message.reasoning_content is string %}
+{%- set reasoning_content = message.reasoning_content %}
+{%- else %}
+{%- if '</think>' in content %}
+{%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
+{%- set content = content.split('</think>')[-1].strip('\n') %}
+{%- endif %}
+{%- endif %}
+{%- if reasoning_content and loop.index0 > ns.last_user_index -%}
+{{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
+{%- endif -%}
+{%- if content -%}
+{{- content }}
+{%- endif -%}
+{%- if message.tool_calls -%}
+{{- '\n' ~ toolcall_begin_token ~ '\n' }}
+
+{%- for tool_call in message.tool_calls -%}
+{%- if tool_call.function %}
+{%- set tool_call = tool_call.function %}
+{%- endif %}
+{{- '<invoke name="' + tool_call.name + '">' }}
+{% set _args = tool_call.arguments %}
+{%- for k, v in _args.items() %}
+{{- '<parameter name="' + k + '">' }}
+{{- v | tojson(ensure_ascii=False) if v is not string else v }}
+{{- '</parameter>' }}
+{% endfor %}
+{{- '</invoke>' ~ '\n' }}
+{%- endfor -%}
+
+{{- toolcall_end_token}}
+{%- set last_tool_call.name = message.tool_calls[-1].name -%}
+{%- else -%}
+{%- set last_tool_call.name = none -%}
+{%- endif -%}
+{{- '[e~[' ~ '\n' }}
+
+{%- elif message.role == 'tool' -%}
+{%- if last_tool_call.name is none -%}
+{{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
+{%- endif -%}
+{%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
+{{- ']~b]tool' }}
+{%- endif -%}
+{%- if message.content is string -%}
+{{- '\n<response>' }}
+{{- message.content }}
+{{- '</response>' }}
+{%- else -%}
+{%- for tr in message.content -%}
+{{- '\n<response>' }}
+{{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
+{{- '\n</response>' }}
+{%- endfor -%}
+{%- endif -%}
+{%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
+{{- '[e~[\n' -}}
+{%- endif -%}
+
+{%- elif message.role == 'user' -%}
+{{- ']~b]user' ~ '\n' }}
+{{- visible_text(message.content) }}
+{{- '[e~[' ~ '\n' }}
+{%- endif -%}
+{%- endfor -%}
+
+{#- Generation prompt -#}
+{%- if add_generation_prompt -%}
+{{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
+{%- endif -%}
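To see the control tokens and the `<minimax:tool_call>`/`<invoke>` convention this template produces, render it through the tokenizer. A minimal sketch, assuming a transformers version whose `apply_chat_template` accepts a `tools=` argument:

```python
from transformers import AutoTokenizer

# trust_remote_code because the repo is tagged custom_code.
tokenizer = AutoTokenizer.from_pretrained(
    "bearzi/MiniMax-M2.7-JANG_4S", trust_remote_code=True
)

# An illustrative tool in the OpenAI-style schema that the template's
# render_tool_namespace macro serializes with tojson.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Weather in Tokyo?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
# The rendered prompt contains the <tools> block plus the XML calling
# convention shown in the template above.
print(prompt)
```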
config.json
ADDED
@@ -0,0 +1,104 @@
+{
+  "architectures": [
+    "MiniMaxM2ForCausalLM"
+  ],
+  "attn_type_list": [
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1,
+    1
+  ],
+  "auto_map": {
+    "AutoConfig": "configuration_minimax_m2.MiniMaxM2Config",
+    "AutoModelForCausalLM": "modeling_minimax_m2.MiniMaxM2ForCausalLM"
+  },
+  "dtype": "bfloat16",
+  "head_dim": 128,
+  "hidden_act": "silu",
+  "hidden_size": 3072,
+  "intermediate_size": 1536,
+  "max_position_embeddings": 196608,
+  "model_type": "minimax_m2",
+  "mtp_transformer_layers": 1,
+  "num_attention_heads": 48,
+  "num_experts_per_tok": 8,
+  "num_hidden_layers": 62,
+  "num_key_value_heads": 8,
+  "num_local_experts": 256,
+  "num_mtp_modules": 3,
+  "qk_norm_type": "per_layer",
+  "rms_norm_eps": 1e-06,
+  "rope_theta": 5000000,
+  "rotary_dim": 64,
+  "scoring_func": "sigmoid",
+  "shared_intermediate_size": 0,
+  "tie_word_embeddings": false,
+  "transformers_version": "4.46.1",
+  "use_cache": true,
+  "use_mtp": true,
+  "use_qk_norm": true,
+  "use_routing_bias": true,
+  "vocab_size": 200064,
+  "quantization": {
+    "group_size": 128,
+    "bits": 4
+  }
+}
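Because config.json carries an `auto_map` pointing at the bundled configuration_minimax_m2.py (the repo is tagged custom_code), loading the configuration through transformers requires trusting remote code. A short sketch:

```python
from transformers import AutoConfig

# auto_map routes AutoConfig to the bundled configuration_minimax_m2.py,
# so trust_remote_code=True is required.
config = AutoConfig.from_pretrained(
    "bearzi/MiniMax-M2.7-JANG_4S", trust_remote_code=True
)
print(config.model_type)         # "minimax_m2"
print(config.num_hidden_layers)  # 62
print(config.num_local_experts)  # 256 experts, 8 routed per token
```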
configuration_minimax_m2.py
ADDED
@@ -0,0 +1,200 @@
+# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+# This file was automatically generated from src/transformers/models/minimax_m2/modular_minimax_m2.py.
+# Do NOT edit this file manually as any edits will be overwritten by the generation of
+# the file from the modular. If any change should be done, please apply the change to the
+# modular_minimax_m2.py file directly. One of our CI enforces this.
+# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+# coding=utf-8
+# Copyright 2025 the HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from transformers.configuration_utils import PretrainedConfig
+
+
+class MiniMaxM2Config(PretrainedConfig):
+    r"""
+    This is the configuration class to store the configuration of a [`MiniMaxM2Model`]. It is used to instantiate an
+    MiniMaxM2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
+    with the defaults will yield a similar configuration to that of the MiniMaxM2-7B-v0.1 or MiniMaxM2-7B-Instruct-v0.1.
+
+    [minimax_m2ai/MiniMaxM2-8x7B](https://huggingface.co/minimax_m2ai/MiniMaxM2-8x7B)
+    [minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1](https://huggingface.co/minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1)
+
+    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+    documentation from [`PretrainedConfig`] for more information.
+
+
+    Args:
+        vocab_size (`int`, *optional*, defaults to 32000):
+            Vocabulary size of the MiniMaxM2 model. Defines the number of different tokens that can be represented by the
+            `inputs_ids` passed when calling [`MiniMaxM2Model`]
+        hidden_size (`int`, *optional*, defaults to 4096):
+            Dimension of the hidden representations.
+        intermediate_size (`int`, *optional*, defaults to 14336):
+            Dimension of the MLP representations.
+        num_hidden_layers (`int`, *optional*, defaults to 32):
+            Number of hidden layers in the Transformer encoder.
+        num_attention_heads (`int`, *optional*, defaults to 32):
+            Number of attention heads for each attention layer in the Transformer encoder.
+        num_key_value_heads (`int`, *optional*, defaults to 8):
+            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+            `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
+            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+            by meanpooling all the original heads within that group. For more details, check out [this
+            paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `8`.
+        head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
+            The attention head dimension.
+        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+            The non-linear activation function (function or string) in the decoder.
+        max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
+            The maximum sequence length that this model might ever be used with. MiniMaxM2's sliding window attention
+            allows sequence of up to 4096*32 tokens.
+        initializer_range (`float`, *optional*, defaults to 0.02):
+            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+        rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+            The epsilon used by the rms normalization layers.
+        use_cache (`bool`, *optional*, defaults to `True`):
+            Whether or not the model should return the last key/values attentions (not used by all models). Only
+            relevant if `config.is_decoder=True`.
+        pad_token_id (`int`, *optional*):
+            The id of the padding token.
+        bos_token_id (`int`, *optional*, defaults to 1):
+            The id of the "beginning-of-sequence" token.
+        eos_token_id (`int`, *optional*, defaults to 2):
+            The id of the "end-of-sequence" token.
+        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+            Whether the model's input and output word embeddings should be tied.
+        rope_theta (`float`, *optional*, defaults to 1000000.0):
+            The base period of the RoPE embeddings.
+        sliding_window (`int`, *optional*):
+            Sliding window attention window size. If not specified, will default to `4096`.
+        attention_dropout (`float`, *optional*, defaults to 0.0):
+            The dropout ratio for the attention probabilities.
+        num_experts_per_tok (`int`, *optional*, defaults to 2):
+            The number of experts to route per-token, can be also interpreted as the `top-k` routing
+            parameter
+        num_local_experts (`int`, *optional*, defaults to 8):
+            Number of experts per Sparse MLP layer.
+        output_router_logits (`bool`, *optional*, defaults to `False`):
+            Whether or not the router logits should be returned by the model. Enabling this will also
+            allow the model to output the auxiliary loss. See [here]() for more details
+        router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
+            The aux loss factor for the total loss.
+        router_jitter_noise (`float`, *optional*, defaults to 0.0):
+            Amount of noise to add to the router.
+
+    ```python
+    >>> from transformers import MiniMaxM2Model, MiniMaxM2Config
+
+    >>> # Initializing a MiniMaxM2 7B style configuration
+    >>> configuration = MiniMaxM2Config()
+
+    >>> # Initializing a model from the MiniMaxM2 7B style configuration
+    >>> model = MiniMaxM2Model(configuration)
+
+    >>> # Accessing the model configuration
+    >>> configuration = model.config
+    ```"""
+
+    model_type = "minimax_m2"
+    keys_to_ignore_at_inference = ["past_key_values"]
+    base_model_tp_plan = {
+        "layers.*.self_attn.q_proj": "colwise",
+        "layers.*.self_attn.k_proj": "colwise",
+        "layers.*.self_attn.v_proj": "colwise",
+        "layers.*.self_attn.o_proj": "rowwise",
+        "layers.*.block_sparse_moe.gate": "colwise_rep",  # we need to replicate here to correctly route experts
+        "layers.*.block_sparse_moe.experts.*.w1": "colwise",
+        "layers.*.block_sparse_moe.experts.*.w2": "rowwise",
+        "layers.*.block_sparse_moe.experts.*.w3": "colwise",
+    }
+    base_model_pp_plan = {
+        "embed_tokens": (["input_ids"], ["inputs_embeds"]),
+        "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
+        "norm": (["hidden_states"], ["hidden_states"]),
+    }
+
+    def __init__(
+        self,
+        vocab_size=32000,
+        hidden_size=4096,
+        intermediate_size=14336,
+        num_hidden_layers=32,
+        num_attention_heads=32,
+        num_key_value_heads=8,
+        head_dim=None,
+        hidden_act="silu",
+        max_position_embeddings=4096 * 32,
+        initializer_range=0.02,
+        rms_norm_eps=1e-5,
+        use_cache=True,
+        pad_token_id=None,
+        bos_token_id=1,
+        eos_token_id=2,
+        tie_word_embeddings=False,
+        rope_theta=1e6,
+        sliding_window=None,
+        attention_dropout=0.0,
+        num_experts_per_tok=2,
+        num_local_experts=8,
+        output_router_logits=False,
+        router_aux_loss_coef=0.001,
+        router_jitter_noise=0.0,
+        **kwargs,
+    ):
+        self.vocab_size = vocab_size
+        self.max_position_embeddings = max_position_embeddings
+        self.hidden_size = hidden_size
+        self.intermediate_size = intermediate_size
+        self.num_hidden_layers = num_hidden_layers
+        self.num_attention_heads = num_attention_heads
+        self.sliding_window = sliding_window
+
+        # for backward compatibility
+        if num_key_value_heads is None:
+            num_key_value_heads = num_attention_heads
+
+        self.num_key_value_heads = num_key_value_heads
+        self.hidden_act = hidden_act
+        self.initializer_range = initializer_range
+        self.rms_norm_eps = rms_norm_eps
+        self.use_cache = use_cache
+        self.rope_theta = rope_theta
+        self.attention_dropout = attention_dropout
+        self.head_dim = head_dim
+
+        self.num_experts_per_tok = num_experts_per_tok
+        self.num_local_experts = num_local_experts
+        self.output_router_logits = output_router_logits
+        self.router_aux_loss_coef = router_aux_loss_coef
+        self.router_jitter_noise = router_jitter_noise
+
+        self.use_qk_norm = kwargs.pop("use_qk_norm", False)
+        self.rotary_dim = kwargs.pop("rotary_dim", self.head_dim)
+        self.partial_rotary_factor = kwargs.pop("partial_rotary_factor", 1)
+        if self.head_dim is not None:
+            self.partial_rotary_factor = self.rotary_dim / self.head_dim
+
+        super().__init__(
+            pad_token_id=pad_token_id,
+            bos_token_id=bos_token_id,
+            eos_token_id=eos_token_id,
+            tie_word_embeddings=tie_word_embeddings,
+            **kwargs,
+        )
+
+
+__all__ = ["MiniMaxM2Config"]
generation_config.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "bos_token_id": 200019,
+  "do_sample": true,
+  "eos_token_id": 200020,
+  "temperature": 1.0,
+  "top_p": 0.95,
+  "top_k": 40,
+  "transformers_version": "4.46.1"
+}
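These sampling defaults can be reproduced when running the model locally. A minimal sketch, assuming a recent mlx-lm where `make_sampler` accepts `temp`, `top_p`, and `top_k` and `generate` takes a `sampler` argument:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("bearzi/MiniMax-M2.7-JANG_4S")

# Mirror generation_config.json: temperature 1.0, top_p 0.95, top_k 40.
sampler = make_sampler(temp=1.0, top_p=0.95, top_k=40)

messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(generate(model, tokenizer, prompt=prompt, sampler=sampler, max_tokens=256))
```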
jang_config.json
ADDED
@@ -0,0 +1,47 @@
+{
+  "quantization": {
+    "method": "jang-importance",
+    "profile": "JANG_4S",
+    "target_bits": 4,
+    "actual_bits": 4.03,
+    "block_size": 128,
+    "calibration_method": "activations",
+    "quantization_method": "mse-all",
+    "scoring_method": "awq+hessian",
+    "bit_widths_used": [
+      4,
+      6
+    ],
+    "quantization_scheme": "asymmetric",
+    "quantization_backend": "mx.quantize",
+    "hadamard_rotation": false
+  },
+  "source_model": {
+    "name": "MiniMax-M2.7",
+    "dtype": "bfloat16",
+    "parameters": "227.6B"
+  },
+  "architecture": {
+    "type": "moe",
+    "attention": "gqa",
+    "has_vision": false,
+    "has_ssm": false,
+    "has_moe": true
+  },
+  "runtime": {
+    "total_weight_bytes": 460947456,
+    "total_weight_gb": 0.43
+  },
+  "capabilities": {
+    "reasoning_parser": "qwen3",
+    "tool_parser": "minimax",
+    "think_in_template": true,
+    "supports_tools": true,
+    "supports_thinking": true,
+    "family": "minimax_m2",
+    "modality": "text",
+    "cache_type": "kv"
+  },
+  "format": "jang",
+  "format_version": "2.0"
+}
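jang_config.json records a two-tier scheme: 4- and 6-bit groups packed through `mx.quantize` with asymmetric scales. A minimal sketch of what per-layer bit assignment could look like; the layer-name heuristic below is an illustrative assumption, not the actual jang-tools scoring (which the config says uses awq+hessian importance):

```python
import mlx.core as mx

# Illustrative only: give attention projections 6 bits and everything else
# (e.g. MoE expert weights) 4 bits, matching bit_widths_used [4, 6].
def pick_bits(layer_name: str) -> int:
    return 6 if "self_attn" in layer_name else 4

def quantize_layer(name: str, w: mx.array, group_size: int = 128):
    bits = pick_bits(name)
    # mx.quantize returns the packed weights plus per-group scales/biases.
    w_q, scales, biases = mx.quantize(w, group_size=group_size, bits=bits)
    return {"weight": w_q, "scales": scales, "biases": biases, "bits": bits}

q = quantize_layer("layers.0.self_attn.q_proj", mx.random.normal((3072, 3072)))
print(q["bits"])  # 6
```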
merges.txt
ADDED
The diff for this file is too large to render. See raw diff.
model-00001-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d15f69aaf7d01bba61575cbc30902c1a7a231d9faefd362350c186d83e2cd727
+size 968233712
model-00002-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f42a596600f804b1eba53e89556306cc7dde75ce1cb0a81ebbb7407f8078e64
+size 641728952
model-00003-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ac4d5fba5c24d88111490770a8cc1fe4f4241fde78308a2eeb42bee5e434779
+size 676136712
model-00004-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43a9747115defe98b688ea6d4b2ee20fd160f510a409e929d1bac44a0037adde
+size 641728960
model-00005-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4970c8b42d488aead3cf3990175fde5d5acb1f2ccf5daa7fadced552d5ee115b
+size 641728952
model-00006-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f9fb6cae836e86e76e6c312b9f7918c163bac51d254c1781366fedb80cacbd8
+size 676136712
model-00007-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55f952df9f47ecec8b1f38d16919d443ee5dd3eb53b1d1fe34e85210cbc9b7d3
+size 641728960
model-00008-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2018af0c9a7eb97b80a5fa79cfda3bdb0bf1f97588016190c3769060abfd5bc0
+size 641728952
model-00009-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a6922fa2a1f4f93c6c69b50e9890f4d6650d5388c313897d462b0c418574e1b
+size 676136712
model-00010-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34c744b415ae27b24ddf33a911064d3e75cd84333c6e61e33e2e852c5bf35108
+size 641728960
model-00011-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c83973d8791045a9c995793804295b5152d39de6f2f9d0077406ef8268dfc51f
+size 641728952
model-00012-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04060f019943ed1c4ca80df7b0e9d61bf94aca5bc513f53751544cd34e6fe002
+size 676136712
model-00013-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaa120275b4e0be6f828f602ec33ab2ed95da72345255c56e6f1d73e53f57dc6
+size 641728960
model-00014-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee273a04d5b29d4abee7ff0dc272cdfcb6fe821d05102a46631f6b41d23fb63c
+size 641728952
model-00015-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35d0844f609601080e39c2962aeb7126853e58be3ce16bd6d77b9516bbc61048
+size 676136712
model-00016-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d16defe4d7458be361cc3f4ccaba5b9f62453bbfe5a8570ffa4ea9fb886575ac
+size 641728960
model-00017-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e78d2b1685079b0161b567b566e9de0d97ab9468528cf7250e5c4508fb5ebb32
+size 641728952
model-00018-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59f966066fb83415db2326ca9c94d418e9a6a6675f196d8cf56335b9b08fc0d7
+size 676136712
model-00019-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e04a4bafcfb4a8f9259ce159bb637670d19d8f3d0c192c705f52d9a4e08613c
+size 641728960
model-00020-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4845472ccab9e5fc0da4a8cca11c2795d5046ee983f48d10959ffb95ddbbe0f8
+size 641728952
model-00021-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84f7c13c283e67c786fe08f0489b01650cbc4a2a9b43922525298022c6cb851f
+size 676136712
model-00022-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10041ebf7fdb50a0391ad733c4b9b7b158e51eb165cca3c710bd4ec946c6cb26
+size 641728960
model-00023-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57b1d624d937c48e6b36be34db27960bea9b6e62c3a8f37280e14611fa45c1e1
+size 641728952
model-00024-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40b822c2339b3f9a9a8db1566273318e73cfcdf345eb6b175f3d46ef53417fb6
+size 676136712
model-00025-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53862e9cca41f6f919ca4376f71a34a407a1b5e897f5f02a6e0438424c1a9912
+size 641728960
model-00026-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e76c42792d73f98f9766e729b3350dec0b60c3c683d57d7f31a7092729bb7106
+size 641728952
model-00027-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6992f5594a9010da86fe3fb6eee4162ab5bfd14ce7ab1dce18488f07aeb7c75d
+size 676136712
model-00028-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5045463da21d103b954acc54ce94855bcf32144509f17188420d223ffc1a1a18
+size 641728960
model-00029-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c50af8ac9cae6e1bf315d9523fd4b0f1b1dadc4a7d0611f7b5c056ce98ead0db
+size 641728952
model-00030-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:245dcbde01b1b09af9a94afa03a73a7d2089a38daa49ddea6c4395bae4f99769
+size 676136712
model-00031-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37a98f4f1acad8ff0d80c31a15b4ac103a59e96999a065da0814a6fc6db890d3
+size 641728960
model-00032-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e11673f8db3c9d0865b2a9f5de387c63951d1c6104bc0aebcf85d2e66b9e6f2c
+size 641728960
model-00033-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2985dd63e3d2f5aacf9aa5f340b43d25c78f4091ee511401c407ab4960ab9f96
+size 676136728
model-00034-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:817178c6a2693f0b9bf3c1d59f869ed3691dd8344fddc2c57d8bc4929eae6276
+size 641728960
model-00035-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a19ea93673cf8625c2a8b15c7d1f1405e1a4f228f1e25822f25db594bb154f8b
+size 641728960
model-00036-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cc6bea0908648900b44b58f9b7f4291cdb7164264f9665d10781cddefd50953
+size 676136728
model-00037-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51863118cb3d3bd07cd794fcf65931d22902b58261d23dfae4034bbd0852381e
+size 641728960
model-00038-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0236c8e903ea428d8a8996e8261f91540eb1afacb11874de5e5140aae3dc0bd3
+size 641728960
model-00039-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42d06cf1d13d60dd09950631186bca039384b31bc65c48d08b863a3f7c7be25b
+size 676136728
model-00040-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9123f4cefbef7e8f2ed1bd7fbc2ece94a5f5858b2c08459ea89a9a850e6c6849
+size 641728960
model-00041-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e3083c35a9dc5f670ce447d9c1062d954365a5f70c005922610f24b634a19fb
+size 641728960
model-00042-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a92d5d7ec5ebf856cc0f1b412d61f88c32a71ad8e2fd9cdef4be5e78c1cb2547
+size 676136728
model-00043-of-00187.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c83ceec4a0dd5ae0a891cc292e6f67722ad59863e5daf04b37aa5b82854967e5
+size 641728960
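Each shard above is a Git LFS pointer: three lines giving the spec version, a sha256 object id, and the byte size. A small sketch, assuming the pointer files are checked out un-smudged (e.g. via `GIT_LFS_SKIP_SMUDGE=1 git clone ...`), that totals the declared sizes of whatever shards are present:

```python
import re
from pathlib import Path

total = 0
for ptr in sorted(Path(".").glob("model-*-of-00187.safetensors")):
    text = ptr.read_text()
    # A Git LFS pointer holds: version, oid sha256:<hex>, size <bytes>.
    match = re.search(r"^size (\d+)$", text, flags=re.M)
    if match:
        total += int(match.group(1))

print(f"{total / 1024**3:.1f} GiB across the shards found")
```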