Text Generation
Tags: MLX · Safetensors · minimax_m2 · jang · jang-quantized · JANG_2L · mixed-precision · apple-silicon · conversational · custom_code · fp8
Instructions to use bearzi/MiniMax-M2.7-JANG_2L with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use bearzi/MiniMax-M2.7-JANG_2L with MLX:
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("bearzi/MiniMax-M2.7-JANG_2L")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
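For multi-turn use, you can keep appending turns to the message list and re-apply the chat template before each call. A minimal sketch building on the snippet above (the example dialogue and the max_tokens value are placeholders; adjust to taste):

from mlx_lm import load, generate

model, tokenizer = load("bearzi/MiniMax-M2.7-JANG_2L")
messages = []

for user_turn in ["Hello!", "Now summarize your last answer in one sentence."]:
    messages.append({"role": "user", "content": user_turn})
    # Re-apply the chat template over the whole history each turn
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    reply = generate(model, tokenizer, prompt=prompt, max_tokens=512)
    messages.append({"role": "assistant", "content": reply})
    print(reply)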
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use bearzi/MiniMax-M2.7-JANG_2L with Pi:
Start the MLX server
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "bearzi/MiniMax-M2.7-JANG_2L"
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "bearzi/MiniMax-M2.7-JANG_2L" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
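If Pi cannot reach the model, a quick way to check the MLX server directly is a small Python request. This is a sketch assuming the default mlx_lm.server port 8080 (the same port as the baseUrl above) and the OpenAI-compatible /v1/chat/completions route:

import json
import urllib.request

# Send a tiny chat completion request to the local mlx_lm.server instance
payload = {
    "model": "bearzi/MiniMax-M2.7-JANG_2L",
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 8,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])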
- Hermes Agent
How to use bearzi/MiniMax-M2.7-JANG_2L with Hermes Agent:
Start the MLX server
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "bearzi/MiniMax-M2.7-JANG_2L"
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default bearzi/MiniMax-M2.7-JANG_2L
Run Hermes
hermes
- MLX LM
How to use bearzi/MiniMax-M2.7-JANG_2L with MLX LM:
Generate or start a chat session
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "bearzi/MiniMax-M2.7-JANG_2L"
Run an OpenAI-compatible server
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "bearzi/MiniMax-M2.7-JANG_2L"

# Calling the OpenAI-compatible server with curl (mlx_lm.server listens on port 8080 by default)
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bearzi/MiniMax-M2.7-JANG_2L",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
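The same server can also be called from Python with any OpenAI-compatible client. A minimal sketch using the openai package (the base_url assumes the default mlx_lm.server port shown above; the api_key value is arbitrary since the local server does not check it):

from openai import OpenAI

# Point the client at the local mlx_lm.server instance
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="bearzi/MiniMax-M2.7-JANG_2L",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)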
Chat template

{# ---------- special token variables ---------- #}
{%- set toolcall_begin_token = '<minimax:tool_call>' -%}
{%- set toolcall_end_token = '</minimax:tool_call>' -%}
{#- Tool Rendering Functions ============================================== -#}
{%- macro render_tool_namespace(namespace_name, tool_list) -%}
{%- for tool in tool_list -%}
<tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
{% endfor -%}
{%- endmacro -%}
{%- macro visible_text(content) -%}
{%- if content is string -%}
{{ content }}
{%- elif content is iterable and content is not mapping -%}
{%- for item in content -%}
{%- if item is mapping and item.type == 'text' -%}
{{- item.text }}
{%- elif item is string -%}
{{- item }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{- content }}
{%- endif -%}
{%- endmacro -%}
{#- System Message Construction ============================================ -#}
{%- macro build_system_message(system_message) -%}
{%- if system_message and system_message.content -%}
{{- visible_text(system_message.content) }}
{%- else -%}
{%- if model_identity is not defined -%}
{%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax." -%}
{%- endif -%}
{{- model_identity }}
{%- endif -%}
{#- Handle current_date -#}
{%- if system_message and system_message.current_date -%}
{{- '\n' ~ 'Current date: ' + system_message.current_date }}
{%- endif -%}
{#- Handle current_location -#}
{%- if system_message and system_message.current_location -%}
{{- '\n' ~ 'Current location: ' + system_message.current_location }}
{%- endif -%}
{%- endmacro -%}
{#- Main Template Logic ================================================= -#}
{#- Extract system message (only first message if it's system) -#}
{%- set system_message = none -%}
{%- set conversation_messages = messages -%}
{%- if messages and messages[0].role == "system" -%}
{%- set system_message = messages[0] -%}
{%- set conversation_messages = messages[1:] -%}
{%- endif -%}
{#- Get the last user message turn, for interleaved thinking -#}
{%- set ns = namespace(last_user_index=-1) %}
{% for m in conversation_messages %}
{%- if m.role == 'user' %}
{% set ns.last_user_index = loop.index0 -%}
{%- endif %}
{%- endfor %}
{#- Render system message -#}
{{- ']~!b[' ~ ']~b]system' ~ '\n' }}
{{- build_system_message(system_message) }}
{#- Render tools if available -#}
{%- if tools -%}
{{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
{{- '\n' ~ '<tools>' ~ '\n' }}
{{- render_tool_namespace("functions", tools) }}
{{- '</tools>' ~ '\n\n' }}
{{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
{{- '\n' ~ toolcall_begin_token }}
<invoke name="tool-name-1">
<parameter name="param-key-1">param-value-1</parameter>
<parameter name="param-key-2">param-value-2</parameter>
...
</invoke>
{{- '\n' ~ toolcall_end_token }}
{%- endif -%}
{{- '[e~[\n' }}
{#- Render messages -#}
{%- set last_tool_call = namespace(name=none) -%}
{%- for message in conversation_messages -%}
{%- if message.role == 'assistant' -%}
{#- Only render reasoning_content if no user message follows -#}
{{- ']~b]ai' ~ '\n' }}
{%- set reasoning_content = '' %}
{%- set content = visible_text(message.content) %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
{%- set content = content.split('</think>')[-1].strip('\n') %}
{%- endif %}
{%- endif %}
{%- if reasoning_content and loop.index0 > ns.last_user_index -%}
{{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
{%- endif -%}
{%- if content -%}
{{- content }}
{%- endif -%}
{%- if message.tool_calls -%}
{{- '\n' ~ toolcall_begin_token ~ '\n' }}
{%- for tool_call in message.tool_calls -%}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<invoke name="' + tool_call.name + '">' }}
{% set _args = tool_call.arguments %}
{%- for k, v in _args.items() %}
{{- '<parameter name="' + k + '">' }}
{{- v | tojson(ensure_ascii=False) if v is not string else v }}
{{- '</parameter>' }}
{% endfor %}
{{- '</invoke>' ~ '\n' }}
{%- endfor -%}
{{- toolcall_end_token}}
{%- set last_tool_call.name = message.tool_calls[-1].name -%}
{%- else -%}
{%- set last_tool_call.name = none -%}
{%- endif -%}
{{- '[e~[' ~ '\n' }}
{%- elif message.role == 'tool' -%}
{%- if last_tool_call.name is none -%}
{{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
{%- endif -%}
{%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
{{- ']~b]tool' }}
{%- endif -%}
{%- if message.content is string -%}
{{- '\n<response>' }}
{{- message.content }}
{{- '</response>' }}
{%- else -%}
{%- for tr in message.content -%}
{{- '\n<response>' }}
{{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
{{- '\n</response>' }}
{%- endfor -%}
{%- endif -%}
{%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
{{- '[e~[\n' -}}
{%- endif -%}
{%- elif message.role == 'user' -%}
{{- ']~b]user' ~ '\n' }}
{{- visible_text(message.content) }}
{{- '[e~[' ~ '\n' }}
{%- endif -%}
{%- endfor -%}
{#- Generation prompt -#}
{%- if add_generation_prompt -%}
{{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
{%- endif -%}
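The template above injects tool definitions into a <tools> block in the system turn and renders tool calls in the <minimax:tool_call> XML format. A minimal sketch of rendering it with the Hugging Face tokenizer: the get_weather tool is a made-up example, trust_remote_code may be needed given the custom_code tag, and passing tools to apply_chat_template requires a reasonably recent transformers release.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "bearzi/MiniMax-M2.7-JANG_2L", trust_remote_code=True
)

# Hypothetical tool definition in JSON Schema form, as expected by the template's <tools> block
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Render the prompt as text: the system block includes the <tools> section, and
# add_generation_prompt appends the ']~b]ai' + '<think>' opener from the template.
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
print(prompt)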