Model Details

This model is an NVFP4A16-quantized version of google/gemma-4-31B-it, generated with llm-compressor. Please follow the license of the original model. The model runs in Instruct mode by default.

To use thinking mode, follow these steps in order:

  1. Start the server with --reasoning-parser gemma4.
  2. Enable thinking by adding --default-chat-template-kwargs '{"enable_thinking": true}', or by setting {%- set enable_thinking = true %} in chat_template.jinja.

See the example under Thinking Mode below.
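
Alternatively, thinking can be toggled per request through the OpenAI-compatible API instead of server-wide. The snippet below is a minimal sketch that assumes a vLLM version which forwards chat_template_kwargs from the request body to the chat template; it uses the openai Python client against the Quickstart server on localhost:8000, and the api_key value is a placeholder.

# Minimal sketch: enable thinking for a single request instead of server-wide.
# Assumes the vLLM server from the Quickstart is running and that this vLLM
# version accepts chat_template_kwargs in the request body.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key unless --api-key is set

response = client.chat.completions.create(
    model="YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(response.choices[0].message.content)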

Quantization Strategy

| Layer Type | Bits | Notes |
| --- | --- | --- |
| lm_head | 16-bit | Kept in original precision to preserve final token prediction quality and avoid extra degradation at the output projection |
| vision_tower.* | 16-bit | Kept in original precision to better preserve visual feature extraction quality and reduce multimodal degradation |
| embed_vision.* | 16-bit | Kept in original precision to maintain vision embedding fidelity and reduce quantization error before visual feature processing |
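
These exclusions can be checked directly from the repository's config.json. The snippet below is a small sketch that assumes the compressed-tensors style quantization_config written by llm-compressor; exact field names may differ between versions.

# Sketch: inspect which modules were excluded from quantization.
# Assumes a compressed-tensors style "quantization_config" with "ignore" and
# "config_groups" fields, as written by llm-compressor; names may vary.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ", "config.json")
with open(path) as f:
    qcfg = json.load(f).get("quantization_config", {})

print("ignored modules:", qcfg.get("ignore", []))
print("quantized groups:", list(qcfg.get("config_groups", {}).keys()))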

Quickstart

vLLM Usage

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.
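
If vLLM is not already installed, a recent release can be installed with pip install vllm; the gemma4 tool-call and reasoning parsers used below are assumed to be available in the installed version.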

Talk to the model directly

import argparse
import atexit
import json
import os
import shutil
import subprocess
import sys
import time
import urllib.error
import urllib.request


# ---------------------------
# User-facing configuration
# ---------------------------
DEFAULTS = {
    "model": "YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ",
    "served_model_name": "YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ",
    "host": "localhost",
    "port": 8000,
    "max_model_len": 66464,
    "enable_auto_tool_choice": True,
    "async_scheduling": True,
    "tool_call_parser": "gemma4",
    "max_num_seqs": 1,
    "reasoning_parser": "gemma4",
    "default_chat_template_kwargs": '{"enable_thinking": true}',
    "allowed_local_media_path": "/home/ycwtg/鍥剧墖/鎴浘",
}

RUNTIME = {
    "gpu_memory_utilization": 0.97,
    "startup_timeout_sec": 180,
    "healthcheck_timeout_sec": 3,
    "healthcheck_interval_sec": 1,
    "chat_timeout_sec": 600,
}

SERVE_VALUE_OPTIONS = (
    ("--served-model-name", "served_model_name"),
    ("--host", "host"),
    ("--port", "port"),
    ("--max-model-len", "max_model_len"),
    ("--tool-call-parser", "tool_call_parser"),
    ("--max_num_seqs", "max_num_seqs"),
    ("--reasoning-parser", "reasoning_parser"),
    ("--default-chat-template-kwargs", "default_chat_template_kwargs"),
)

CLIENT_VALUE_OPTIONS = (
    ("--model", "model"),
    *SERVE_VALUE_OPTIONS,
)

SERVE_BOOL_OPTIONS = (
    ("--enable-auto-tool-choice", "enable_auto_tool_choice"),
    ("--async-scheduling", "async_scheduling"),
)

CLIENT_BOOL_OPTIONS = (
    ("--enable-auto-tool-choice", "--no-enable-auto-tool-choice", "enable_auto_tool_choice"),
    ("--async-scheduling", "--no-async-scheduling", "async_scheduling"),
)


def append_value_options(cmd, args, options):
    for flag, attr in options:
        cmd.extend([flag, str(getattr(args, attr))])


def append_true_bool_options(cmd, args, options):
    for flag, attr in options:
        if getattr(args, attr):
            cmd.append(flag)


def append_boolean_optional_options(cmd, args, options):
    for positive_flag, negative_flag, attr in options:
        cmd.append(positive_flag if getattr(args, attr) else negative_flag)


def append_optional_value_option(cmd, args, flag, attr):
    value = getattr(args, attr)
    if value is None:
        return
    if isinstance(value, str) and not value.strip():
        return
    cmd.extend([flag, str(value)])


def multiline_input():
    print('User (type "END" on a single line to send, "exit" to quit):')
    lines = []
    while True:
        line = input()
        text = line.strip()
        if text.lower() in {"exit", "quit"}:
            return None
        if text == "END":
            break
        lines.append(line)
    return "\n".join(lines)


def resolve_client_host(host):
    return "127.0.0.1" if host in {"0.0.0.0", "::"} else host


def launch_vllm(args):
    cmd = ["vllm", "serve", args.model]
    append_value_options(cmd, args, SERVE_VALUE_OPTIONS)
    append_optional_value_option(cmd, args, "--allowed-local-media-path", "allowed_local_media_path")
    cmd.extend(
        [
            "--gpu-memory-utilization",
            str(RUNTIME["gpu_memory_utilization"]),
        ]
    )
    append_true_bool_options(cmd, args, SERVE_BOOL_OPTIONS)

    print("Launching vLLM:")
    print(" ".join(cmd))
    try:
        return subprocess.Popen(cmd)
    except FileNotFoundError as e:
        raise RuntimeError("vllm command not found. Activate an environment that has vllm installed.") from e


def stop_vllm(proc):
    if proc and proc.poll() is None:
        proc.terminate()
        try:
            proc.wait(timeout=10)
        except subprocess.TimeoutExpired:
            proc.kill()


def wait_vllm_ready(base_url, timeout_sec=RUNTIME["startup_timeout_sec"]):
    deadline = time.time() + timeout_sec
    url = f"{base_url}/v1/models"
    req = urllib.request.Request(url=url)
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(req, timeout=RUNTIME["healthcheck_timeout_sec"]) as resp:
                if resp.status == 200:
                    return True
        except urllib.error.URLError:
            pass
        time.sleep(RUNTIME["healthcheck_interval_sec"])
    return False


def chat_once(base_url, model_name, messages):
    payload = {"model": model_name, "messages": messages, "skip_special_tokens": False}
    req = urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload, ensure_ascii=False).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=RUNTIME["chat_timeout_sec"]) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["choices"][0]["message"]


def chat_loop(base_url, model_name):
    print("\n===== Chat Started =====\n")
    messages = []

    while True:
        user_text = multiline_input()
        if user_text is None:
            break

        messages.append({"role": "user", "content": user_text})
        try:
            assistant_msg = chat_once(base_url, model_name, messages)
        except Exception as e:
            print(f"\nRequest failed: {e}\n")
            messages.pop()
            continue

        content = assistant_msg.get("content")
        tool_calls = assistant_msg.get("tool_calls")

        if content:
            print(f"\nAssistant:\n{content}\n")
        elif tool_calls:
            print("\nAssistant(tool_calls):")
            print(json.dumps(tool_calls, ensure_ascii=False, indent=2))
            print()
        else:
            print("\nAssistant:\n(empty response)\n")

        normalized_msg = {"role": "assistant", "content": content or ""}
        if tool_calls:
            normalized_msg["tool_calls"] = tool_calls
        messages.append(normalized_msg)


def build_client_command(args):
    cmd = [sys.executable, os.path.abspath(__file__), "--_client"]
    append_value_options(cmd, args, CLIENT_VALUE_OPTIONS)
    append_boolean_optional_options(cmd, args, CLIENT_BOOL_OPTIONS)
    return cmd


def spawn_chat_terminal(args):
    client_cmd = build_client_command(args)

    terminal_cmd = None
    if os.name == "nt":
        # Open a new cmd window on Windows and keep it alive for interactive chat.
        terminal_cmd = [
            "cmd",
            "/c",
            "start",
            "",
            "cmd",
            "/k",
            subprocess.list2cmdline(client_cmd),
        ]
    elif shutil.which("gnome-terminal"):
        terminal_cmd = ["gnome-terminal", "--", *client_cmd]
    elif shutil.which("x-terminal-emulator"):
        terminal_cmd = ["x-terminal-emulator", "-e", *client_cmd]

    if not terminal_cmd:
        return False

    try:
        subprocess.Popen(terminal_cmd)
        return True
    except Exception as e:
        print(f"Failed to open a new terminal automatically: {e}")
        return False


def parse_args():
    parser = argparse.ArgumentParser(description="Minimal local vLLM chat script")
    parser.add_argument("--_client", action="store_true", help=argparse.SUPPRESS)
    parser.add_argument("--model", default=DEFAULTS["model"])
    parser.add_argument(
        "--served-model-name",
        default=DEFAULTS["served_model_name"],
    )
    parser.add_argument("--host", default=DEFAULTS["host"])
    parser.add_argument("--port", type=int, default=DEFAULTS["port"])
    parser.add_argument("--max-model-len", type=int, default=DEFAULTS["max_model_len"])
    parser.add_argument(
        "--max-num-seqs",
        "--max_num_seqs",
        dest="max_num_seqs",
        type=int,
        default=DEFAULTS["max_num_seqs"],
    )
    parser.add_argument(
        "--enable-auto-tool-choice",
        action=argparse.BooleanOptionalAction,
        default=DEFAULTS["enable_auto_tool_choice"],
    )
    parser.add_argument(
        "--async-scheduling",
        action=argparse.BooleanOptionalAction,
        default=DEFAULTS["async_scheduling"],
    )
    parser.add_argument(
        "--allowed-local-media-path",
        default=DEFAULTS["allowed_local_media_path"],
        help="Optional local media path. Leave empty to disable.",
    )
    parser.add_argument("--tool-call-parser", default=DEFAULTS["tool_call_parser"])
    parser.add_argument("--reasoning-parser", default=DEFAULTS["reasoning_parser"])
    parser.add_argument(
        "--default-chat-template-kwargs",
        default=DEFAULTS["default_chat_template_kwargs"],
    )
    return parser.parse_args()


def main():
    args = parse_args()
    base_url = f"http://{resolve_client_host(args.host)}:{args.port}"
    if args._client:
        chat_loop(base_url, args.served_model_name)
        return

    proc = launch_vllm(args)
    atexit.register(stop_vllm, proc)

    print(f"Waiting for service to become ready: {base_url}")
    if not wait_vllm_ready(base_url):
        print("vLLM startup timed out. Check server logs above.")
        stop_vllm(proc)
        sys.exit(1)

    if spawn_chat_terminal(args):
        print("Model is ready. Opened a new terminal for chat; this terminal keeps server logs.")
        print("Press Ctrl+C here to stop vLLM.")
        try:
            proc.wait()
        except KeyboardInterrupt:
            print("\nInterrupted. Stopping vLLM...")
    else:
        print("No supported terminal found. Falling back to chat in this terminal.")
        chat_loop(base_url, args.served_model_name)


if __name__ == "__main__":
    main()
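
Assuming the script above is saved locally (the file name is arbitrary, e.g. chat_vllm.py), running python chat_vllm.py launches the server, waits for the /v1/models health check, and then opens an interactive chat terminal; any of the flags defined in parse_args, such as --port or --no-async-scheduling, can be passed on the command line to override the defaults.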

Use the OpenAI-compatible API directly

Instruct Mode

vllm serve YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ --served-model-name YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ --host localhost --port 8000 --async-scheduling --max-model-len 66464 --enable-auto-tool-choice --tool-call-parser gemma4 --gpu-memory-utilization 0.97 --max_num_seqs 1 --allowed-local-media-path /home/ycwtg/image

Thinking Mode

vllm serve YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ --served-model-name YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ --host localhost --port 8000 --async-scheduling --max-model-len 66464 --enable-auto-tool-choice --tool-call-parser gemma4 --gpu-memory-utilization 0.97 --max_num_seqs 1 --allowed-local-media-path /home/ycwtg/image --reasoning-parser gemma4 --default-chat-template-kwargs '{"enable_thinking": true}'

Either command above creates OpenAI-compatible API endpoints at http://localhost:8000/v1.

See the vLLM documentation for more details.
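
As a quick check of either endpoint, the sketch below queries the server with the official openai Python client. The api_key is a placeholder, since vLLM does not verify it unless --api-key is set, and the reasoning_content field is only expected when the Thinking Mode command with --reasoning-parser is used.

# Minimal sketch: query the OpenAI-compatible endpoint started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="YCWTG/gemma-4-31B-it-NVFP4A16-GPTQ",
    messages=[{"role": "user", "content": "Summarize NVFP4A16 quantization in one sentence."}],
)
message = response.choices[0].message
# reasoning_content is a vLLM extension populated by the reasoning parser;
# it is absent when the server runs in plain Instruct mode.
print("reasoning:", getattr(message, "reasoning_content", None))
print("answer:", message.content)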

Generate the Model

See code here.
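
For orientation, a typical llm-compressor recipe for this kind of checkpoint looks roughly like the sketch below. This is an illustration only, not the exact script behind this model: it assumes llm-compressor's oneshot API, an NVFP4A16 preset scheme, and regex-style ignore patterns, and the model class for a multimodal Gemma checkpoint may differ.

# Illustrative sketch only; not the exact script used for this checkpoint.
# Assumes llm-compressor's oneshot API and an "NVFP4A16" preset scheme.
from transformers import AutoModelForCausalLM, AutoProcessor
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "google/gemma-4-31B-it"  # base model named in this card
SAVE_DIR = "gemma-4-31B-it-NVFP4A16"

# For a multimodal checkpoint a different Auto class may be required.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Quantize Linear weights to NVFP4 while keeping lm_head and the vision
# modules in original precision, matching the Quantization Strategy table.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4A16",
    ignore=["lm_head", "re:vision_tower.*", "re:embed_vision.*"],
)

oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)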

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to provide accurate information. Because of the limitations of the pretrained model and its fine-tuning data, it is possible for this model to generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
