---
pretty_name: MLX Benchmarks
license: apache-2.0
language:
  - en
tags:
  - benchmark
  - evaluation
  - llm
  - mlx
  - apple-silicon
  - throughput
  - latency
  - code-generation
  - reasoning
  - math
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
---

# MLX Benchmarks

Structured benchmark results for MLX-quantized and other locally-hosted LLMs on Apple Silicon. Covers throughput, time-to-first-token, tool-calling, code generation, reasoning, knowledge, and math suites.

Results are produced by a sweep harness that runs upstream evaluation tools against a local vllm-mlx inference server.

All data here is generated on Apple Silicon hardware (MINISFORUM MS-A2 / M4 Max class), stored in flat columnar Parquet for easy querying, and appended to via unique-filename commits so historical shards are never overwritten.

## Quickstart

```python
from datasets import load_dataset

ds = load_dataset("JacobPEvans/mlx-benchmarks")
print(ds)

# Example: average throughput per model
import pandas as pd

df = ds["train"].to_pandas()
throughput_rows = df[df["suite"] == "throughput"]
print(
    throughput_rows.groupby("model")["metric_value"]
    .mean()
    .sort_values(ascending=False)
)
```

Raw Parquet fetch (token-efficient for agents):

```bash
curl -sSL \
  https://huggingface.co/datasets/JacobPEvans/mlx-benchmarks/resolve/main/data/train-00000-of-00001.parquet \
  -o run.parquet
```

## Schema

Each input benchmark run produces a JSON envelope (see schema.json in this repo for the authoritative v1 spec). The envelope is exploded row-wise into flat scalar columns — one row per entry in the envelope's results[] array. Skipped runs become a single sentinel row with null metric columns and skipped=true. This mirrors the columnar layout used by the Open LLM Leaderboard contents dataset.
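As an illustrative sketch only — the actual harness code is not part of this dataset, and the envelope fields shown are a hypothetical subset of the columns listed below — the row-wise explosion might look like:

```python
import json

def explode_envelope(envelope: dict) -> list[dict]:
    """Flatten one benchmark envelope into flat row dicts (sketch)."""
    # Shared scalar columns copied onto every row.
    base = {k: envelope.get(k) for k in ("suite", "model", "git_sha", "timestamp")}
    if envelope.get("skipped"):
        # A skipped run becomes a single sentinel row with null metrics.
        return [{**base, "skipped": True,
                 "metric_name": None, "metric_value": None}]
    rows = []
    for result in envelope.get("results", []):
        rows.append({**base, "skipped": False,
                     "metric_name": result["name"],
                     "metric_value": result["value"],
                     # Nested tags survive as a JSON-serialized string.
                     "tags_json": json.dumps(result.get("tags", {}))})
    return rows

envelope = {  # hypothetical example envelope
    "suite": "throughput", "model": "example/model-4bit",
    "git_sha": "abc1234", "timestamp": "2025-01-01T00:00:00Z",
    "results": [{"name": "decode_tok_s", "value": 41.5, "tags": {"batch": 1}}],
}
rows = explode_envelope(envelope)
```

One row per `results[]` entry keeps every column scalar, which is what lets the Parquet stay flat and trivially queryable.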

| Column | Type | Notes |
| --- | --- | --- |
| `suite` | string | One of: `throughput`, `ttft`, `tool-calling`, `code-accuracy`, `framework-eval`, `capability-comparison`, `coding`, `reasoning`, `knowledge`, `evalplus`, `math-hard` |
| `model` | string | Full model identifier |
| `git_sha` | string | Commit SHA of the generator at run time |
| `timestamp` | string | ISO-8601 UTC start of the run |
| `trigger` | string | `schedule`, `pr`, `workflow_dispatch`, or `local` |
| `schema_version` | string | Envelope schema version (currently `"1"`) |
| `pr_number` | int64 | PR number if triggered by a pull request, else null |
| `skipped` | bool | True for sentinel rows where the suite was skipped |
| `os` | string | Operating system at run time |
| `chip` | string | CPU/chip identifier |
| `memory_gb` | int64 | Total system RAM |
| `vllm_mlx_version` | string | Backend version if captured |
| `runner` | string | Runner label or `local` |
| `metric_name` | string | Individual test/measurement name |
| `metric_metric` | string | Metric family (e.g. `throughput`, `latency`, `score`) |
| `metric_value` | float64 | Numeric value |
| `metric_unit` | string | Unit (`tok/s`, `seconds`, `ratio`, ...) |
| `tags_json` | string | JSON-serialized tag dict (per-suite custom metadata) |
| `errors_json` | string | JSON-serialized list of non-fatal errors from the run |

Nested fields from the envelope (tags, errors) are preserved as JSON-serialized strings so no information is lost — rehydrate with json.loads(row["tags_json"]).
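For example, rehydrating the serialized tags of a single row (the row dict here is made up for illustration; real rows come from the dataframe in the Quickstart, and the exact tag keys depend on the suite that produced them):

```python
import json

# Hypothetical row as it would appear after to_pandas().
row = {
    "metric_name": "decode_tok_s",
    "tags_json": '{"batch": 1, "quant": "4bit"}',
}

tags = json.loads(row["tags_json"])  # back to a plain Python dict
```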

## Update cadence

New rows are appended on every sweep via a unique-filename commit pattern (data/run-{timestamp}-{sha}-{suite}-{model}.parquet). Historical shards are never overwritten. load_dataset() concatenates all data/*.parquet files into a single train split at load time.
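A sketch of how such a collision-free shard path could be constructed — the pattern comes from this card, but the timestamp format, the model-name sanitization, and the example values are all assumptions:

```python
from datetime import datetime, timezone

def shard_path(sha: str, suite: str, model: str, now: datetime) -> str:
    """Build a unique shard name following the documented pattern (sketch)."""
    ts = now.strftime("%Y%m%dT%H%M%SZ")   # assumed timestamp format
    safe_model = model.replace("/", "_")  # assumed: keep the path flat
    return f"data/run-{ts}-{sha}-{suite}-{safe_model}.parquet"

path = shard_path("abc1234", "throughput", "example/model-4bit",
                  datetime(2025, 1, 1, tzinfo=timezone.utc))
```

Because every shard name embeds the timestamp, commit SHA, suite, and model, concurrent sweeps cannot collide and no commit ever rewrites an existing file.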

## License

Apache 2.0 — same as the underlying upstream evaluation tools.