---
language:
  - en
license: mit
size_categories:
  - 1M<n<10M
task_categories:
  - text-classification
  - feature-extraction
  - text-retrieval
tags:
  - tabular
  - embedding
  - benchmark
  - contrastive-learning
  - retrieval
  - classification
pretty_name: TabBench - Tabular Embedding Benchmark
---

# TabBench: Tabular Embedding Benchmark

**A Comprehensive Evaluation Suite for Tabular Embedding Models**

[GitHub](https://github.com/qiangminjie27/TabEmbed)


## Overview

TabBench is a comprehensive benchmark designed to evaluate the tabular understanding capability of embedding models. It assesses two critical dimensions of tabular representation: linear separability (via classification) and semantic alignment (via retrieval).

TabBench aggregates diverse datasets from four authoritative repositories and provides a standardized evaluation pipeline.

## Benchmark Statistics

| Category | Count | Samples / Corpus |
|---|---|---|
| **Classification** | | |
| Grinsztajn | 56 datasets | 521,889 |
| OpenML-CC18 | 66 datasets | 249,939 |
| OpenML-CTR23 | 34 datasets | 210,026 |
| UniPredict | 155 datasets | 386,618 |
| **Classification Total** | **311 datasets** | **1,368,472** |
| **Retrieval** | | |
| Corpus | | 1,394,247 |
| Numeric Queries | | 10,000 |
| Categorical Queries | | 10,000 |
| Mixed Queries | | 10,000 |
| **Retrieval Total** | **30,000 queries** | **1,394,247** |

## Data Format

### Serialization

All tabular rows are serialized into natural language using the template:

```
The {column_name} is {value}. The {column_name} is {value}. ...
```

For example:

```
The age is 25. The occupation is Engineer. The salary is 75000.50. The city is New York.
```
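The template can be sketched in a few lines of Python (`serialize_row` is an illustrative helper, not part of the released code; note that Python's default float formatting drops trailing zeros):

```python
def serialize_row(row: dict) -> str:
    """Serialize one tabular row into the 'The {column_name} is {value}.' template."""
    return " ".join(f"The {col} is {val}." for col, val in row.items())

print(serialize_row({"age": 25, "occupation": "Engineer", "city": "New York"}))
# → The age is 25. The occupation is Engineer. The city is New York.
```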

### Classification Task

Each dataset directory contains:

- `train.jsonl` / `test.jsonl`: each line is a JSON object with the following fields:
  - `text`: serialized tabular row
  - `label`: target label (string)
  - `dataset`: dataset name
  - `benchmark`: source benchmark name
  - `task_type`: task type (`clf`)
- `train.csv` / `test.csv`: original tabular data in CSV format
- `metadata.json`: dataset metadata including `dataset`, `benchmark`, `sub_benchmark`, `task_type`, `data_type`, `target_column`, `label_values`, `num_labels`, `train_samples`, `test_samples`, `train_label_distribution`, `test_label_distribution`

Example line:

```json
{"text": "The age is 36. The workclass is Private. The fnlwgt is 172256.0. ...", "label": ">50K", "dataset": "adult", "benchmark": "openml_cc18", "task_type": "clf"}
```
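A split in this format can be parsed with only the standard library (`load_jsonl` is an illustrative helper, not part of the released code):

```python
import json

def load_jsonl(lines):
    """Parse an iterable of JSONL lines, e.g. an open train.jsonl file."""
    return [json.loads(line) for line in lines if line.strip()]

# In practice: records = load_jsonl(open("train.jsonl", encoding="utf-8"))
records = load_jsonl([
    '{"text": "The age is 36.", "label": ">50K", '
    '"dataset": "adult", "benchmark": "openml_cc18", "task_type": "clf"}',
])
texts = [r["text"] for r in records]
labels = [r["label"] for r in records]
```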

### Retrieval Task

The retrieval directory contains:

- `corpus.jsonl`: global corpus of serialized rows (~1.4M documents), each with the fields `idx`, `text`, `label`, `dataset`, `benchmark`
- `queries.jsonl`: all retrieval queries (30,000 total: 10k numeric + 10k categorical + 10k mixed)

Corpus format:

```json
{"idx": 0, "text": "The V1 is 3.0. The V2 is 559.0. ...", "label": "1.0", "dataset": "albert", "benchmark": "grinsztajn"}
```

Query format:

```json
{
  "task": "retrieval",
  "query_id": "retrieval_numeric_000001",
  "query_text": "find records where Easter is 0",
  "query_type": "numeric",
  "conditions": [{"field": "Easter", "operator": "==", "value": 0.0, "type": "numeric"}],
  "num_conditions": 1,
  "matching_indices": [1384050, 1384051, ...],
  "num_matches": 1822
}
```
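As a sketch of how `conditions` define relevance: a corpus row matches a query when it satisfies every condition. The helper and the full operator set below are assumptions for illustration (only `==` appears in the example above):

```python
import operator

# Map operator strings from queries.jsonl to Python comparisons
# (assumed operator set; "==" is the one shown in the example).
OPS = {"==": operator.eq, "!=": operator.ne,
       ">": operator.gt, ">=": operator.ge,
       "<": operator.lt, "<=": operator.le}

def row_matches(row: dict, conditions: list) -> bool:
    """True if the row satisfies every condition of a query."""
    return all(OPS[c["operator"]](row[c["field"]], c["value"])
               for c in conditions)

query_conditions = [{"field": "Easter", "operator": "==", "value": 0.0, "type": "numeric"}]
row_matches({"Easter": 0.0}, query_conditions)  # True
```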

## Evaluation Protocol

### Classification (Linear Probing)

1. Extract frozen embeddings for all samples using the target model
2. Train an independent Logistic Regression classifier per dataset (`max_iter=1000`, `random_state=42`)
3. Report Accuracy and Macro-F1 on the test split
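The three steps above can be sketched with scikit-learn, using random vectors as stand-ins for the frozen embeddings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Random stand-ins for frozen embeddings; in practice these come from
# encoding the serialized rows with the target model.
rng = np.random.default_rng(42)
X_train, y_train = rng.normal(size=(200, 16)), rng.integers(0, 2, size=200)
X_test, y_test = rng.normal(size=(50, 16)), rng.integers(0, 2, size=50)

# Settings from the protocol above.
clf = LogisticRegression(max_iter=1000, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, pred)
macro_f1 = f1_score(y_test, pred, average="macro")
```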

### Retrieval (Dense Retrieval)

1. Encode all corpus documents and queries
2. Build a Faiss `IndexFlatIP` index (cosine similarity via L2-normalized vectors)
3. Retrieve the top-k documents for each query
4. Report MRR@10 and nDCG@10
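This pipeline can be sketched with NumPy standing in for Faiss: inner product over L2-normalized vectors equals cosine similarity, so `IndexFlatIP` search reduces to a dot product plus argsort. The `mrr_at_k` helper is illustrative, not the benchmark's own implementation:

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def retrieve_topk(query_emb, corpus_emb, k=10):
    """Top-k corpus indices per query by cosine similarity."""
    scores = l2_normalize(query_emb) @ l2_normalize(corpus_emb).T
    return np.argsort(-scores, axis=1)[:, :k]

def mrr_at_k(ranked_ids, relevant_ids, k=10):
    """Reciprocal rank of the first relevant document in the top k."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

corpus = np.eye(4, dtype=float)           # 4 toy documents
query = np.array([[0.9, 0.1, 0.0, 0.0]])  # closest to document 0
top = retrieve_topk(query, corpus, k=2)   # → [[0, 1]]
```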

### Overall Score

The Overall metric is the macro-average of Accuracy, F1, MRR@10, and nDCG@10.
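A minimal sketch, reproducing the Overall column from the other four metrics (all in percent); the example values are the Qwen3-Embedding-0.6B row of the leaderboard:

```python
def overall_score(accuracy, f1, mrr_at_10, ndcg_at_10):
    """Unweighted mean of the four benchmark metrics."""
    return (accuracy + f1 + mrr_at_10 + ndcg_at_10) / 4

round(overall_score(62.81, 50.32, 36.00, 30.56), 2)  # → 44.92
```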

## Leaderboard

| Model | #Params | Overall | Accuracy | F1 | MRR@10 | nDCG@10 |
|---|---|---|---|---|---|---|
| Jina-Embeddings-v3 | 0.6B | 41.48 | 60.33 | 46.11 | 32.49 | 26.98 |
| Jasper-Token-Compression | 0.6B | 42.75 | 61.25 | 47.69 | 33.56 | 28.50 |
| Qwen3-Embedding-0.6B | 0.6B | 44.92 | 62.81 | 50.32 | 36.00 | 30.56 |
| TabEmbed-0.6B | 0.6B | 65.27 | 67.16 | 56.56 | 71.72 | 65.64 |
| F2LLM-4B | 4B | 48.02 | 64.92 | 52.48 | 40.60 | 34.08 |
| Octen-Embedding-4B | 4B | 48.62 | 65.36 | 53.64 | 40.97 | 34.51 |
| Qwen3-Embedding-4B | 4B | 48.91 | 65.09 | 52.72 | 42.04 | 35.76 |
| TabEmbed-4B | 4B | 70.71 | 69.51 | 59.75 | 79.33 | 74.25 |
| SFR-Embedding-Mistral | 7B | 49.42 | 64.28 | 50.75 | 44.23 | 38.41 |
| Linq-Embed-Mistral | 7B | 50.74 | 66.06 | 53.33 | 44.65 | 38.92 |
| GTE-Qwen2-7B-Instruct | 7B | 51.27 | 64.67 | 51.76 | 47.44 | 41.19 |
| Qwen3-Embedding-8B | 8B | 48.03 | 65.08 | 52.81 | 40.06 | 34.16 |
| TabEmbed-8B | 8B | 71.62 | 69.88 | 60.19 | 80.58 | 75.83 |

## Quick Start

```bash
# Clone the evaluation code
git clone https://github.com/qiangminjie27/TabEmbed.git
cd TabEmbed
pip install -r requirements.txt

# Run evaluation
python src/run_benchmark.py \
    --benchmark_dir /path/to/TabBench \
    --model_name_or_path your-model-name \
    --output_dir results/ \
    --max_seq_length 1024 \
    --batch_size 64
```

## Source Datasets

TabBench is built upon four high-quality data sources: Grinsztajn, OpenML-CC18, OpenML-CTR23, and UniPredict.

Raw evaluation data is sourced from tabula-8b-eval-suite.

## Citation

If you use TabBench in your research, please cite:

Paper coming soon. Please check back later for the BibTeX citation.

## License

This benchmark is released under the MIT License.

**Note:** The individual upstream datasets included in this benchmark may have their own respective licenses. Please refer to the original data sources for their specific terms.