---
configs:
  - config_name: single_hop
    default: true
    data_files:
      - split: train
        path: single_hop/train-*.parquet
      - split: test
        path: single_hop/test-*.parquet
  - config_name: multi_hop
    data_files:
      - split: train
        path: multi_hop/train-*.parquet
      - split: test
        path: multi_hop/test-*.parquet
license: mit
task_categories:
  - question-answering
language:
  - en
tags:
  - tool-use
  - agents
  - llm
  - benchmark
pretty_name: When2Tool
size_categories:
  - 1K<n<10K
---

# When2Tool

Benchmark dataset for "LLM Agents Already Know When to Call Tools — Even Without Reasoning" (arXiv:2605.09252).

## Overview

When2Tool is a benchmark of 18 environments designed to study when LLM agents should call tools. Tasks range from trivially solvable without tools to impossible without them. The 15 single-hop environments span three categories of tool necessity, complemented by 3 multi-hop environments:

- **Computational Scale** (5 envs): Calculator, Statistics, Counting, Matrix, Prime
- **Knowledge Boundaries** (5 envs): Retriever, Historical Year, Game Rule, Hash, Decoding
- **Execution Reliability** (5 envs): List Manipulation, DateTime, Code Executor, Schedule, Regex Match
- **Multi-hop** (3 envs): Calculator, Retriever, Code Executor (3-step chains)

Each environment has three difficulty levels (easy, medium, hard) that create a clear decision boundary between tool-necessary and tool-unnecessary tasks.

## Dataset Structure

### Configs

- **single_hop**: 15 single-hop environments (900 train / 2,250 test)
- **multi_hop**: 3 multi-hop environments with 3-step chains (180 train / 450 test)
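Multi-hop rows record their intermediate hops in the JSON-encoded `steps` field. A minimal decoding sketch is shown below; the inner structure of each step entry here is illustrative, not the dataset's exact schema:

```python
import json

# Hypothetical steps value for a 3-step multi-hop calculator task;
# the per-step keys ("hop", "instruction", "answer") are illustrative.
steps = json.dumps([
    {"hop": 1, "instruction": "Compute 17 * 23", "answer": "391"},
    {"hop": 2, "instruction": "Add 9 to the previous result", "answer": "400"},
    {"hop": 3, "instruction": "Divide the previous result by 8", "answer": "50"},
])

# Decode the JSON string back into a list of step dicts.
for step in json.loads(steps):
    print(step["hop"], step["answer"])
```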

### Fields

| Field | Type | Description |
|---|---|---|
| `id` | int | Unique task identifier |
| `difficulty` | str | `easy`, `medium`, or `hard` |
| `multi_step` | bool | Whether the task requires multiple tool calls |
| `instruction` | str | The task instruction given to the agent |
| `env_name` | str | Environment name (e.g., `CalculatorEnv`) |
| `tools` | str (JSON) | Available tools for this environment |
| `parameters` | str (JSON) | Environment parameters (e.g., corpus for retriever) |
| `answer` | str | Expected final answer |
| `steps` | str (JSON) | Intermediate steps for multi-hop tasks |
| `tags` | str (JSON) | Environment and task type tags |
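Because `tools`, `parameters`, `steps`, and `tags` are stored as JSON strings, they need to be decoded after loading. A minimal sketch, using a hypothetical record that mirrors the schema above rather than an actual dataset row:

```python
import json

# Hypothetical record mirroring the field schema; real rows come from the dataset.
record = {
    "id": 0,
    "difficulty": "hard",
    "multi_step": False,
    "instruction": "Compute 12345 * 6789.",
    "env_name": "CalculatorEnv",
    "tools": '[{"name": "calculator", "description": "Evaluate arithmetic expressions"}]',
    "answer": "83810205",
}

# Decode the JSON-encoded string field into Python objects.
tools = json.loads(record["tools"])
print(tools[0]["name"])  # calculator
```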

## Loading

```python
from datasets import load_dataset

# Single-hop tasks
ds = load_dataset("Trustworthy-ML-Lab/When2Tool", "single_hop")

# Multi-hop tasks
ds_mh = load_dataset("Trustworthy-ML-Lab/When2Tool", "multi_hop")

# Access a sample
print(ds["test"][0]["instruction"])
print(ds["test"][0]["env_name"])
```
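For analyses such as per-difficulty accuracy, a common first step is grouping tasks by environment and difficulty. The sketch below runs on a small hypothetical in-memory sample rather than the full split, but real rows carry the same keys:

```python
from collections import Counter

# Hypothetical rows standing in for ds["test"]; actual rows have the same fields.
rows = [
    {"env_name": "CalculatorEnv", "difficulty": "easy"},
    {"env_name": "CalculatorEnv", "difficulty": "hard"},
    {"env_name": "RetrieverEnv", "difficulty": "hard"},
]

# Count tasks per (environment, difficulty) pair.
counts = Counter((r["env_name"], r["difficulty"]) for r in rows)
print(counts[("CalculatorEnv", "hard")])  # 1
```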

## Citation

```bibtex
@article{sun2026when2tool,
  title={LLM Agents Already Know When to Call Tools -- Even Without Reasoning},
  author={Sun, Chung-En and Liu, Linbo and Yan, Ge and Wang, Zimo and Weng, Tsui-Wei},
  journal={arXiv preprint arXiv:2605.09252},
  year={2026},
  url={https://arxiv.org/abs/2605.09252}
}
```