---
configs:
  - config_name: single_hop
    default: true
    data_files:
      - split: train
        path: single_hop/train-*.parquet
      - split: test
        path: single_hop/test-*.parquet
  - config_name: multi_hop
    data_files:
      - split: train
        path: multi_hop/train-*.parquet
      - split: test
        path: multi_hop/test-*.parquet
license: mit
task_categories:
  - question-answering
language:
  - en
tags:
  - tool-use
  - agents
  - llm
  - benchmark
pretty_name: When2Tool
size_categories:
  - 1K<n<10K
---

# When2Tool

Benchmark dataset for **"LLM Agents Already Know When to Call Tools — Even Without Reasoning"** ([arXiv:2605.09252](https://arxiv.org/abs/2605.09252)).

## Overview

When2Tool is a benchmark of 18 environments designed to study *when* LLM agents should call tools. Tasks range from trivially solvable without tools to impossible without them. The 15 single-hop environments fall into three categories of tool necessity, plus a multi-hop group:

- **Computational Scale** (5 envs): Calculator, Statistics, Counting, Matrix, Prime
- **Knowledge Boundaries** (5 envs): Retriever, Historical Year, Game Rule, Hash, Decoding
- **Execution Reliability** (5 envs): List Manipulation, DateTime, Code Executor, Schedule, Regex Match
- **Multi-hop** (3 envs): Calculator, Retriever, Code Executor (3-step chains)

Each environment has three difficulty levels (easy, medium, hard) that create a clear decision boundary between tool-necessary and tool-unnecessary tasks.

## Dataset Structure

### Configs

- `single_hop`: 15 single-hop environments (900 train / 2,250 test)
- `multi_hop`: 3 multi-hop environments with 3-step chains (180 train / 450 test)

### Fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | int | Unique task identifier |
| `difficulty` | str | `easy`, `medium`, or `hard` |
| `multi_step` | bool | Whether the task requires multiple tool calls |
| `instruction` | str | The task instruction given to the agent |
| `env_name` | str | Environment name (e.g., `CalculatorEnv`) |
| `tools` | str (JSON) | Available tools for this environment |
| `parameters` | str (JSON) | Environment parameters (e.g., corpus for retriever) |
| `answer` | str | Expected final answer |
| `steps` | str (JSON) | Intermediate steps for multi-hop tasks |
| `tags` | str (JSON) | Environment and task type tags |
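
The `tools`, `parameters`, `steps`, and `tags` fields are stored as JSON-encoded strings and can be decoded with `json.loads`. A minimal sketch using a hypothetical row (the field values below are illustrative, not taken from the dataset):

```python
import json

# Hypothetical row; real values come from the dataset splits.
row = {
    "id": 0,
    "difficulty": "easy",
    "multi_step": False,
    "instruction": "Compute 12 * 7.",
    "env_name": "CalculatorEnv",
    "tools": '[{"name": "calculator"}]',
    "tags": '["computational_scale"]',
}

# Decode the JSON-encoded string fields into Python objects.
tools = json.loads(row["tools"])
tags = json.loads(row["tags"])

print(tools[0]["name"])  # calculator
```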

### Loading

```python
from datasets import load_dataset

# Single-hop tasks
ds = load_dataset("Trustworthy-ML-Lab/When2Tool", "single_hop")

# Multi-hop tasks
ds_mh = load_dataset("Trustworthy-ML-Lab/When2Tool", "multi_hop")

# Access a sample
print(ds["test"][0]["instruction"])
print(ds["test"][0]["env_name"])
```

## Citation

```bibtex
@article{sun2026when2tool,
  title={LLM Agents Already Know When to Call Tools -- Even Without Reasoning},
  author={Sun, Chung-En and Liu, Linbo and Yan, Ge and Wang, Zimo and Weng, Tsui-Wei},
  journal={arXiv preprint arXiv:2605.09252},
  year={2026},
  url={https://arxiv.org/abs/2605.09252}
}
```