---
license: other
size_categories:
- 10K<n<100K
task_categories:
- text-generation
license_name: mixed-license
license_link: LICENSE.md
tags:
- llm-agents
- deployment-time-learning
- continual-learning
configs:
- config_name: alfworld
data_files: alfworld/alfworld.jsonl
- config_name: banking77
data_files: banking77/banking77.jsonl
- config_name: bird
data_files: bird/bird.jsonl
- config_name: cmdl
data_files: cmdl/cmdl.jsonl
- config_name: ddxplus
data_files: ddxplus/ddxplus.jsonl
- config_name: lfd
data_files: lfd/lfd.jsonl
- config_name: mud
data_files: mud/mud.jsonl
- config_name: rca
data_files: rca/rca.jsonl
- config_name: scienceworld
data_files: scienceworld/scienceworld.jsonl
- config_name: sentifin
data_files: sentifin/sentifin.jsonl
- config_name: spider
data_files: spider/spider.jsonl
- config_name: 2wiki
data_files: 2wiki/2wiki.jsonl
- config_name: ehr
data_files: ehr/ehr.jsonl
---
# DTLBench
<p align="center">
<a href="https://github.com/guosyjlu/CASCADE">
💻 GitHub Repo
</a> |
<a href="https://physionet.org/content/mimic-iv-ext-dtlbench/1.0.0/">
🫀 DTLBench PhysioNet (To be released)
</a> |
<a href="https://arxiv.org/abs/2605.06702">
📄 Paper
</a>
</p>
DTLBench is a benchmark for **deployment-time learning** of large language model agents. It collects diverse task streams spanning medical diagnosis, legal analysis, operational (AIOps) reasoning, financial prediction, text-to-SQL, embodied decision making, tabular reasoning on electronic health records (EHRs), and web-based deep search.
The dataset was introduced in the paper: [CASCADE: Case-Based Continual Adaptation for Large Language Models During Deployment](https://arxiv.org/abs/2605.06702).
## Benchmark Overview
- Total tasks: `16` (3 of which will be released through PhysioNet)
- Data format: one JSON object per line (`.jsonl`)
- Primary use case: benchmark streams for deployment-time learning
DTLBench covers three environment styles used in CASCADE:
- `single-turn`: one input, one final answer
- `multi-turn, simulated`: sequential interaction with a simulated environment
- `multi-turn, real-world`: sequential interaction with a real-world environment
### Summary Statistics
Summary statistics of DTLBench. *Maximum Steps* is the maximum number of interaction steps the environment allows per task.
| **Property** | **Domain** | **Task** | **Dataset** | **Maximum Steps** | **Number of Samples** |
|------------------------------|---------------------|--------------------------------------------------|-------------------|-------------------|-----------------------|
| **Single-turn** | **Medical** | Medical Diagnosis | DDXPlus | 1 | 3136 |
| | | Medication Recommendation | MIMIC-IV-MR | 1 | 2881 |
| | | Medical Specialty Referral | MIMIC-IV-MSR | 1 | 2115 |
| | | Triage Level Prediction | MIMIC-IV-TLP | 1 | 2200 |
| | **Legal** | Multi-Defendant Legal Charge Prediction | MUD | 1 | 1740 |
| | | Penalty Legal Prediction | CMDL | 1 | 2080 |
| | **Financial** | Financial Customer Intent Routing | Banking77 | 1 | 5000 |
| | | Entity-Aware Financial Sentiment Analysis | SEntFiN | 1 | 2299 |
| | **AIOps** | AIOps Root Cause Analysis | RCA | 1 | 2925 |
| | | AIOps Log Fault Diagnosis | LFD | 1 | 3000 |
| | **Coding** | Text-to-SQL | SPIDER | 1 | 2147 |
| | | Knowledge-Augmented Text-to-SQL | BIRD | 1 | 1534 |
| **Multi-turn, Simulated** | **Embodied** | Household Embodied Decision Making | ALFWorld | 30 | 2000 |
| | | Scientific Embodied Decision Making | ScienceWorld | 10-30 | 1857 |
| **Multi-turn, Real-world** | **Information Seeking** | Web-based Deep Search | 2Wiki | 5 | 2500 |
| | **Medical** | Complex Tabular Reasoning on Electronic Health Records | MIMIC-III | 5 | 2500 |
## Data Format
Each config can be loaded independently from Hugging Face, and each task keeps the fields needed by its original environment. All tasks include a `task` field, which is the main query or observation presented to the agent.
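As a minimal sketch of the record shape, the snippet below parses one JSONL line and pulls out the shared `task` field. The example record is illustrative: any field other than `task` varies across subdatasets and is not guaranteed by this card.

```python
import json

def iter_records(path):
    """Yield one JSON object per line of a .jsonl file, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# One JSONL line is one JSON object; every task includes a `task` field
# holding the main query or observation shown to the agent.
line = '{"task": "A 45-year-old patient presents with chest pain ..."}'
record = json.loads(line)
print(record["task"])
```

`iter_records` can be pointed at a locally downloaded split (e.g. `ddxplus/ddxplus.jsonl`) if you prefer not to go through the `datasets` loader.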
## Load the Dataset
Using `datasets`:
```python
from datasets import load_dataset
# Load one task
ddxplus = load_dataset("guosy/DTLBench", "ddxplus")
print(ddxplus["train"][0])
# Load another task
spider = load_dataset("guosy/DTLBench", "spider")
print(spider["train"][0]["task"])
```
Using `huggingface-cli`:
```bash
huggingface-cli download --repo-type dataset guosy/DTLBench --local-dir ./DTLBench
```
## License
DTLBench is a **mixed-license** collection. Each subdataset follows its own original license, and the benchmark authors do not claim additional rights beyond those licenses.
Please see [LICENSE.md](LICENSE.md) and the per-task `LICENSE` files for details. In particular:
- Some tasks are under permissive licenses such as MIT or Apache-2.0
- Some tasks use CC licenses with attribution or share-alike requirements
- Some tasks have unclear or unknown redistribution terms
You are responsible for ensuring your use complies with the license of each individual subdataset.
## Citation
If you use DTLBench, please consider citing our paper:
```
@misc{guo2026cascadecasebasedcontinualadaptation,
title={CASCADE: Case-Based Continual Adaptation for Large Language Models During Deployment},
author={Siyuan Guo and Yali Du and Hechang Chen and Yi Chang and Jun Wang},
year={2026},
eprint={2605.06702},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2605.06702},
}
```