---
language:
- en
license: cc-by-4.0
task_categories:
- text-generation
---

## Dataset Information

Most conversational agents (CAs) are designed to satisfy user needs through user-driven interactions. However, many real-world settings, such as academic interviewing, judicial proceedings, and journalistic investigations, involve broader institutional decision-making processes and require agents that can elicit information from users. To enable systematic research on this setting, we present *YIELD*, a 26M-token dataset of 2,281 ethically sourced, human-to-human dialogues.

For full details, see the accompanying paper [here](https://doi.org/10.48550/arXiv.2604.10968).

## Code Repository

GitHub: https://github.com/infosenselab/yield

## Citing YIELD

If you use this resource in your projects, please cite the following paper.

```bibtex
@misc{De_Lima_YIELD_A_Large-Scale_2026,
  author = {De Lima, Victor and Yang, Grace Hui},
  doi = {10.48550/arXiv.2604.10968},
  title = {{YIELD: A Large-Scale Dataset and Evaluation Framework for Information Elicitation Agents}},
  url = {https://arxiv.org/abs/2604.10968},
  year = {2026}
}
```