---
license: mit
task_categories:
- table-question-answering
- text-generation
- summarization
language:
- en
pretty_name: DA-Code
size_categories:
- 1B<n<10B
tags:
- code
configs:
- config_name: default
  data_files:
  - split: test
    path: "test.csv"
  sep: ","
---

# [EMNLP2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models
|
|
DA-Code is a comprehensive evaluation dataset designed to assess the data analysis and code generation capabilities of LLMs in agent-based data science tasks. Our paper and experiment reports have been published on arXiv.
|
|
## Dataset Overview
|
|
- 500 complex real-world data analysis tasks across Data Wrangling (DW), Machine Learning (ML), and Exploratory Data Analysis (EDA).
- Tasks cover the entire data analysis pipeline, from raw data handling to gaining insights using SQL and Python.
- Each example is meticulously designed to ensure high complexity and quality, with robust evaluation suites.
- An interactive sandbox environment allows LLMs/Agents to autonomously explore, reason, and complete tasks.
|
|
## Usage
|
|
This dataset can be used to:
|
|
- Evaluate LLMs’ data analysis and code generation capabilities
- Benchmark autonomous reasoning in real-world tasks
- Develop and test multi-step data analysis strategies (a minimal loading sketch follows below)
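
As a quick start, the snippet below sketches how the `test` split declared in this card's `configs` section might be loaded with the `datasets` library. The repository ID shown is a placeholder assumption, not the confirmed Hub path; substitute this dataset's actual identifier.

```python
from datasets import load_dataset

# Load the "test" split defined in the card's configs (backed by test.csv).
# NOTE: "your-org/DA-Code" is a placeholder repository ID -- replace it
# with the actual Hugging Face Hub path of this dataset.
dataset = load_dataset("your-org/DA-Code", split="test")

print(len(dataset))   # number of task records
print(dataset[0])     # inspect a single task record
```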
|
|
## Citation
|
|
If you use this dataset in your research, please cite our paper:
|
|
```

```