Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown, because dataset generation failed with the error below.
The dataset generation failed
Error code: DatasetGenerationError
Exception: TypeError
Message: Couldn't cast array of type
struct<app.py: string, test_app.py: string, edit-test.txt: string, src/__init__.py: string, src/models.py: string, src/utils.py: string, src/views.py: string, tests/__init__.py: string, tests/test_feature.py: string, test_scaffold.py: string, monolith.py: string, test_refactor.py: string>
to
{'solution.py': Value('string'), 'test_solution.py': Value('string')}
Traceback: Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
for key, table in generator:
^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 289, in _generate_tables
self._cast_table(pa_table, json_field_paths=json_field_paths),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 124, in _cast_table
pa_table = table_cast(pa_table, self.info.features.arrow_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2224, in cast_table_to_schema
cast_array_to_feature(
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 1795, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2092, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
TypeError: Couldn't cast array of type
struct<app.py: string, test_app.py: string, edit-test.txt: string, src/__init__.py: string, src/models.py: string, src/utils.py: string, src/views.py: string, tests/__init__.py: string, tests/test_feature.py: string, test_scaffold.py: string, monolith.py: string, test_refactor.py: string>
to
{'solution.py': Value('string'), 'test_solution.py': Value('string')}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1922, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
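The traceback boils down to a schema mismatch: the `initial_files` dict in each JSON row maps file names to contents, and those file names vary from row to row (`solution.py`/`test_solution.py` in some rows, `app.py`, `src/models.py`, etc. in others). Arrow infers a struct type from the keys, so rows with different key sets cannot be cast to the single declared feature schema. One common workaround — a sketch, not the dataset authors' method — is to serialise the dict to a JSON string so every row shares one uniform column type:

```python
import json

def normalize_row(row):
    """Replace the variable-key initial_files dict with a JSON string.

    Hypothetical preprocessing step: with a plain string column, Arrow no
    longer infers a per-row struct type, so all rows share one schema.
    """
    row = dict(row)
    row["initial_files"] = json.dumps(row["initial_files"])
    return row

rows = [
    {"task_id": "BigCodeBench_0",
     "initial_files": {"solution.py": "...", "test_solution.py": "..."}},
    {"task_id": "other_task",
     "initial_files": {"app.py": "...", "src/models.py": "..."}},
]
normalized = [normalize_row(r) for r in rows]
# Reading the mapping back is a single json.loads call:
files = json.loads(normalized[1]["initial_files"])
```

A loader would then call `json.loads` on the column to recover each row's path-to-contents mapping.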
| task_id (string) | prompt (string) | initial_files (dict) | test_type (string) | test_command (string) | expected_output (null) | timeout_seconds (int64) | metadata (dict) |
|---|---|---|---|---|---|---|---|
BigCodeBench_0 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import itertools\nfrom random import shuffle\ndef task_func(numbers=list(range(1, 3))):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch\nfrom random import seed, shuffle\nimport itertools\nclass TestCases(unittest.TestCase):\n def test... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['random', 'itertools']"
} |
BigCodeBench_1 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import collections\nimport random\nimport string\ndef task_func(length=100):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport string\nclass TestCases(unittest.TestCase):\n def setUp(self):\n # Prepare valid characters and set a random seed for reproducib... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['collections', 'random', 'string']"
} |
BigCodeBench_2 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import random\nimport statistics\ndef task_func(LETTERS):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \n def setUp(self):\n # Setting up a common letters array and sorted dictionary for use in all tests\n sel... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['statistics', 'random']"
} |
BigCodeBench_3 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import random\nimport numpy as np\ndef task_func(LETTERS):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\n \nclass TestCases(unittest.TestCase):\n def setUp(self):\n # Common setup for all tests: explicitly define the list of letters\n self.letters ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'random']"
} |
BigCodeBench_4 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from collections import Counter\nimport itertools\ndef task_func(d):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n def test_case_1(self):\n \"\"\"Checks the basic functionality with single-element lists.\"\"\"\n i... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['collections', 'itertools']"
} |
BigCodeBench_5 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import random\nimport math\ndef task_func(LETTERS=[chr(i) for i in range(97, 123)]):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch\nimport math\nimport random\nclass TestCases(unittest.TestCase):\n def setUp(self):\n self.LETT... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['math', 'random']"
} |
BigCodeBench_6 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import os\nimport re\ndef task_func(pattern, log_dir='/var/log/'):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch\nimport os\nimport re\nclass TestCases(unittest.TestCase):\n \n @patch(\"os.listdir\")\n @patch(\"os.path.getmtime... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['re', 'os']"
} |
BigCodeBench_7 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import csv\nimport collections\nimport operator\ndef task_func(csv_file_path):\n",
"test_solution.py": "from solution import task_func\n\nimport os\nimport unittest\nimport csv\nclass TestCases(unittest.TestCase):\n def setUp(self):\n # Create a directory for test files if it does not ex... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['operator', 'csv', 'collections']"
} |
BigCodeBench_8 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from collections import Counter\nimport itertools\nfrom random import randint\ndef task_func(T1, RANGE=100):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom collections import Counter\nclass TestCases(unittest.TestCase):\n def test_case_1(self):\n \"\"\"S... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['collections', 'random', 'itertools']"
} |
BigCodeBench_9 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\ndef task_func(list_of_pairs):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n @staticmethod\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'matplotlib', 'seaborn']"
} |
BigCodeBench_10 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nimport itertools\nimport random\nimport statistics\ndef task_func(T1, RANGE=100):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport numpy as np\nimport statistics\nfrom unittest.mock import patch\nclass TestCases(unittest.TestCase):\n @patch(... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['statistics', 'numpy', 'itertools', 'random']"
} |
BigCodeBench_11 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nimport itertools\nimport random\ndef task_func(T1, max_value=100):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch\nclass TestCases(unittest.TestCase):\n @patch('random.randint')\n def test_case_1(self, mock_rand... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'itertools', 'random']"
} |
BigCodeBench_12 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import subprocess\nimport os\nimport json\nfrom datetime import datetime\ndef task_func(script_name='backup.sh', log_file='/home/user/backup_log.json'):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch, mock_open\nclass TestCases(unittest.... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['subprocess', 'datetime', 'json', 'os']"
} |
BigCodeBench_13 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import subprocess\nimport ftplib\nimport os\ndef task_func(ftp_server='ftp.dlptest.com', ftp_user='dlpuser', ftp_password='rNrKYTX9g7z3RgJRmxWuGHbeu', ftp_dir='/ftp/test'):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch\nimport os\nclass... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['subprocess', 'ftplib', 'os']"
} |
BigCodeBench_14 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import configparser\nimport os\nimport shutil\ndef task_func(config_file_path, archieve_dir ='/home/user/archive'):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport tempfile\nimport shutil\nimport os\nimport configparser\nclass TestCases(unittest.TestCase):\n d... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['configparser', 'shutil', 'os']"
} |
BigCodeBench_15 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import subprocess\nimport csv\nimport os\ndef task_func(commands_file_path, output_dir_path):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport tempfile\nimport shutil\nimport os\nimport csv\nclass TestCases(unittest.TestCase):\n def setUp(self):\n # Setu... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['subprocess', 'csv', 'os']"
} |
BigCodeBench_16 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import os\nimport glob\nimport subprocess\ndef task_func(directory, backup_dir='/path/to/backup'):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport tempfile\nimport os\nimport subprocess\nimport glob\nimport shutil\nclass TestCases(unittest.TestCase):\n def set... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['glob', 'subprocess', 'os']"
} |
BigCodeBench_17 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import subprocess\nimport psutil\nimport time\ndef task_func(process_name: str) -> str:\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch, MagicMock\nclass TestCases(unittest.TestCase):\n @patch('psutil.process_iter')\n @patch('subpro... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['psutil', 'subprocess', 'time']"
} |
BigCodeBench_18 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import subprocess\nimport csv\nimport glob\nimport random\nimport os\ndef task_func(file):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport csv\nimport os\nimport tempfile\nclass TestCases(unittest.TestCase):\n def setUp(self):\n # Create a temporary dir... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['glob', 'subprocess', 'random', 'os', 'csv']"
} |
BigCodeBench_19 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import os\nimport glob\nimport zipfile\ndef task_func(directory):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport os\nimport tempfile\nimport zipfile\nclass TestCases(unittest.TestCase):\n \n def setUp(self):\n \"\"\"Setup a temporary directory befor... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['glob', 'zipfile', 'os']"
} |
BigCodeBench_20 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import ast\nimport pandas as pd\nimport seaborn as sns\ndef task_func(csv_file):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport matplotlib\nimport os\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n def setUp(self... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['ast', 'pandas', 'seaborn']"
} |
BigCodeBench_21 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import psutil\nimport platform\ndef task_func():\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \n def test_presence_OS(self):\n \"\"\"Test that the result has the correct keys and that each key maps to the expected da... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['psutil', 'platform']"
} |
BigCodeBench_22 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import collections\nfrom itertools import zip_longest\nfrom random import choices\ndef task_func(l1, l2, K=10):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport collections\nimport random\nclass TestCases(unittest.TestCase):\n def setUp(self):\n # Set a cons... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['collections', 'random', 'itertools']"
} |
BigCodeBench_23 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nfrom itertools import zip_longest\ndef task_func(l1, l2,THRESHOLD = 0.5):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n def test_case_1(self):\n # Test with two lists of equal length where one element... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'itertools']"
} |
BigCodeBench_24 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import base64\nimport hashlib\nimport os\ndef task_func(password, SALT_LENGTH = 32):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport base64\nimport hashlib\nimport os\nclass TestCases(unittest.TestCase):\n def decode_and_regenerate_password(self, encoded_salt,... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['base64', 'hashlib', 'os']"
} |
BigCodeBench_25 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import base64\nimport json\nimport zlib\ndef task_func(data_dict):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport json\nimport zlib\nimport base64\nclass TestCases(unittest.TestCase):\n def test_case_1(self):\n # Test with a simple dictionary containin... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['zlib', 'base64', 'json']"
} |
BigCodeBench_26 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import base64\nfrom cryptography.fernet import Fernet\ndef task_func(message, encryption_key):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport base64\nfrom cryptography.fernet import Fernet\nclass TestCases(unittest.TestCase):\n def test_case_1(self):\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['base64', 'cryptography']"
} |
BigCodeBench_27 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import json\nimport base64\nfrom datetime import datetime\ndef task_func(data: dict, DATE_FORMAT = \"%Y-%m-%d %H:%M:%S\") -> str:\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport json\nimport base64\nfrom datetime import datetime\nclass TestCases(unittest.TestCase... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['base64', 'json', 'datetime']"
} |
BigCodeBench_28 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import requests\nimport json\nimport base64\ndef task_func(data, url=\"http://your-api-url.com\"):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch, Mock\nimport requests\nimport json\n# Mocking the requests.post method\ndef mock_post(*arg... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['base64', 'requests', 'json']"
} |
BigCodeBench_29 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from sklearn.preprocessing import StandardScaler\nimport numpy as np\nimport base64\ndef task_func(data):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch \nimport numpy as np\nimport base64\nfrom sklearn.preprocessing import StandardScale... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['base64', 'numpy', 'sklearn']"
} |
BigCodeBench_30 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import json\nimport os\nimport re\ndef task_func(\n file_path,\n attribute,\n INPUT_JSON={\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\"type\": str}, \n \"age\": {\"type\": int}, \n \"email\": {\"type\": str} \n },\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['json', 're', 'os']"
} |
BigCodeBench_31 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import nltk\nfrom string import punctuation\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n# Constants\nPUNCTUATION = set(punctuation)\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cas... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['nltk', 'matplotlib', 'string', 'seaborn']"
} |
BigCodeBench_32 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import requests\nfrom bs4 import BeautifulSoup\ndef task_func(url, tag):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch, Mock\nimport requests\nfrom bs4 import BeautifulSoup\nimport os\nclass TestCases(unittest.TestCase):\n @patch('re... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['bs4', 'requests']"
} |
BigCodeBench_33 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nfrom functools import reduce\ndef task_func(list_of_pairs):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport numpy as np\nfrom functools import reduce\nclass TestCases(unittest.TestCase):\n \n def test_case_1(self):\n # Basic test ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'functools']"
} |
BigCodeBench_34 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import re\nfrom wordcloud import WordCloud\nimport matplotlib.pyplot as plt\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n def test_case_1(self):\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['matplotlib', 're', 'wordcloud']"
} |
BigCodeBench_35 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import seaborn as sns\nimport matplotlib.pyplot as plt\ndef task_func(df, target_values=[1, 3, 4]):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n def ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['matplotlib', 'seaborn']"
} |
BigCodeBench_36 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nTARGET_VALUES = np.array([1, 3, 4])\ndef task_func(df):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the t... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'matplotlib', 'scipy']"
} |
BigCodeBench_37 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nfrom sklearn.ensemble import RandomForestClassifier\nimport seaborn as sns\nimport matplotlib.pyplot as plt\ndef task_func(df, target_column):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nclass TestCases(unittest.TestCase):\n... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['sklearn', 'matplotlib', 'seaborn']"
} |
BigCodeBench_38 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nfrom sklearn.preprocessing import StandardScaler\nimport matplotlib.pyplot as plt\n# Constants\nFEATURE_NAMES = [\"Feature 1\", \"Feature 2\", \"Feature 3\", \"Feature 4\", \"Feature 5\"]\ndef task_func(data_matrix):\n",
"test_solution.py": "from solution import task_func\n\ni... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'matplotlib', 'sklearn']"
} |
BigCodeBench_39 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nfrom scipy.stats import ttest_1samp\nimport matplotlib.pyplot as plt\n# Constants\nALPHA = 0.05\ndef task_func(data_matrix):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func f... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'matplotlib', 'scipy']"
} |
BigCodeBench_40 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport seaborn as sns\nfrom scipy.stats import zscore\ndef task_func(data_matrix):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport numpy as np\nimport matplotlib\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func fun... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'scipy', 'seaborn']"
} |
BigCodeBench_41 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy.stats import skew\ndef task_func(data_matrix):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport os\nimport numpy as np\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func fun... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'matplotlib', 'scipy']"
} |
BigCodeBench_42 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\ndef task_func(data_matrix, n_components=2):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport numpy as np\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'matplotlib', 'sklearn']"
} |
BigCodeBench_43 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nimport seaborn as sns\ndef task_func(df):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the f_112 function.\"\"\"\n def setUp(self):\n # Generating more co... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'seaborn']"
} |
BigCodeBench_44 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from sklearn.preprocessing import MinMaxScaler\nimport matplotlib.pyplot as plt\ndef task_func(df):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nimport numpy as np\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func funct... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['matplotlib', 'sklearn']"
} |
BigCodeBench_45 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport numpy as np\nfrom sklearn.decomposition import PCA\nimport seaborn as sns\nimport matplotlib.pyplot as plt\ndef task_func(df: pd.DataFrame):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test c... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'matplotlib', 'numpy', 'sklearn', 'seaborn']"
} |
BigCodeBench_46 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from scipy.stats import zscore\nimport matplotlib.pyplot as plt\ndef task_func(df):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nimport numpy as np\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['matplotlib', 'scipy']"
} |
BigCodeBench_47 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from sklearn.preprocessing import StandardScaler\nimport seaborn as sns\nimport matplotlib.pyplot as plt\ndef task_func(df):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['sklearn', 'matplotlib', 'seaborn']"
} |
BigCodeBench_48 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import time\nfrom datetime import datetime\nimport random\nimport matplotlib.pyplot as plt\n# Constants\nDATE_FORMAT = \"%Y-%m-%d %H:%M:%S\"\ndef task_func(n, output_path=None):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport os\nclass TestCases(unittest.TestCase... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['datetime', 'random', 'matplotlib', 'time']"
} |
BigCodeBench_49 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from datetime import datetime\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# Constants\nDATE_FORMAT = \"%Y-%m-%d %H:%M:%S\"\ndef task_func(timestamps):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases fo... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'datetime', 'matplotlib']"
} |
BigCodeBench_50 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from datetime import datetime\nimport pandas as pd\nimport pytz\nimport matplotlib.pyplot as plt\n# Constants\nDATE_FORMAT = \"%Y-%m-%d %H:%M:%S\"\nTIMEZONES = [\n \"America/New_York\",\n \"Europe/London\",\n \"Asia/Shanghai\",\n \"Asia/Tokyo\",\n \"Australia/Sydney\",\n]\ndef task_fu... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pytz', 'pandas', 'datetime', 'matplotlib']"
} |
BigCodeBench_51 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "from sklearn.cluster import KMeans\nimport matplotlib.pyplot as plt\ndef task_func(df, age: int, height: int):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport pandas as pd\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['matplotlib', 'sklearn']"
} |
BigCodeBench_52 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport regex as re\n# Constants\nSTOPWORDS = [\"a\", \"an\", \"the\", \"in\", \"is\", \"are\"]\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['regex', 'pandas']"
} |
BigCodeBench_53 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport regex as re\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nCOLUMN_NAMES = [\"Name\", \"Email\", \"Age\", \"Country\"]\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['regex', 'pandas', 'matplotlib', 'seaborn']"
} |
BigCodeBench_54 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport regex as re\nfrom sklearn.feature_extraction.text import CountVectorizer\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n de... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['regex', 'pandas', 'sklearn']"
} |
BigCodeBench_55 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import re\nimport pandas as pd\nSTOPWORDS = [\"Those\", \"are\", \"the\", \"words\", \"to\", \"ignore\"]\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 're']"
} |
BigCodeBench_56 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport regex as re\ndef task_func(text):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n def test_case_1(self):\n text = \"Score: 85, Category: M... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['regex', 'pandas']"
} |
BigCodeBench_57 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\ndef task_func(csv_file_path: str, title: str):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport os\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'matplotlib', 'seaborn']"
} |
BigCodeBench_58 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\ndef task_func(mu, sigma, num_samples):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n def test... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['numpy', 'matplotlib', 'scipy']"
} |
BigCodeBench_59 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import wikipedia\nfrom wordcloud import WordCloud\nimport matplotlib.pyplot as plt\ndef task_func(page_title):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nfrom unittest.mock import patch\nclass A :\n def __init__(self, content) -> None:\n self.content = co... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['wikipedia', 'matplotlib', 'wordcloud']"
} |
BigCodeBench_60 | Implement the function `task_func` in solution.py based on the docstring. Make all tests in test_solution.py pass. Do NOT modify test_solution.py. | {
"solution.py": "import json\nimport pandas as pd\ndef task_func(result, csv_file_path=\"test.csv\", json_file_path=\"test.json\"):\n",
"test_solution.py": "from solution import task_func\n\nimport unittest\nimport os\nclass TestCases(unittest.TestCase):\n \"\"\"Test cases for the task_func function.\"\"\"\n ... | pytest | python3 -m pytest test_solution.py -v | null | 180 | {
"difficulty": "hard",
"language": "python",
"source": "BigCodeBench",
"entry_point": "task_func",
"libraries": "['pandas', 'json']"
} |
End of preview.
CC-Arena Benchmark Dataset
Full benchmark datasets for CC-Arena — a framework for evaluating AI coding agents (Claude Code, Cursor, etc.).
Quick Start
Via CC-Arena CLI (recommended)
# Download a specific benchmark
python3 -m cc_arena.tasks.downloader download humaneval
# Download with limit
python3 -m cc_arena.tasks.downloader download bigcodebench --limit 100
# List all available benchmarks
python3 -m cc_arena.tasks.downloader list
Via huggingface_hub
from huggingface_hub import hf_hub_download
path = hf_hub_download(
repo_id="songjhPKU/cc-arena-dataset",
filename="humaneval.jsonl",
repo_type="dataset",
)
Via Direct URL
wget https://huggingface.co/datasets/songjhPKU/cc-arena-dataset/resolve/main/humaneval.jsonl
Dataset Structure
Each benchmark is a single JSONL file. One line = one task:
| File | Tasks | Difficulty | Description |
|---|---|---|---|
| `humaneval.jsonl` | 164 | Easy | OpenAI HumanEval function-level code generation |
| `bigcodebench.jsonl` | 1140 | Hard | BigCodeBench-Hard multi-library tasks |
| `naturalcodebench.jsonl` | ~280 | Medium | NaturalCodeBench Python + Java real-world tasks |
| `devbench.jsonl` | ~22 | Hard | DevBench multi-stage software engineering projects |
| `custom.jsonl` | 10+ | Mixed | CC-Arena custom tasks (replaycode + engineering) |
| `swebench_lite.jsonl` | 5 | Medium | SWE-bench style bug-fixing tasks |
| `builtin.jsonl` | 3 | Easy | Smoke tests for verifying CC-Arena works |
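Because every line of a benchmark file is a standalone JSON object, a file can be loaded with nothing but the standard library. The sketch below parses a JSONL file and tallies tasks by difficulty; `load_benchmark` is a hypothetical helper name (not part of CC-Arena), and the two-row sample file stands in for a real download such as `humaneval.jsonl`.

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def load_benchmark(path):
    """Parse a benchmark JSONL file: each non-empty line is one task dict."""
    tasks = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                tasks.append(json.loads(line))
    return tasks

# Illustrative two-task file; real files follow the same one-object-per-line shape.
sample = [
    {"task_id": "HumanEval_0", "metadata": {"difficulty": "easy"}},
    {"task_id": "BigCodeBench_38", "metadata": {"difficulty": "hard"}},
]
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "sample.jsonl"
    path.write_text("\n".join(json.dumps(t) for t in sample), encoding="utf-8")
    tasks = load_benchmark(path)
    by_difficulty = Counter(t["metadata"]["difficulty"] for t in tasks)
    print(len(tasks), dict(by_difficulty))  # 2 {'easy': 1, 'hard': 1}
```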
JSONL Schema
Each line is a JSON object with these fields:
{
"task_id": "HumanEval_0",
"prompt": "Implement the function `has_close_elements` in solution.py. ...",
"initial_files": {
"solution.py": "from typing import List\n\ndef has_close_elements(...): ...",
"test_solution.py": "from solution import has_close_elements\n..."
},
"test_type": "pytest",
"test_command": "python3 -m pytest test_solution.py -v",
"expected_output": null,
"timeout_seconds": 120,
"metadata": {
"difficulty": "easy",
"language": "python",
"source": "HumanEval"
}
}
Field Reference
| Field | Type | Description |
|---|---|---|
| `task_id` | string | Unique task identifier |
| `prompt` | string | Instructions given to the coding agent |
| `initial_files` | dict | Files placed in the workspace before the agent runs (`{path: content}`) |
| `test_type` | string | One of `pytest`, `stdout_contains`, `file_contains`, `file_exists`, `exit_code` |
| `test_command` | string | Command to run tests |
| `expected_output` | string/null | Expected output for non-pytest test types |
| `timeout_seconds` | int | Task timeout in seconds |
| `metadata` | dict | Additional info (difficulty, language, source, etc.) |
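A quick structural check against the field reference can catch malformed rows before a run. The `validate_task` helper below is a hypothetical sketch, not CC-Arena code; it assumes exactly the required fields and types listed above.

```python
# Required fields and their expected Python types (per the field reference).
REQUIRED_FIELDS = {
    "task_id": str,
    "prompt": str,
    "initial_files": dict,
    "test_type": str,
    "test_command": str,
    "timeout_seconds": int,
    "metadata": dict,
}

def validate_task(task):
    """Return None if a parsed row matches the schema, else an error string."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in task:
            return f"missing field: {field}"
        if not isinstance(task[field], expected_type):
            return f"wrong type for {field}"
    # expected_output may be null (e.g. for pytest tasks) or a string.
    if task.get("expected_output") is not None and not isinstance(task["expected_output"], str):
        return "expected_output must be a string or null"
    return None

row = {
    "task_id": "HumanEval_0",
    "prompt": "Implement the function ...",
    "initial_files": {"solution.py": "...", "test_solution.py": "..."},
    "test_type": "pytest",
    "test_command": "python3 -m pytest test_solution.py -v",
    "expected_output": None,
    "timeout_seconds": 120,
    "metadata": {"difficulty": "easy", "language": "python", "source": "HumanEval"},
}
print(validate_task(row))  # None: the row is well-formed
```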
How It Works
1. The CC-Arena downloader fetches a JSONL file from this dataset.
2. For each row, it creates a task directory:

   benchmarks/<benchmark>/tasks/<task_id>/
   ├── task.yaml          # Generated from metadata
   ├── solution.py        # From initial_files
   ├── test_solution.py   # From initial_files
   └── (other files)      # From initial_files

3. The agent receives the `prompt` and works in the task directory.
4. Tests are run according to `test_type` and `test_command`.
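The materialize-and-test workflow above can be sketched in a few lines: write `initial_files` into a fresh directory, then run `test_command` there under the task's timeout. The helper names (`materialize_task`, `run_tests`) and the demo task are hypothetical, and CC-Arena's real downloader also generates `task.yaml`, which is omitted here.

```python
import shlex
import subprocess
import tempfile
from pathlib import Path

def materialize_task(task, root):
    """Write each initial_files entry into a per-task workspace directory."""
    task_dir = Path(root) / task["task_id"]
    for rel_path, content in task["initial_files"].items():
        dest = task_dir / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)  # handles nested paths like src/models.py
        dest.write_text(content, encoding="utf-8")
    return task_dir

def run_tests(task, task_dir):
    """Run test_command inside the workspace, honoring the task's timeout."""
    return subprocess.run(
        shlex.split(task["test_command"]),
        cwd=task_dir,
        capture_output=True,
        text=True,
        timeout=task["timeout_seconds"],
    )

# Minimal demo task (hypothetical, not from the dataset).
task = {
    "task_id": "demo_task",
    "initial_files": {"solution.py": "def add(a, b):\n    return a + b\n"},
    "test_command": 'python3 -c "from solution import add; assert add(1, 2) == 3"',
    "timeout_seconds": 30,
}
with tempfile.TemporaryDirectory() as tmp:
    result = run_tests(task, materialize_task(task, tmp))
```

`result.returncode` is 0 when the checks pass; stdout and stderr are captured for logging.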
License
Apache 2.0
Citation
@misc{cc-arena,
title={CC-Arena: A Framework for Evaluating AI Coding Agents},
url={https://github.com/songjhPKU/cc-arena},
year={2026}
}