Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    CastError
Message:      Couldn't cast
repo: string
instance_id: string
base_commit: string
patch: string
test_patch: string
problem_statement: string
hint_text: null
created_at: string
closed_at: string
version: string
FAIL_TO_PASS: list<element: string>
  child 0, element: string
PASS_TO_PASS: list<element: string>
  child 0, element: string
environment_setup_commit: null
command_build: string
command_test: string
image_name: string
meta: struct<issue_number: int64, merge_commit: string, merged_at: string, pr_number: int64, score: struct<complexity: int64, task_correctness: int64, test_completeness: int64, test_correctness: int64>, timeout_build: int64, timeout_test: int64, type: string, url: struct<diff: string, issue: string, pr: string>>
  child 0, issue_number: int64
  child 1, merge_commit: string
  child 2, merged_at: string
  child 3, pr_number: int64
  child 4, score: struct<complexity: int64, task_correctness: int64, test_completeness: int64, test_correctness: int64>
      child 0, complexity: int64
      child 1, task_correctness: int64
      child 2, test_completeness: int64
      child 3, test_correctness: int64
  child 5, timeout_build: int64
  child 6, timeout_test: int64
  child 7, type: string
  child 8, url: struct<diff: string, issue: string, pr: string>
      child 0, diff: string
      child 1, issue: string
      child 2, pr: string
sample_type: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 2451
to
{'repo': Value('string'), 'instance_id': Value('string'), 'base_commit': Value('string'), 'patch': Value('string'), 'test_patch': Value('string'), 'problem_statement': Value('null'), 'hint_text': Value('null'), 'created_at': Value('string'), 'closed_at': Value('string'), 'version': Value('string'), 'FAIL_TO_PASS': List(Value('null')), 'PASS_TO_PASS': List(Value('string')), 'environment_setup_commit': Value('null'), 'command_build': Value('string'), 'command_test': Value('string'), 'image_name': Value('string'), 'meta': {'issue_number': Value('int64'), 'merge_commit': Value('string'), 'merged_at': Value('string'), 'pr_number': Value('int64'), 'score': {'complexity': Value('int64'), 'task_correctness': Value('int64'), 'test_completeness': Value('int64'), 'test_correctness': Value('int64')}, 'timeout_build': Value('int64'), 'timeout_test': Value('int64'), 'type': Value('string'), 'url': {'diff': Value('string'), 'issue': Value('string'), 'pr': Value('string')}}}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1405, in compute_config_parquet_and_info_response
                  fill_builder_info(builder, hf_endpoint=hf_endpoint, hf_token=hf_token, validate=validate)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 578, in fill_builder_info
                  ) = retry_validate_get_features_num_examples_size_and_compression_ratio(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 497, in retry_validate_get_features_num_examples_size_and_compression_ratio
                  validate(pf)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 535, in validate
                  raise TooBigRowGroupsError(
              worker.job_runners.config.parquet_and_info.TooBigRowGroupsError: Parquet file has too big row groups. First row group has 933526924 which exceeds the limit of 300000000
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 692, in wrapped
                  for item in generator(*args, **kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              (parquet schema and target features identical to those listed above)
              because column names don't match
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1428, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 994, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Column                     Type
------                     ----
repo                       string
instance_id                string
base_commit                string
patch                      string
test_patch                 string
problem_statement          null
hint_text                  null
created_at                 string
closed_at                  string
version                    string
FAIL_TO_PASS               list
PASS_TO_PASS               list
environment_setup_commit   null
command_build              string
command_test               string
image_name                 string
meta                       dict
Row 1:
  repo: ansys/pydyna
  instance_id: ansys__pydyna-720
  base_commit: f7c50f6978985b174a4e0de2fb78805f730c39a4
  patch: diff --git a/doc/changelog/720.fixed.md b/doc/changelog/720.fixed.md new file mode 100644 index 00000000..c2563475 --- /dev/null +++ b/doc/changelog/720.fixed.md @@ -0,0 +1 @@ +fix: deck.get() in the presence of Encrypted keywords \ No newline at end of file diff --git a/src/ansys/dyna/core/lib/deck.py b/src/ansys/dyna...
  test_patch: diff --git a/tests/test_deck.py b/tests/test_deck.py index 4286e2b2..cd3528a2 100644 --- a/tests/test_deck.py +++ b/tests/test_deck.py @@ -459,9 +459,11 @@ def test_deck_encrypted_import_expand(file_utils): """Import an encrypted file as a deck.""" deck = Deck() filename = file_utils.assets_folder / "tes...
  problem_statement: null
  hint_text: null
  created_at: 2025-02-17T15:25:37Z
  closed_at: 2025-02-17T15:53:52Z
  version: 0.4.6
  FAIL_TO_PASS: []
  PASS_TO_PASS: [ "tests/test_deck.py::test_deck_long_deck_write_standard", "tests/test_deck.py::test_control_debug", "tests/test_deck.py::test_deck_long_deck_write_noargs", "tests/test_deck.py::test_deck_expand_transform", "tests/test_deck.py::test_deck_load_title", "tests/test_deck.py::test_deck_expand_recursive_include_...
  environment_setup_commit: null
  command_build: pip install -e .; pip install pytest pytest-json-report;
  command_test: pytest --json-report --json-report-file=report_pytest.json
  image_name: python:3.11
  meta: { "issue_number": 719, "merge_commit": "2f1cadab738e6e1ba35df7cdf9170ebe71d8eb2b", "merged_at": "2025-02-17T15:53:51Z", "pr_number": 720, "score": { "complexity": -1, "task_correctness": -1, "test_completeness": -1, "test_correctness": -1 }, "timeout_build": 3000, "timeout_test": 600, ...

Row 2:
  repo: DataBiosphere/toil
  instance_id: DataBiosphere__toil-5221
  base_commit: 88d656ce0cec4e5e6b215596c47218321fd9cb7a
  patch: diff --git a/src/toil/batchSystems/slurm.py b/src/toil/batchSystems/slurm.py index 3ddddff6..df6ebc9b 100644 --- a/src/toil/batchSystems/slurm.py +++ b/src/toil/batchSystems/slurm.py @@ -13,6 +13,7 @@ # limitations under the License. from __future__ import annotations +import errno import logging import math imp...
  test_patch: diff --git a/src/toil/test/batchSystems/test_slurm.py b/src/toil/test/batchSystems/test_slurm.py index b2818359..2c6aaeb4 100644 --- a/src/toil/test/batchSystems/test_slurm.py +++ b/src/toil/test/batchSystems/test_slurm.py @@ -1,3 +1,4 @@ +import errno import textwrap from queue import Queue @@ -29,6 +30,9 @@ def c...
  problem_statement: null
  hint_text: null
  created_at: 2024-08-19T13:45:15Z
  closed_at: 2025-03-04T18:26:32Z
  version: 0.4.6
  FAIL_TO_PASS: []
  PASS_TO_PASS: [ "src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_PartitionSet_get_partition", "src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobDetailsFromSacct_one_not_exists", "src/toil/test/batchSystems/test_slurm.py::SlurmTest::test_getJobExitCode_job_not_exists", "src/toil/test/batchSystems/test...
  environment_setup_commit: null
  command_build: pip install -e .; pip install pytest pytest-json-report;
  command_test: pytest --json-report --json-report-file=report_pytest.json
  image_name: python:3.11
  meta: { "issue_number": 5064, "merge_commit": "deb243bd78f49519189bcec75df0bad9b983e7dc", "merged_at": "2025-03-04T18:26:30Z", "pr_number": 5221, "score": { "complexity": -1, "task_correctness": -1, "test_completeness": -1, "test_correctness": -1 }, "timeout_build": 3000, "timeout_test": 600, ...

YAML Metadata Warning: empty or missing YAML metadata in the repo card.


README.md exists but content is empty.
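
With README.md empty, the card has no `dataset_info` block, so the viewer works from inferred types and infers null wherever a column's sampled values are all empty. A hedged sketch of YAML front matter that declares string types matching the files, abbreviated to a few columns:

```yaml
---
dataset_info:
  features:
    - name: repo
      dtype: string
    - name: problem_statement
      dtype: string   # inferred as null from all-empty values
    - name: hint_text
      dtype: string
    - name: FAIL_TO_PASS
      sequence: string
    - name: environment_setup_commit
      dtype: string
---
```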
Downloads last month: 24