Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowCapacityError
Message:      array cannot contain more than 2147483646 bytes, have 12052107539
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1594, in _prepare_split_single
                  writer.write(example)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 682, in write
                  self.write_examples_on_file()
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 655, in write_examples_on_file
                  self._write_batch(batch_examples=batch_examples)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 752, in _write_batch
                  arrays.append(pa.array(typed_sequence))
                                ^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/array.pxi", line 256, in pyarrow.lib.array
                File "pyarrow/array.pxi", line 118, in pyarrow.lib._handle_arrow_array_protocol
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 291, in __arrow_array__
                  out = self._arrow_array(type=type)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 340, in _arrow_array
                  out = pa.array(cast_to_python_objects(examples, only_1d_for_numpy=True))
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
                File "pyarrow/array.pxi", line 46, in pyarrow.lib._sequence_to_array
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 12052107539
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1607, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                                            ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 783, in finalize
                  self.write_examples_on_file()
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 655, in write_examples_on_file
                  self._write_batch(batch_examples=batch_examples)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 752, in _write_batch
                  arrays.append(pa.array(typed_sequence))
                                ^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/array.pxi", line 256, in pyarrow.lib.array
                File "pyarrow/array.pxi", line 118, in pyarrow.lib._handle_arrow_array_protocol
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 291, in __arrow_array__
                  out = self._arrow_array(type=type)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 340, in _arrow_array
                  out = pa.array(cast_to_python_objects(examples, only_1d_for_numpy=True))
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
                File "pyarrow/array.pxi", line 46, in pyarrow.lib._sequence_to_array
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 12052107539
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1342, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 907, in stream_convert_to_parquet
                  builder._prepare_split(split_generator=splits_generators[split], file_format="parquet")
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1438, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1616, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
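The underlying limit is Arrow's 32-bit offset type: a plain `binary` array chunk cannot hold more than 2147483646 bytes, while a single row here carries roughly 12 GB. A minimal illustration of that limit (a sketch only, unrelated to fixing the viewer's converter; it briefly allocates ~2 GiB of RAM):

```python
import pyarrow as pa

# One ~2 GiB value overflows the 32-bit offsets of a plain binary array,
# raising the same ArrowCapacityError seen in the traceback above.
big = b"x" * (2**31)
try:
    pa.array([big], type=pa.binary())
except pa.lib.ArrowCapacityError as err:
    print(err)  # array cannot contain more than 2147483646 bytes ...

# large_binary uses 64-bit offsets, so the same payload fits in one chunk.
arr = pa.array([big], type=pa.large_binary())
print(arr.nbytes)
```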


Preview schema (three columns per row):

  gz        unknown — binary payload, shown base64-encoded and truncated in the preview
  __key__   string — e.g. homo_sapiens_merged/115_GRCh37/1/1-1000000, with a _reg suffix for regulatory-feature shards
  __url__   string — e.g. "hf://datasets/pzweuj/SchemaBio_Bundle@2f0de0191165e3e0af6788ef17943a96cd76c019/hg19/homo_sapiens_me(...TRUNCATED)

Each key names a 1 Mb genomic window on GRCh37. The previewed payloads decode to Perl Storable data (pst0 magic) referencing Bio::EnsEMBL::Transcript and Bio::EnsEMBL::Funcgen::RegulatoryFeature objects, i.e. serialized VEP cache shards.
End of preview.
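Even though the Parquet conversion fails, the shards can still be read lazily. A hedged sketch using 🤗 Datasets streaming — the automatic WebDataset-style builder and the `train` split name are assumptions, not confirmed by this card:

```python
from datasets import load_dataset

# Stream rows one by one; nothing is materialized into a single Arrow
# table, so the 2 GiB per-array limit above is never hit.
ds = load_dataset("pzweuj/SchemaBio_Bundle", split="train", streaming=True)

for row in ds:
    key = row["__key__"]  # e.g. "homo_sapiens_merged/115_GRCh37/1/1-1000000"
    blob = row["gz"]      # binary payload for that 1 Mb window
    print(key, len(blob))
    break
```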

🧬 SchemaBio Bundle


📖 Overview

This dataset is a comprehensive, carefully curated, one-stop annotation bundle designed for bioinformatics and genomic variant analysis (e.g. WES/WGS pipelines). It consolidates core variant-effect predictors, clinical databases, and genome reference files into a unified repository, greatly reducing the time spent on data collection, format alignment, and preprocessing.

🗂️ Included Data and Sources

This repository contains the following categories of data (a selective-download sketch follows the list):

  1. Variant-effect predictors and scores: AlphaMissense, EVOScore2, Pangolin.
  2. Clinical and population-frequency data: gnomAD, CIViC, mskcc_hotspot.
  3. Genome references and regions: UCSC annotation files (cytoband, ncbiRefSeq.gtf), Ensembl reference files (GRCh38/37 FASTA, vep_cache), ManeSelectBed.
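To fetch a single component rather than the whole multi-gigabyte bundle, `huggingface_hub.snapshot_download` with `allow_patterns` is one option. A sketch; the `hg19/**` pattern is inferred from the preview URLs above and may need adjusting to the actual repository layout:

```python
from huggingface_hub import snapshot_download

# Download only files under hg19/ (e.g. the GRCh37 VEP cache shards).
local_dir = snapshot_download(
    repo_id="pzweuj/SchemaBio_Bundle",
    repo_type="dataset",
    allow_patterns=["hg19/**"],
)
print(local_dir)
```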

⚖️ License and Commercial Use

Good news: every upstream dataset included in this repository currently permits commercial use, provided users strictly comply with the respective attribution and share-alike requirements.

Because this repository acts as an aggregator, it is not governed by a single license. You must comply with the individual license of each upstream data source. The table below summarizes the license of each component:

📊 Database License Summary

| Database / Source | Version | Official License | Commercial Use | Redistribution | Key Requirements and Notes |
|---|---|---|---|---|---|
| AlphaMissense | v3 | CC BY 4.0 | ✅ Allowed | ✅ Allowed | Must attribute DeepMind Technologies Limited. |
| EVOScore2 | 20260128 | MIT | ✅ Allowed | ✅ Allowed | Must include the original copyright notice. |
| Pangolin | v1 | CC BY 4.0 | ✅ Allowed | ✅ Allowed | Must provide appropriate attribution. |
| ManeSelectBed | v49 | MIT | ✅ Allowed | ✅ Allowed | Must include the original copyright notice. |
| mskcc_hotspot | v2 | ODbL | ✅ Allowed | ✅ Allowed | Share-alike: if you modify/adapt this database, derivative works must also be licensed under ODbL. |
| CIViC | 20260301 | CC0 1.0 | ✅ Allowed | ✅ Allowed | Public domain. No restrictions. |
| gnomAD | v4.1 | No restrictions | ✅ Allowed | ✅ Allowed | The Broad Institute states no restrictions on use. |
| UCSC Data | / | Public domain | ✅ Allowed | ✅ Allowed | Publicly funded data, freely usable for any purpose. |
| Ensembl Data | release 115 | No restrictions | ✅ Allowed | ✅ Allowed | EMBL-EBI places no restrictions on use or redistribution. |

Disclaimer: the creator of this combined dataset claims no ownership of the original source data. Users are responsible for ensuring that their downstream use complies with the terms of the original data providers.
