BioASQ-BEIR Vector Database Dataset (4096d, 1M)
A generated embeddings dataset for vector database training and evaluation.
Dataset Summary
This dataset contains 1,000,000 text samples with vector embeddings (4096 dimensions) generated from the BioASQ-BEIR dataset using Qwen/Qwen3-Embedding-8B.
Dataset Structure
- Base dataset: 1,000,000 samples with embeddings
- Embedding dimension: 4096
Repository Structure
parquet/
base.parquet - Main dataset with text and embeddings
Usage
import numpy as np
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("maknee/bioasq_bier_4096_1m")
base_data = dataset["train"]

# Access the texts and their embeddings
embeddings = np.array(base_data["embedding"])
texts = base_data["text"]
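Once the embeddings are loaded as a NumPy array, they can be searched directly with brute-force cosine similarity. The sketch below is a minimal, self-contained example of that pattern; the `top_k_cosine` helper and the 3-dimensional toy corpus are illustrative assumptions, not part of the dataset (the real embeddings have 4096 dimensions).

```python
import numpy as np

def top_k_cosine(query: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k embeddings most similar to the query by cosine similarity."""
    # Normalize both sides so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = (embeddings / norms) @ q
    # argsort is ascending; take the last k and reverse for descending similarity.
    return np.argsort(sims)[-k:][::-1]

# Toy 3-dimensional corpus standing in for the 4096-dimensional embeddings.
corpus = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.7, 0.7, 0.0]])
print(top_k_cosine(np.array([1.0, 0.1, 0.0]), corpus, k=2))  # → [0 2]
```

For the full 1M-vector dataset, an approximate index (e.g. FAISS or HNSW) is usually preferable to this exact scan.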
Dataset Information
- Source: BioASQ-BEIR
- Size: 1,000,000 samples
- Dimension: 4096
- Format: Parquet
Citation
@dataset{huggingface_embeddings_maknee_bioasq_bier_4096_1m,
title={BioASQ-BEIR Vector Database Embeddings Dataset},
author={Henry Zhu},
year={2026},
url={https://huggingface.co/datasets/maknee/bioasq_bier_4096_1m}
}
License
MIT License.