# broadinstitute/axonet-vae-stage1
Multi-view rendered images of neuronal morphologies from NeuroMorpho.org, prepared for training segmentation and multimodal (CLIP-style) models.
This dataset contains 7,158 curated neurons rendered from 24 viewpoints each, totaling approximately 164,000 images. The repository is organized as follows:
```
data/
  curated_manifest.jsonl        # 7,158 neurons (QC'd, species-balanced)
  full_manifest.jsonl           # 164,016 images (all views)
  metadata.jsonl                # NeuroMorpho metadata per neuron
images/
  [neuron_id]/
    [neuron_id]_0000_mask_bw.png     # Binary mask
    [neuron_id]_0000_mask.png        # Semantic segmentation
    [neuron_id]_0000_mask_color.png  # Color-coded visualization
    [neuron_id]_0000_depth.png       # Depth map
    ... (24 views per neuron)
provenance/
  curation_report.txt           # QC statistics
  download_log.jsonl            # Download metadata
  render_config.json            # Rendering parameters
```
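Given the layout above, image filenames can be split back into their parts. The sketch below is a hypothetical helper (not part of the dataset's tooling) that assumes the `[neuron_id]_NNNN_<kind>.png` pattern with a 4-digit zero-padded view index; note that neuron identifiers themselves may contain underscores (e.g. `10024_ADLR.CNG`), so the view index is located by its fixed shape rather than by a simple split.

```python
import re

# Matches [neuron_id]_NNNN_<kind>.png. `mask_bw` and `mask_color` are
# listed before `mask` so the alternation prefers the longer suffixes.
FILENAME_RE = re.compile(
    r"^(?P<neuron_id>.+)_(?P<idx>\d{4})_(?P<kind>mask_bw|mask_color|mask|depth)\.png$"
)

def parse_render_filename(name: str) -> tuple[str, int, str]:
    """Split a rendered-image filename into (neuron_id, view index, kind)."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized filename: {name}")
    return m.group("neuron_id"), int(m.group("idx")), m.group("kind")

print(parse_render_filename("10024_ADLR.CNG_0003_mask_bw.png"))
# -> ('10024_ADLR.CNG', 3, 'mask_bw')
```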
### Curated Manifest (`curated_manifest.jsonl`)
One record per neuron (7,158 total):
- `neuron_id`: Unique NeuroMorpho.org identifier (integer)
- `neuron_name`: NeuroMorpho.org name string
- `swc`: Path to SWC morphology file
- `species`: Species name
- `brain_region`: List of anatomical regions
- `cell_type`: List of cell type classifications
- `archive`: Source archive/lab
- `physical_Integrity`: Data quality annotation

### Full Manifest (`full_manifest.jsonl`)
One record per rendered view (~164K total):
- `neuron_id`: Neuron identifier (string)
- `swc`: SWC filename
- `mask`: Path to semantic segmentation mask
- `mask_bw`: Path to binary mask
- `depth`: Path to depth map
- `idx`: View index (0-23)
- `camera`: Camera parameters (eye, target, up, fovy, etc.)
- `qc_fraction`: Quality control score
- `view_tier`: View classification (canonical, etc.)

### Metadata (`metadata.jsonl`)
Full NeuroMorpho.org metadata per neuron:
- `neuron_id`: NeuroMorpho.org identifier
- `species`: Species (mouse, rat, human, etc.)
- `brain_region`: Brain region(s)
- `cell_type`: Cell type classification
- `archive`: Source archive/lab
- `morphometrics`: Quantitative measurements (soma surface, total length, etc.)

### Species Distribution

| Species | Count | Percentage |
|---|---|---|
| mouse | 2,000 | 27.9% |
| rat | 2,000 | 27.9% |
| human | 1,036 | 14.5% |
| chimpanzee | 257 | 3.6% |
| giraffe | 207 | 2.9% |
| Other (26 species) | 1,658 | 23.2% |
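The per-species percentages above can be recomputed directly from the curated manifest. A minimal sketch, using in-memory stand-ins for manifest records (in practice, load each line of `data/curated_manifest.jsonl` with `json.loads`):

```python
from collections import Counter

# Illustrative neuron records standing in for lines of
# data/curated_manifest.jsonl (one JSON object per line).
neurons = [
    {"neuron_id": 1, "species": "mouse"},
    {"neuron_id": 2, "species": "mouse"},
    {"neuron_id": 3, "species": "rat"},
    {"neuron_id": 4, "species": "human"},
]

counts = Counter(n["species"] for n in neurons)
total = len(neurons)
for species, count in counts.most_common():
    print(f"{species}: {count} ({100 * count / total:.1f}%)")
# mouse: 2 (50.0%)
# rat: 1 (25.0%)
# human: 1 (25.0%)
```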
```python
import json

# Load curated manifest (neuron-level, 7,158 neurons)
with open("data/curated_manifest.jsonl") as f:
    neurons = [json.loads(line) for line in f]
print(f"Loaded {len(neurons)} neurons")
# Example: neurons[0] = {"neuron_id": 84160, "species": "African wild dog", ...}

# Load full manifest (image-level, ~164K views)
with open("data/full_manifest.jsonl") as f:
    images = [json.loads(line) for line in f]
print(f"Loaded {len(images)} image records")
# Example: images[0] = {"neuron_id": "10024_ADLR.CNG", "mask": "..._mask.png", ...}

# Load metadata (full NeuroMorpho.org metadata), keyed by neuron id/name
metadata = {}
with open("data/metadata.jsonl") as f:
    for line in f:
        record = json.loads(line)
        nid = record.get("neuron_id") or record.get("neuron_name")
        metadata[str(nid)] = record
```
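For training, it is often useful to group the per-view records by neuron so that a neuron's views can be sampled together, optionally filtered by `view_tier`. A sketch under the field names documented above, again using illustrative stand-in records rather than the real manifest:

```python
from collections import defaultdict

# Illustrative per-view records standing in for lines of
# data/full_manifest.jsonl.
views = [
    {"neuron_id": "10024_ADLR.CNG", "idx": 0, "mask": "a_0000_mask.png", "view_tier": "canonical"},
    {"neuron_id": "10024_ADLR.CNG", "idx": 1, "mask": "a_0001_mask.png", "view_tier": "random"},
    {"neuron_id": "84160", "idx": 0, "mask": "b_0000_mask.png", "view_tier": "canonical"},
]

# Group views by neuron_id.
by_neuron = defaultdict(list)
for v in views:
    by_neuron[v["neuron_id"]].append(v)

# Keep only canonical views, sorted by view index, per neuron.
canonical = {
    nid: sorted((v for v in vs if v["view_tier"] == "canonical"),
                key=lambda v: v["idx"])
    for nid, vs in by_neuron.items()
}
print({nid: [v["idx"] for v in vs] for nid, vs in canonical.items()})
```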
```bibtex
@misc{axonet2025,
  author       = {Hall, Giles},
  title        = {AxoNet: Multimodal Neuron Morphology Embeddings via 2D Projections},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/broadinstitute/axonet-neuromorpho-dataset}}
}
```
This dataset is derived from NeuroMorpho.org. Please cite:
Ascoli GA, Donohue DE, Halavi M (2007) NeuroMorpho.Org: A Central Resource for Neuronal Morphologies. J Neurosci 27:9247-9251.