# NAG-ME-QD Experiment Data
Trained model weights, elite archive, and full experiment results from the Neural-Architecture-Generation-using-MAP-Elites-Quality-Diversity research project.
Code repository: github.com/Nozomi1856/Neural-Architecture-Generation-using-MAP-Elites-Quality-Diversity
Author: Pratheeksha Aravind
## Experiment configuration
| Setting | Value |
|---|---|
| Dataset | CIFAR-10 |
| MAP-Elites iterations | 20 |
| Initial population | 10 |
| Evaluation mode | Zero-shot |
| Final training epochs | 5 |
| Behavior space dimensions | 32 |
| Seed | 42 |
Total runtime: ~24.8 minutes
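For scripting or logging, the settings above can be captured as a plain dict. The key names below are illustrative only, not the config schema actually used by the code repo:

```python
# Experiment settings from the table above, as an illustrative dict.
# Key names are assumptions, not the repo's actual config schema.
EXPERIMENT_CONFIG = {
    "dataset": "CIFAR-10",
    "map_elites_iterations": 20,
    "initial_population": 10,
    "evaluation_mode": "zero-shot",
    "final_training_epochs": 5,
    "behavior_dims": 32,
    "seed": 42,
}

print(EXPERIMENT_CONFIG["seed"])  # 42
```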
## Results summary
| Model | Cell type | Val accuracy | Parameters |
|---|---|---|---|
| top1 (fe0b192e6185) | HYBRID | 54.4% | 40,842 |
| top2 (705bc1677934) | HYBRID | 56.1% | 56,330 |
| top_transformer (7266c514462f) | HYBRID | 58.0% | 57,482 |
| top_recurrent (268dec31f257) | DAG | 44.3% | 77,194 |
| top_conv (fe0b192e6185) | HYBRID | 53.8% | 40,842 |
Archive: 24 elites, QD score 11.99, max fitness 0.612, mean fitness 0.500
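The reported QD score is consistent with the standard definition (sum of elite fitnesses): 24 elites at mean fitness 0.500 sum to ~12.0. A minimal sketch, assuming the archive can be read as a mapping from elite id to fitness (the real schema lives in `elite_index.json`):

```python
def qd_metrics(fitness_by_elite):
    """Summary statistics for a MAP-Elites archive, given a mapping
    from elite id to fitness (hypothetical schema; adapt to elite_index.json)."""
    fitnesses = list(fitness_by_elite.values())
    return {
        "num_elites": len(fitnesses),
        # Conventional QD score: sum of fitnesses of all elites in the archive.
        "qd_score": sum(fitnesses),
        "max_fitness": max(fitnesses),
        "mean_fitness": sum(fitnesses) / len(fitnesses),
    }

# 24 elites at the reported mean fitness of 0.500 give a QD score of 12.0,
# matching the reported 11.99 up to rounding.
print(qd_metrics({i: 0.5 for i in range(24)})["qd_score"])  # 12.0
```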
## File structure
```
nag_output/
├── final_results.json             # Full experiment summary and config
├── final_qd_trajectory.json       # QD score, coverage, fitness per iteration
├── final_archive_changes.json     # Per-iteration archive update history
├── final_mutation_stats.json      # Mutation operator success rates
├── behavior_space_reference.json  # 32D behavior dimension definitions
├── qd_trajectory_plot.png         # Fitness/coverage plot
│
├── top1/
│   ├── architecture.json          # Architecture specification
│   ├── model.pth                  # Trained PyTorch weights
│   └── training_history.json      # Per-epoch loss and accuracy
├── top2/                          # (same structure)
├── top_conv/                      # (same structure)
├── top_transformer/               # (same structure)
├── top_recurrent/                 # (same structure)
│
└── elite_archive/
    ├── elite_index.json           # Index of all 24 elites with fitness and tags
    └── elite_[id].json            # Per-elite architecture spec and behavior vector
```
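A small helper for ranking entries from a parsed `elite_index.json`. The `"elites"` and `"fitness"` field names are assumptions about the file's schema; check the actual JSON before relying on them:

```python
import json

def top_elites(index, n=5):
    """Return the n highest-fitness entries from a parsed elite index.
    The "elites"/"fitness" field names are assumed, not confirmed."""
    elites = index.get("elites", [])
    return sorted(elites, key=lambda e: e.get("fitness", 0.0), reverse=True)[:n]

# Usage against the shipped archive:
# with open("nag_output/elite_archive/elite_index.json") as f:
#     print(top_elites(json.load(f)))
```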
## Loading a model
```python
import torch
import json

# Load architecture spec
with open("nag_output/top_transformer/architecture.json") as f:
    arch_spec = json.load(f)

# Load weights
weights = torch.load("nag_output/top_transformer/model.pth", map_location="cpu")

# To rebuild the model, use NASNetwork from the code repo:
# https://github.com/Nozomi1856/Multi-Paradigm-MAP-Elites-Quality-Diversity-Neural-Architecture-Generation-with-DARTS-cells
```
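The Parameters column in the results table can be cross-checked against a loaded checkpoint. A sketch, assuming `model.pth` holds a plain state_dict (parameter name to tensor):

```python
from math import prod

def param_count(shapes):
    """Total parameter count from a {name: shape} mapping.
    Build `shapes` from a loaded state_dict, e.g.
    {k: tuple(v.shape) for k, v in weights.items()}."""
    return sum(prod(shape) for shape in shapes.values())

# Toy example: a 3x4 weight matrix plus a 4-element bias -> 16 parameters.
print(param_count({"fc.weight": (3, 4), "fc.bias": (4,)}))  # 16
```

For top1, the total over its state_dict should come out to 40,842 if the checkpoint matches the table.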
## Notes
The zero-shot fitness scores used during search are proxy estimates, not true validation accuracy. Final accuracy figures above are from short post-search training runs (5 epochs). Results likely reflect basic CNN structure and standard training hyperparameters more than the NAS components; this is a known limitation of the current implementation.
## Citation
If you use this data, please credit:
Pratheeksha Aravind. NAG-ME-QD: Neural Architecture Generation with MAP-Elites
Quality-Diversity Optimization. Work in progress, 2026.
GitHub: https://github.com/Nozomi1856/Neural-Architecture-Generation-using-MAP-Elites-Quality-Diversity