---
license: apache-2.0
tags:
- knowledge-graph
- wiki5m
- scaling-law
- preservation
- ood
---
# Hypernet Scaling Law Data
Data assets for scaling-law and preservation (catastrophic forgetting) experiments.
## Contents
- **OOD splits**: `train_ood_scaling_law.pq`, `valid_ood_scaling_law.pq`, `eval_ood_scaling_law.pq` — train/valid/eval split by domain (the eval split contains held-out domains only).
- **Scaling law**: `train_scaling_law.pq`, `valid_scaling_law.pq` — 1-hop/2-hop/3-hop QA.
- **With facts**: `train_scaling_law_with_facts.pq`, `valid_scaling_law_with_facts.pq` — same + `facts` column from relation templates.
- **Preservation**: `preservation_train.pq`, `preservation_eval.pq` (and `preserve_data/`, `preserve_data_2hop/`, `preserve_data_combined/`) — entities not in train, for preservation loss and eval.
- **Relation templates**: `relation_template_mapping.csv` — relation label → question template and noun_template for fact generation.
- **EDA**: `domain_counts_eda.csv`, `figures/` — domain and n_hop stats/plots.
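As a sketch of how `relation_template_mapping.csv` can drive fact generation: the snippet below renders a triplet as a natural-language fact via a noun template. The column names (`relation`, `noun_template`) and the template placeholder are assumptions for illustration; check the CSV header for the actual names.

```python
import pandas as pd

# Hypothetical rows mirroring relation_template_mapping.csv
# (actual column names and template syntax may differ).
templates = pd.DataFrame({
    "relation": ["capital of", "author of"],
    "noun_template": ["the capital of {subject}", "the author of {subject}"],
})

def make_fact(subject: str, relation: str, obj: str, templates: pd.DataFrame) -> str:
    """Render a (subject, relation, object) triplet as a fact string."""
    row = templates.loc[templates["relation"] == relation].iloc[0]
    return f"{row['noun_template'].format(subject=subject)} is {obj}."

fact = make_fact("France", "capital of", "Paris", templates)
# e.g. "the capital of France is Paris."
```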
## Schema (parquet)
Canonical columns: `triplet_subject`, `triplet_relation`, `triplet_object`, `question_prompt`, `answer`.
Some files add `n_hop`, `facts` (list of strings), or `domain`.
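A quick sanity check after loading any of the parquet files, verifying the canonical columns listed above are present (optional columns like `n_hop`, `facts`, and `domain` are allowed to vary by file):

```python
import pandas as pd

# Canonical columns from the schema above
CANONICAL = ["triplet_subject", "triplet_relation", "triplet_object",
             "question_prompt", "answer"]

def missing_columns(df: pd.DataFrame) -> list:
    """Return any canonical columns absent from a loaded frame."""
    return [c for c in CANONICAL if c not in df.columns]

# Example: a frame with all canonical columns plus an optional one
df = pd.DataFrame(columns=CANONICAL + ["n_hop"])
missing_columns(df)  # -> []
```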
## Usage
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download a single parquet file from the repo, then load it
path = hf_hub_download(
    repo_id="nace-ai/hypernet-scaling-law-data",
    filename="train_ood_scaling_law.pq",
)
df = pd.read_parquet(path)
```
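For the `*_with_facts.pq` files, where `facts` is a list of strings per row, one row per (question, fact) pair can be obtained with `explode`. The frame below is synthetic, standing in for the real file:

```python
import pandas as pd

# Synthetic rows mimicking train_scaling_law_with_facts.pq
df = pd.DataFrame({
    "question_prompt": ["Who wrote X?", "Where is Y?"],
    "answer": ["A", "B"],
    "facts": [["fact 1", "fact 2"], ["fact 3"]],
})

# One row per (question, fact) pair
flat = df.explode("facts").reset_index(drop=True)
# len(flat) == 3
```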