Commit 06f263a (verified) by buckeyeguy · Parent: 4ac60f8

Upload README.md with huggingface_hub

Files changed: README.md (+166 lines)
---
license: cc-by-4.0
task_categories:
- tabular-classification
- graph-ml
tags:
- intrusion-detection
- CAN-bus
- graph-neural-networks
- knowledge-distillation
pretty_name: GraphIDS Paper Data
---

# GraphIDS — Paper Data

Evaluation artifacts for "Adaptive Fusion of Graph-Based Ensembles for Automotive Intrusion Detection".

Consumed by [kd-gat-paper](https://github.com/frenken-lab/kd-gat-paper) via `data/pull_data.py`.

## Schema Contract

**If you change column names or file structure, `pull_data.py` will fail.**
The input schema is enforced in `pull_data.py:INPUT_SCHEMA`.

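A minimal sketch of the kind of column check `INPUT_SCHEMA` enables. The actual schema dict lives in `pull_data.py`; the `INPUT_SCHEMA` entry and `validate_columns` helper below are illustrative, with the required columns taken from the `metrics.parquet` table in this README.

```python
# Hypothetical sketch of a column check against INPUT_SCHEMA.
# The real schema dict lives in pull_data.py; this entry is illustrative.
INPUT_SCHEMA = {
    "metrics.parquet": {
        "run_id", "model", "accuracy", "precision", "recall", "f1",
        "specificity", "balanced_accuracy", "mcc", "fpr", "fnr",
        "auc", "n_samples", "dataset",
    },
}

def validate_columns(filename: str, columns: set) -> set:
    """Return the required columns missing from `columns` (empty set = OK)."""
    return INPUT_SCHEMA[filename] - columns

# A file missing most metric columns fails the check.
missing = validate_columns("metrics.parquet", {"run_id", "model", "accuracy"})
```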
### metrics.parquet

Per-model evaluation metrics across all runs.

| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier, e.g. `hcrl_sa/eval_large_evaluation` |
| `model` | str | Model name: `gat`, `vgae`, `fusion` |
| `accuracy` | float | Classification accuracy |
| `precision` | float | Precision |
| `recall` | float | Recall (sensitivity) |
| `f1` | float | F1 score |
| `specificity` | float | Specificity (TNR) |
| `balanced_accuracy` | float | Balanced accuracy |
| `mcc` | float | Matthews correlation coefficient |
| `fpr` | float | False positive rate |
| `fnr` | float | False negative rate |
| `auc` | float | Area under ROC curve |
| `n_samples` | float | Number of evaluation samples |
| `dataset` | str | Dataset name: `hcrl_sa`, `hcrl_ch`, `set_01`–`set_04` |

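Several of these columns are redundant by definition, which makes a cheap per-row sanity check possible when consuming the file. The row values and `is_consistent` helper below are illustrative, not taken from the dataset.

```python
# Illustrative row (made-up values). By definition:
#   balanced_accuracy = (recall + specificity) / 2
#   fpr = 1 - specificity
row = {"recall": 0.96, "specificity": 0.98,
       "balanced_accuracy": 0.97, "fpr": 0.02}

def is_consistent(r: dict, tol: float = 1e-9) -> bool:
    """Check the definitional relations between redundant metric columns."""
    return (
        abs(r["balanced_accuracy"] - (r["recall"] + r["specificity"]) / 2) < tol
        and abs(r["fpr"] - (1 - r["specificity"])) < tol
    )
```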
### embeddings.parquet

2D UMAP projections of graph embeddings per model.

| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `model` | str | Model that produced the embedding: `gat`, `vgae` |
| `x` | float | UMAP dimension 1 |
| `y` | float | UMAP dimension 2 |
| `label` | int | Ground truth: 0 = normal, 1 = attack |

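A typical use of this long format is splitting one model's points into normal and attack series for a two-color scatter plot. The rows and the `split_by_label` helper are illustrative.

```python
# Made-up embedding rows in the schema above.
rows = [
    {"model": "gat", "x": 0.1, "y": 2.3, "label": 0},
    {"model": "gat", "x": 4.2, "y": -1.0, "label": 1},
    {"model": "vgae", "x": 0.3, "y": 2.1, "label": 0},
]

def split_by_label(rows, model):
    """Return (normal, attack) point lists for one model, ready to plot."""
    normal = [(r["x"], r["y"]) for r in rows
              if r["model"] == model and r["label"] == 0]
    attack = [(r["x"], r["y"]) for r in rows
              if r["model"] == model and r["label"] == 1]
    return normal, attack

normal, attack = split_by_label(rows, "gat")
```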
### cka_similarity.parquet

CKA similarity between teacher and student layers (KD runs only).

| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier (only `*_kd` runs) |
| `dataset` | str | Dataset name |
| `teacher_layer` | str | Teacher layer name, e.g. `Teacher L1` |
| `student_layer` | str | Student layer name, e.g. `Student L1` |
| `similarity` | float | CKA similarity score (0–1) |

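A CKA heatmap needs this long format pivoted into a teacher × student matrix. The rows below are made-up values following the layer-name examples above; the pivot itself is plain dict bookkeeping.

```python
# Made-up long-format CKA rows.
rows = [
    {"teacher_layer": "Teacher L1", "student_layer": "Student L1", "similarity": 0.91},
    {"teacher_layer": "Teacher L1", "student_layer": "Student L2", "similarity": 0.44},
    {"teacher_layer": "Teacher L2", "student_layer": "Student L1", "similarity": 0.38},
    {"teacher_layer": "Teacher L2", "student_layer": "Student L2", "similarity": 0.87},
]

# Pivot to a matrix with teachers as rows and students as columns.
teachers = sorted({r["teacher_layer"] for r in rows})
students = sorted({r["student_layer"] for r in rows})
lookup = {(r["teacher_layer"], r["student_layer"]): r["similarity"] for r in rows}
matrix = [[lookup[(t, s)] for s in students] for t in teachers]
```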
### dqn_policy.parquet

DQN fusion weight (alpha) per evaluated graph.

| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `dataset` | str | Dataset name |
| `scale` | str | Model scale: `large`, `small` |
| `has_kd` | int | Whether KD was used: 0 or 1 |
| `action_idx` | int | Graph index within the evaluation set |
| `alpha` | float | Fusion weight (0 = full VGAE, 1 = full GAT) |

**Note:** Lacks per-graph `label` and `attack_type`. The paper figure needs these fields joined from evaluation results. This is a known gap — see the `pull_data.py` skip message.

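One simple summary this table supports is the mean fusion weight per configuration, showing how strongly the DQN leans on the GAT (alpha near 1) versus the VGAE (alpha near 0). The rows are made-up values in the schema above.

```python
from collections import defaultdict

# Made-up per-graph fusion weights.
rows = [
    {"scale": "large", "has_kd": 0, "alpha": 0.8},
    {"scale": "large", "has_kd": 0, "alpha": 0.6},
    {"scale": "small", "has_kd": 1, "alpha": 0.3},
]

# Accumulate (sum, count) per (scale, has_kd), then divide.
sums = defaultdict(lambda: [0.0, 0])
for r in rows:
    key = (r["scale"], r["has_kd"])
    sums[key][0] += r["alpha"]
    sums[key][1] += 1

mean_alpha = {k: total / n for k, (total, n) in sums.items()}
```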
### recon_errors.parquet

VGAE reconstruction error per evaluated graph.

| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `error` | float | Scalar reconstruction error |
| `label` | int | Ground truth: 0 = normal, 1 = attack |

**Note:** Single scalar error — no per-component decomposition (Node Recon, CAN ID, Neighbor, KL). The paper figure needs the component breakdown. This is a known gap.

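Scalar reconstruction errors are typically turned into binary predictions with a threshold fitted on normal traffic. The sketch below uses the maximum normal-labeled error as the cutoff purely for illustration; it is not the paper's thresholding procedure, and the rows are made up.

```python
# Made-up per-graph reconstruction errors.
rows = [
    {"error": 0.02, "label": 0},
    {"error": 0.03, "label": 0},
    {"error": 0.04, "label": 0},
    {"error": 0.45, "label": 1},
]

# Illustrative cutoff: the largest error seen on normal graphs.
normal_errors = [r["error"] for r in rows if r["label"] == 0]
threshold = max(normal_errors)

# Flag anything above the cutoff as an attack.
preds = [1 if r["error"] > threshold else 0 for r in rows]
```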
### attention_weights.parquet

Mean GAT attention weights aggregated per head.

| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `sample_idx` | int | Graph sample index |
| `label` | int | Ground truth: 0 = normal, 1 = attack |
| `layer` | int | GAT layer index |
| `head` | int | Attention head index |
| `mean_alpha` | float | Mean attention weight for this head |

**Note:** Aggregated per head, not per edge. The paper figure needs per-edge attention weights with node positions. This is a known gap.

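Even in aggregated form the table supports comparisons such as mean attention per layer for normal versus attack graphs. The rows and the `layer_mean` helper below are illustrative.

```python
# Made-up per-head attention summaries.
rows = [
    {"label": 0, "layer": 0, "head": 0, "mean_alpha": 0.20},
    {"label": 0, "layer": 0, "head": 1, "mean_alpha": 0.30},
    {"label": 1, "layer": 0, "head": 0, "mean_alpha": 0.60},
    {"label": 1, "layer": 0, "head": 1, "mean_alpha": 0.40},
]

def layer_mean(rows, label, layer):
    """Average mean_alpha over all heads for one (label, layer) slice."""
    vals = [r["mean_alpha"] for r in rows
            if r["label"] == label and r["layer"] == layer]
    return sum(vals) / len(vals)
```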
### graph_samples.json

Raw CAN bus graph instances with node/edge features.

Top-level keys: `schema_version`, `exported_at`, `data`, `feature_names`.

Each item in `data`:
- `dataset`: str — dataset name
- `label`: int — 0/1
- `attack_type`: int — attack type code
- `attack_type_name`: str — human-readable name
- `nodes`: list of `{id, features, node_y, node_attack_type, node_attack_type_name}`
- `links`: list of `{source, target, edge_features}`
- `num_nodes`, `num_edges`: int
- `id_entropy`, `stats`: additional metadata

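A sketch of consuming one item from `data`: the node-link representation converts directly into an adjacency list. The sample below is an abbreviated, made-up item following the schema above (real files would be read with `json.load`).

```python
import json

# Abbreviated made-up item in the schema above.
sample = json.loads("""
{
  "dataset": "hcrl_sa", "label": 1,
  "attack_type": 2, "attack_type_name": "fuzzing",
  "nodes": [
    {"id": 0, "features": [0.1], "node_y": 0,
     "node_attack_type": 0, "node_attack_type_name": "normal"},
    {"id": 1, "features": [0.9], "node_y": 1,
     "node_attack_type": 2, "node_attack_type_name": "fuzzing"}
  ],
  "links": [{"source": 0, "target": 1, "edge_features": [3.0]}],
  "num_nodes": 2, "num_edges": 1
}
""")

# Build a directed adjacency list from the node-link representation.
adj = {n["id"]: [] for n in sample["nodes"]}
for link in sample["links"]:
    adj[link["source"]].append(link["target"])
```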
### metrics/*.json

Per-configuration evaluation results: 18 files covering 6 datasets × 3 configs (`large`, `small`, `small_kd`).

Each file: `{schema_version, exported_at, data: [{model, scenario, metric_name, value}]}`

Currently only the `val` scenario is included — cross-dataset test scenarios are not yet exported.

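A sketch of looking up one value in this long format. The payload is a made-up example following the file schema above; the `metric` helper is illustrative.

```python
# Made-up payload in the schema above.
payload = {
    "schema_version": 1,
    "exported_at": "2024-01-01T00:00:00Z",
    "data": [
        {"model": "gat", "scenario": "val", "metric_name": "f1", "value": 0.99},
        {"model": "vgae", "scenario": "val", "metric_name": "f1", "value": 0.95},
    ],
}

def metric(payload, model, name, scenario="val"):
    """Return one metric value from the long-format records, or None."""
    for rec in payload["data"]:
        if (rec["model"], rec["scenario"], rec["metric_name"]) == (model, scenario, name):
            return rec["value"]
    return None
```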
### Other files

| File | Description |
|---|---|
| `leaderboard.json` | Cross-dataset model comparison (all metrics, all runs) |
| `model_sizes.json` | Parameter counts per model type and scale |
| `training_curves.parquet` | Loss/accuracy curves over training epochs |
| `graph_statistics.parquet` | Per-graph structural statistics |
| `datasets.json` | Dataset metadata |
| `runs.json` | Run metadata |
| `kd_transfer.json` | Knowledge distillation transfer metrics |

## Run ID Convention

Format: `{dataset}/{eval_config}`

- Datasets: `hcrl_sa`, `hcrl_ch`, `set_01` through `set_04`
- Configs: `eval_large_evaluation`, `eval_small_evaluation`, `eval_small_evaluation_kd`

The paper defaults to `hcrl_sa/eval_large_evaluation` for main results and `hcrl_sa/eval_small_evaluation_kd` for CKA.

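The convention above parses mechanically into the fields the parquet files carry as columns. The `parse_run_id` helper is an illustrative sketch, not a function from `pull_data.py`; it assumes the scale and KD flag can be read off the config name as described.

```python
def parse_run_id(run_id: str) -> dict:
    """Split '{dataset}/{eval_config}' and derive scale / KD flags
    from the config name, per the convention above."""
    dataset, config = run_id.split("/", 1)
    return {
        "dataset": dataset,
        "config": config,
        "scale": "small" if "small" in config else "large",
        "has_kd": config.endswith("_kd"),
    }

info = parse_run_id("hcrl_sa/eval_small_evaluation_kd")
```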
## Known Gaps

These files need richer exports from the KD-GAT pipeline:

1. **dqn_policy.parquet** — needs per-graph `label` + `attack_type` columns
2. **recon_errors.parquet** — needs per-component error decomposition
3. **attention_weights.parquet** — needs per-edge weights + node positions
4. **metrics/*.json** — needs cross-dataset test scenario results

Until these are addressed, `pull_data.py` preserves existing committed files for the affected figures.