Small main readme update
README.md CHANGED

@@ -33,8 +33,8 @@ Two MLP models trained on CIFAR-10 that were used to generate **Figure 3** in ou
 
 **Training Details:**
 - Dataset: CIFAR-10
--
-- Architecture: Multi-layer perceptrons
+- Selection: These are the models we used to generate Figure 3
+- Architecture: Multi-layer perceptrons
 
 ### Transformer Models (`models/transformers/`)
 Best-performing transformer model trained on the Shakespeare word dataset, representing our optimal validation accuracy checkpoint.