# (ICML 2025 Poster) SAE-V: Interpreting Multimodal Models for Enhanced Alignment

This repository contains the SAE-V models for our ICML 2025 poster paper "SAE-V: Interpreting Multimodal Models for Enhanced Alignment", including 2 sparse autoencoders (SAE) and 3 sparse autoencoders with vision (SAE-V). See each model folder and the [source code](https://github.com/PKU-Alignment/SAELens-V) for more information.

## 1. Training Parameters

The differences in training parameters arise because the LLaVA-NeXT-7B model req…

## 2. Quickstart

SAE and SAE-V are developed on top of [SAELens-V](https://github.com/PKU-Alignment/SAELens-V). A loading example follows:

```python
from saev_lens import SAE

# Load a pretrained SAE / SAE-V checkpoint onto a GPU. The arguments
# elided in the original snippet (e.g. which checkpoint to load) are
# left as "...".
sae = SAE.load_from_pretrained(
    ...,
    device="cuda:0",
)
```
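For readers new to the technique, the following is a minimal, self-contained sketch of a standard ReLU sparse autoencoder forward pass in NumPy. It is a conceptual illustration only: the dimensions, initialization, and `encode`/`decode` interface are illustrative assumptions, not the SAE-V implementation from this repository.

```python
# Conceptual sketch of a ReLU sparse autoencoder (illustrative only;
# all sizes and weights here are made up, not SAE-V's actual parameters).
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64              # input width, overcomplete feature width
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # Sparse feature activations: ReLU zeroes out non-firing features.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f):
    # Linear reconstruction of the original activation vector.
    return f @ W_dec + b_dec

x = rng.normal(size=d_model)         # a stand-in model activation
f = encode(x)                        # sparse feature vector, shape (64,)
x_hat = decode(f)                    # reconstruction, shape (16,)
```

Training then balances reconstruction error against a sparsity penalty on `f`, so that each learned feature fires for an interpretable pattern in the model's activations.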

More usage tutorials are available in [SAELens-V](https://github.com/PKU-Alignment/SAELens-V).