Update README.md

README.md CHANGED

@@ -255,6 +255,10 @@ We evaluate LCO-Embedding with state-of-the-art embedding models, including E5-V
<div align='center'><img src="https://cdn-uploads.huggingface.co/production/uploads/63108cc834c7d77420b0fd68/63WBsKh57HbNwwe3bZ-oZ.png" alt="mieb_lite" width="100%"/></div>

+LCO-Embedding is also SOTA on MAEB (the Massive Audio Embedding Benchmark), without even training on audio. Screenshot from the MAEB paper:
+
+[MAEB screenshot]
+
Performance and efficiency comparisons of different training strategies using 3B and 7B variants of Qwen2.5-VL backbones.

<div align='center'><img src="https://github.com/LCO-Embedding/LCO-Embedding/raw/main/assets/lora_ablation.png" alt="lora_ablation" width="100%"/></div>