castleyu committed · Commit d641603 · Parent(s): 1400ae4 · update readme.md

README.md CHANGED
@@ -21,7 +21,7 @@ tags:
 <p><i>Tencent Robotics X × HY Vision Team</i></p>
 
 <a href="https://github.com/Tencent-Hunyuan/HY-Embodied/blob/master/hy_embodied_tech_report.pdf"><img src="https://img.shields.io/badge/Paper-Report-red?logo=report" alt="Tech Report"></a>
-<a href="https://arxiv.org"><img src="https://img.shields.io/badge/Paper-Arxiv-red?logo=arxiv" alt="Paper"></a>
+<a href="https://arxiv.org/abs/2604.07430"><img src="https://img.shields.io/badge/Paper-Arxiv-red?logo=arxiv" alt="Paper"></a>
 <a href="https://huggingface.co/tencent/HY-Embodied-0.5/tree/main"><img src="https://img.shields.io/badge/Models-HuggingFace-yellow?logo=huggingface" alt="Models"></a>
 <a href="https://github.com/Tencent-Hunyuan/HY-Embodied"><img src="https://img.shields.io/badge/GitHub-Repo-181717?logo=github&logoColor=white" alt="GitHub"></a>
 
@@ -302,7 +302,15 @@ for i, msgs in enumerate(messages_batch):
 *Note: Results for HY-Embodied-0.5 MoT-2B are reported in thinking mode, while for all other models, we report the better performance between non-thinking and thinking modes.*
 
 ## 📚 Citation
-
+If you find it useful for your research and applications, please cite our paper using this BibTeX:
+```bibtex
+@article{tencent2026hyembodied05,
+  title={HY-Embodied-0.5: Embodied Foundation Models for Real-World Agents},
+  author={Tencent Robotics X and HY Vision Team},
+  journal={arXiv preprint arXiv:2604.07430},
+  year={2026}
+}
+```
 
 ## 🙏 Acknowledgements
 