---
license: mit
---

# MT-LLM

<em> in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025
<br> Authors: Lingwei Meng, Shujie Hu, Jiawen Kang, Yuejiao Wang, Wenxuan Wu, Xixin Wu, Xunying Liu, Helen Meng</em>

This repository contains the trained MT-LLM model for instruction-based multi-talker overlapped speech recognition.

Please check https://github.com/cuhealthybrains/MT-LLM for more details.

If you find our work useful in your research, please cite the following paper:

```bibtex
@inproceedings{meng2025mtllm,
  title={Large Language Model Can Transcribe Speech in Multi-Talker Scenarios with Versatile Instructions},
  author={Meng, Lingwei and Hu, Shujie and Kang, Jiawen and Li, Zhaoqing and Wang, Yuejiao and Wu, Wenxuan and Wu, Xixin and Liu, Xunying and Meng, Helen},
  booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2025}
}
```