Chat-TS model, trained on a Llama 3.1-7B backbone.

This model discretely tokenizes time-series data and uses an expanded vocabulary to model time-series representations.
Due to these modifications, it should be compatible with most modern inference frameworks (e.g., vLLM), since you can simply pass the multi-modal token stream directly to the model.
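As a rough illustration of the idea, discrete tokenization can be sketched as below: scale a series into a fixed range, quantize each value into one of N bins, and map each bin to a dedicated vocabulary token. The bin scheme and the `<ts_i>` token names are illustrative assumptions, not the model's actual vocabulary.

```python
def tokenize_series(values, n_bins=256):
    """Quantize a numeric series into discrete vocabulary tokens.

    Hypothetical sketch: bin count and <ts_i> token names are
    illustrative, not the tokens Chat-TS actually uses.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    tokens = []
    for v in values:
        # Map v into a bin index in [0, n_bins - 1], then emit its token.
        idx = min(int((v - lo) / span * n_bins), n_bins - 1)
        tokens.append(f"<ts_{idx}>")
    return tokens

series = [0.0, 0.25, 0.5, 1.0]
ts_tokens = tokenize_series(series)
# The multi-modal prompt is then ordinary text plus time-series tokens,
# so a standard text-generation stack can consume it unchanged.
prompt = "Describe this series: " + "".join(ts_tokens)
```

Because the time-series modality lives entirely in the token stream, no custom model forward pass or adapter is needed at inference time.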
This model was trained for text-generation tasks; however, the framework is extensible to time-series generation as well.

For more information, please see the paper below.
If you use this work, please cite:

```bibtex
@misc{quinlan2025chattsenhancingmultimodalreasoning,
  title={Chat-TS: Enhancing Multi-Modal Reasoning Over Time-Series and Natural Language Data},
  author={Paul Quinlan and Qingguo Li and Xiaodan Zhu},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2503.10883},
}
```