JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning
Paper: arXiv:2310.10083
⚠️⚠️⚠️
For research purposes only.
Do not use this model for medical purposes.
⚠️⚠️⚠️
This model is Llama2-70B instruction-tuned on our own medical Q&A dataset.
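As a minimal usage sketch for research experiments: the repository id below is a placeholder, not this model's actual Hugging Face id, and the prompt and generation settings are assumptions to adapt to your environment.

```python
# Minimal sketch of loading the model for research use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/jmedlora-llama2-70b"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # spread the 70B weights across available GPUs
)

prompt = "以下の質問に日本語で答えてください。高血圧の診断基準は何ですか？"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```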
Training method: QLoRA
Training time: 1,617,017 seconds on NVIDIA A100 × 4 (GPUs not fully utilized)
A bitsandbytes quantization config was used during training; the config listing is not reproduced in this card.
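As a hedged illustration only, a typical 4-bit QLoRA setup with bitsandbytes looks like the sketch below; every value here is an assumption, not the authors' recorded configuration.

```python
# Illustrative 4-bit QLoRA quantization config (assumed values; the
# authors' actual bitsandbytes settings are not listed in this card).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA loads the base model in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the QLoRA default
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)
```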
If you use this data, please consider citing the following paper:
@article{sukeda2023jmedlora,
  title={{JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning}},
  author={Sukeda, Issey and Suzuki, Masahiro and Sakaji, Hiroki and Kodera, Satoshi},
  journal={arXiv preprint arXiv:2310.10083},
  year={2023}
}