# Mistral 7B Fine-tuned on Medical, Legal & Finance Data

## Model Description

This model is a fine-tuned version of Mistral-7B-v0.1 on a combined dataset of:
- Medical QA (MedMCQA)
- Legal QA (Legal-QA-v1)
- Finance Sentiment (FinGPT)
## Training Details
- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuning method: QLoRA (4-bit quantization)
- Dataset size: 9,742 samples
- Epochs: 3 (~10 hours total training time)
- Hardware: Google Colab T4 GPU
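
The QLoRA setup above can be sketched as follows. This is an illustrative configuration, not the exact training script: the quantization settings mirror the card's stated method (4-bit), but hyperparameters such as `r`, `lora_alpha`, and `target_modules` are assumptions typical for Mistral-7B fine-tuning.

```python
# Hypothetical QLoRA configuration sketch (assumed hyperparameters,
# not the actual training script used for this model).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as stated in the training details.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections (illustrative choices).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Quantizing the frozen base weights to 4-bit is what makes a 7B-parameter fine-tune fit on a single Colab T4 GPU; only the small LoRA adapter matrices are trained in full precision.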
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-username/mistral-7b-medlexfin")
tokenizer = AutoTokenizer.from_pretrained("your-username/mistral-7b-medlexfin")
```
## Domains
- Medical question answering
- Legal question answering and explanation
- Financial sentiment analysis
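
Since the model covers three domains, prompts can be tagged with the target domain before generation. The template below is a hypothetical sketch: the card does not specify the prompt format used during training, so the `### Domain:` / `### Question:` markers are assumptions to adapt to your own setup.

```python
def format_prompt(domain: str, question: str) -> str:
    """Build a simple domain-tagged instruction prompt.

    NOTE: this template is an assumption for illustration; the model
    card does not document the actual training prompt format.
    """
    allowed = {"medical", "legal", "finance"}
    if domain not in allowed:
        raise ValueError(f"unknown domain: {domain!r}")
    return (
        f"### Domain: {domain}\n"
        f"### Question:\n{question}\n"
        f"### Answer:\n"
    )

prompt = format_prompt("finance", "Is this headline bullish or bearish?")
```

The resulting string can be passed to `tokenizer(...)` and `model.generate(...)` from the usage snippet above.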