Mistral 7B Fine-tuned on Medical, Legal & Finance Data

Model Description

This model is a fine-tuned version of Mistral-7B-v0.1 on a combined dataset of:

  • ๐Ÿฅ Medical QA (MedMCQA)
  • โš–๏ธ Legal QA (Legal-QA-v1)
  • ๐Ÿ’ฐ Finance Sentiment (FinGPT)

Training Details

  • Base model: mistralai/Mistral-7B-v0.1
  • Fine-tuning method: QLoRA (4-bit quantization)
  • Dataset size: 9,742 samples
  • Epochs: 3 (~10 hours total training time)
  • Hardware: Google Colab T4 GPU
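
The QLoRA setup above can be sketched roughly as follows. Note this is an illustrative reconstruction: the exact quantization settings and LoRA hyperparameters (rank, alpha, dropout, target modules) are not stated in this card and are assumptions here.

```python
# Hypothetical QLoRA configuration for 4-bit fine-tuning of Mistral-7B-v0.1.
# Rank, alpha, dropout, and target modules below are illustrative assumptions,
# not values confirmed by this model card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute (T4 has no bf16 support)
    bnb_4bit_use_double_quant=True,        # nested quantization to save memory
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,              # assumed LoRA rank
    lora_alpha=32,     # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only adapter weights are trained
```

With this setup, only the low-rank adapter weights are updated during training, which is what makes a 7B model trainable on a single Colab T4 GPU.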

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-username/mistral-7b-medlexfin")
tokenizer = AutoTokenizer.from_pretrained("your-username/mistral-7b-medlexfin")

# Example inference
inputs = tokenizer("What are the symptoms of hypertension?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Domains

  • Medical question answering
  • Legal question answering and explanation
  • Financial sentiment analysis