# Healthcare Patient Interaction LoRA Adapter
A specialized LoRA adapter for healthcare patient interaction and triage assessment, trained using medical conversation data from ruslanmv/Medical-Llama3-8B and fine-tuned on Llama 3.1 8B Instruct.
## Model Details
- Base Model: NousResearch/Meta-Llama-3.1-8B-Instruct
- Training Data Source: ruslanmv/Medical-Llama3-8B dataset
- Adapter Type: LoRA (Low-Rank Adaptation)
- Task: Medical patient interaction and triage
- Training Samples: 1000 medical conversation samples
- Training Cost: ~$25 on AWS SageMaker
- Training Time: 27 minutes on ml.g5.12xlarge
## Training Approach

This adapter was created by:
- Data Source: Using the medical conversation dataset from ruslanmv/Medical-Llama3-8B
- Base Model: Fine-tuning on NousResearch/Meta-Llama-3.1-8B-Instruct (upgraded from Llama 3.0 to 3.1)
- Method: LoRA (Low-Rank Adaptation) for efficient fine-tuning
- Focus: Specialized for patient interaction and triage scenarios
## Model Lineage

```
ruslanmv/Medical-Llama3-8B (dataset)
  ↓ (medical conversation data)
NousResearch/Meta-Llama-3.1-8B-Instruct (base model)
  ↓ (LoRA fine-tuning)
ccala/healthcare-patient-interaction-lora (this adapter)
```
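The lineage above translates into a short inference sketch: load the base model, then attach this adapter on top with `peft`. This is a hedged example following standard `transformers`/`peft` usage, not a script shipped with the adapter; imports are deferred into the function so the sketch stays self-contained.

```python
def load_healthcare_adapter():
    """Load the base Llama 3.1 8B Instruct model and apply this LoRA adapter.

    Requires `transformers` and `peft` (and enough GPU memory for an
    8B model) at call time; imports are deferred until then.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "NousResearch/Meta-Llama-3.1-8B-Instruct"
    adapter_id = "ccala/healthcare-patient-interaction-lora"

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Attach the LoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(model, adapter_id)
    return model, tokenizer
```

The adapter only stores the low-rank update matrices, so the base model must be downloaded separately and loaded first.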
## Dataset Information
- Source: ruslanmv/Medical-Llama3-8B
- Type: Medical conversation data
- Format: Patient-healthcare provider interactions
- Samples Used: 1000 representative conversations
- Focus: Patient triage and initial assessment scenarios
## Training Configuration
- LoRA Rank (r): 8
- LoRA Alpha: 16
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Training Steps: 500
- Learning Rate: 1e-4
- Batch Size: 2
- Quantization: 4-bit QLoRA
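The hyperparameters above map onto a `peft` `LoraConfig` plus a `bitsandbytes` 4-bit quantization config roughly as follows. This is a sketch of the setup, not the exact training script; the NF4 quant type and bfloat16 compute dtype are common QLoRA defaults, assumed here rather than stated in the card.

```python
# Hyperparameters exactly as listed in this card.
LORA_PARAMS = {
    "r": 8,
    "lora_alpha": 16,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj",
                       "gate_proj", "up_proj", "down_proj"],
}

def build_training_configs():
    """Build the LoRA and 4-bit QLoRA configs.

    Imports are deferred so the sketch has no hard dependency on
    `torch`/`peft`/`transformers` until it is called.
    """
    import torch
    from peft import LoraConfig
    from transformers import BitsAndBytesConfig

    lora = LoraConfig(task_type="CAUSAL_LM", **LORA_PARAMS)
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",               # assumed QLoRA default
        bnb_4bit_compute_dtype=torch.bfloat16,   # assumed compute dtype
    )
    return lora, bnb
```

Targeting all seven projection matrices (attention plus MLP) at rank 8 keeps the trainable parameter count small while still adapting every transformer sublayer.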
## Acknowledgments
- Dataset: Thanks to ruslanmv for the Medical-Llama3-8B dataset
- Base Model: NousResearch for the Meta-Llama-3.1-8B-Instruct model