# Mistral-7B IoT Anomaly Detection – LoRA Adapter
Fine-tuned Mistral-7B-Instruct-v0.3 using LoRA/QLoRA for industrial IoT anomaly detection and predictive maintenance Q&A.
## Use Case
This adapter specializes Mistral-7B for technical Q&A in the industrial IoT domain:
- Classifying sensor anomalies (point, contextual, trend)
- Predictive maintenance strategies and decision-making
- LSTM and ML pipeline design for sensor data
- Evaluation metrics for imbalanced anomaly detection datasets
- Scalable IoT data pipeline architecture
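A minimal querying sketch for these use cases. The repo id is the one shown in this card; the prompt wrapper follows Mistral's `[INST]` instruct template, and the heavy model-loading calls are left commented because they require downloading the 7B base model (roughly 6 GB of GPU memory in 4-bit):

```python
# Sketch: querying the adapter for an IoT anomaly question.
# The loading code below is commented out; it assumes a GPU and that
# bitsandbytes is installed for 4-bit loading.

def build_prompt(question: str) -> str:
    """Wrap an IoT question in the Mistral-Instruct chat template."""
    return f"<s>[INST] {question} [/INST]"

prompt = build_prompt(
    "A vibration sensor reads 3x its rolling mean for one sample, "
    "then returns to baseline. What anomaly type is this?"
)

# from transformers import AutoModelForCausalLM, AutoTokenizer
# from peft import PeftModel
#
# base = AutoModelForCausalLM.from_pretrained(
#     "mistralai/Mistral-7B-Instruct-v0.3",
#     load_in_4bit=True, device_map="auto",
# )
# model = PeftModel.from_pretrained(
#     base, "udishaduttachowdhury/mistral-7b-iot-anomaly-detection-lora"
# )
# tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
# inputs = tok(prompt, return_tensors="pt").to(model.device)
# out = model.generate(**inputs, max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```

In practice you can also let the tokenizer build the template for you via `tok.apply_chat_template([...])`; the manual wrapper above just makes the expected format explicit.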
## Training Details
| Parameter | Value |
|---|---|
| Base model | mistralai/Mistral-7B-Instruct-v0.3 |
| Method | LoRA + QLoRA (4-bit quantization) |
| LoRA rank (r) | 16 |
| LoRA alpha | 16 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Trainable parameters | ~1% of total |
| Training epochs | 3 |
| Final training loss | 1.9318 |
| Final validation loss | 1.8930 |
| Validation loss improvement | 14.5% over epoch 1 |
| Framework | Unsloth + PEFT + TRL |
| Hardware | Google Colab T4 GPU |
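The "~1% trainable" figure can be sanity-checked from the LoRA config above. Each adapted linear layer of shape `d_in × d_out` adds `r · (d_in + d_out)` trainable parameters (an `A` matrix of shape `(r, d_in)` plus a `B` matrix of shape `(d_out, r)`). A back-of-envelope sketch, assuming the standard Mistral-7B-v0.3 dimensions (hidden size 4096, 32 layers, 1024-dim k/v projections under grouped-query attention, 14336 MLP intermediate) — these dimensions come from the base model's published config, not from this card:

```python
# Back-of-envelope LoRA trainable-parameter count for Mistral-7B with r=16.
r = 16
hidden, inter, layers = 4096, 14336, 32  # assumed Mistral-7B-v0.3 dims
kv_dim = 1024                            # 8 KV heads x 128 head_dim (GQA)

# (d_in, d_out) for each target module in one decoder layer
modules = {
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv_dim),
    "v_proj": (hidden, kv_dim),
    "o_proj": (hidden, hidden),
    "gate_proj": (hidden, inter),
    "up_proj": (hidden, inter),
    "down_proj": (inter, hidden),
}

per_layer = sum(r * (d_in + d_out) for d_in, d_out in modules.values())
trainable = per_layer * layers
total = 7_250_000_000  # base model size, approximate
print(f"{trainable:,} trainable (~{100 * trainable / total:.2f}% of total)")
```

This gives 41,943,040 trainable parameters, about 0.6% of the base model — the same low-single-digit-percent ballpark as the "~1%" reported above (the exact percentage depends on what the counting script includes in the denominator).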
## Training Results
| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 2.0929 | 2.2116 |
| 2 | 2.0924 | 2.0939 |
| 3 | 1.9318 | 1.8930 |
Both training and validation loss decrease consistently across all three epochs, and validation loss drops below training loss at epoch 3, indicating good generalization on the small domain-specific dataset.
## License
This model adapter is released under the Apache 2.0 license, consistent with the base Mistral-7B model license.