# Qwen3-1.7B-grok

A fine-tuned version of Qwen/Qwen3-1.7B for Grok parser pattern generation.
## Description

Given a raw log line, the model generates an appropriate `parse_grok!()` pattern that extracts structured fields from it.
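For example, given a log line like the one below (an illustrative example, not taken from the training set), the model is expected to emit a VRL expression along these lines:

```
# Raw log line:
#   2024-10-10T13:55:36Z ERROR auth-service: login failed for user alice
. = parse_grok!(.message, "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{DATA:service}: %{GREEDYDATA:msg}")
```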
## Training Details
- Base Model: Qwen/Qwen3-1.7B
- Method: LoRA (r=64, alpha=64) + 4-bit quantization
- Training Samples: 646,973
- Epochs: 1
- Learning Rate: 0.0002
- Final Training Loss: 0.442
- Training Time: 39.5 minutes
- Hardware: NVIDIA H100 80GB
- Framework: Unsloth + TRL
## Usage
Benchmark results will be added to this model card after evaluation.
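Inference itself uses the standard `transformers` text-generation API. Below is a self-contained sketch of how a generated pattern could be sanity-checked locally against the source log line; the Grok primitive table, the `grok_to_regex` helper, and the example pattern are illustrative assumptions, not part of the model or of Vector's `parse_grok!`:

```python
import re

# Minimal subset of Grok primitives, just enough to expand the example
# pattern below into a plain regex (hypothetical helper for validation).
GROK_PRIMITIVES = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "NUMBER": r"\d+(?:\.\d+)?",
    "GREEDYDATA": r".*",
}

def grok_to_regex(pattern: str) -> str:
    """Expand %{PRIMITIVE:name} tokens into named regex capture groups."""
    def repl(m: re.Match) -> str:
        prim, name = m.group(1), m.group(2)
        return f"(?P<{name}>{GROK_PRIMITIVES[prim]})"
    return re.sub(r"%\{(\w+):(\w+)\}", repl, pattern)

# A raw log line and the kind of pattern the model is trained to emit.
log_line = "192.168.0.1 GET /index.html 200 0.053"
grok_pattern = (
    "%{IP:client} %{WORD:method} %{GREEDYDATA:path} "
    "%{NUMBER:status} %{NUMBER:duration}"
)

# If the pattern matches, every named field is extracted.
fields = re.match(grok_to_regex(grok_pattern), log_line).groupdict()
print(fields)
```

A pattern that fails to match its own source line (`re.match` returns `None`) is a quick signal that the generation should be retried or post-processed.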