# LFM2.5-VL-1.6B UCF Crime (GGUF)
Base model: LiquidAI/LFM2.5-VL-1.6B fine-tuned on the UCF Crime dataset for surveillance crime detection.
Quantized GGUF files for fast inference with llama.cpp / Ollama / LM Studio, derived from rajofearth/lfm-ucf-unsloth.

Training notebook (free Colab): Open in Colab to see exactly how this model was trained and exported.
## About this model
Fine-tuned using Unsloth on ~26k surveillance images from the UCF Crime dataset across 14 categories:

Abuse · Arrest · Arson · Assault · Burglary · Explosion · Fighting · Robbery · Shooting · Shoplifting · Stealing · Vandalism · Road Accident · Normal
The model analyzes surveillance images and outputs structured JSON:
```json
{
  "isHarm": true,
  "descriptionIfHarm": "The image depicts a physical altercation."
}
```
When no harmful activity is detected:
```json
{
  "isHarm": false
}
```
⚠️ Output format note: Always include a system prompt explicitly requesting JSON output. The model is trained toward that format but won't default to it without instruction.
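Downstream code should parse the reply defensively: `descriptionIfHarm` is omitted in the benign case, and chat models sometimes wrap JSON in a markdown code fence. A minimal parsing sketch (the function name and the fallback behavior are our own, not part of the model card):

```python
import json


def parse_harm_output(raw: str) -> dict:
    """Normalize the model's JSON reply.

    Returns {"isHarm": bool, "descriptionIfHarm": str | None};
    isHarm is None if the reply was not valid JSON.
    """
    # Strip a markdown code fence the model may emit around the JSON.
    cleaned = (
        raw.strip()
        .removeprefix("```json")
        .removeprefix("```")
        .removesuffix("```")
        .strip()
    )
    try:
        data = json.loads(cleaned)
    except json.JSONDecodeError:
        return {"isHarm": None, "descriptionIfHarm": None}
    return {
        "isHarm": bool(data.get("isHarm", False)),
        "descriptionIfHarm": data.get("descriptionIfHarm"),
    }
```

This keeps the two documented output shapes behind a single dict, so callers only branch on `isHarm`.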
## Usage
### llama.cpp
```bash
# Vision GGUF inference uses llama.cpp's multimodal CLI, which needs the
# mmproj projector file alongside the main model (adjust the mmproj
# filename to whatever this repo ships).
./llama-mtmd-cli -m LFM2.5-VL-1.6B.Q4_K_M.gguf \
  --mmproj mmproj.gguf \
  --image your_surveillance_image.jpg \
  -p "Analyze this surveillance image and respond ONLY in JSON: {\"isHarm\": true/false, \"descriptionIfHarm\": \"reason if harmful, else omit\"}." \
  --temp 0.1
```
### Ollama
```bash
# Create a Modelfile
cat > Modelfile << 'EOF'
FROM ./LFM2.5-VL-1.6B.Q4_K_M.gguf
SYSTEM "You are a surveillance analysis assistant. Analyze images for harmful or criminal activity. Always respond in strict JSON: {\"isHarm\": true/false, \"descriptionIfHarm\": \"brief description if harmful, else omit\"}."
EOF

ollama create lfm-ucf -f Modelfile
ollama run lfm-ucf
```
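Once created, the model is also reachable through Ollama's local HTTP API (`/api/generate`), which accepts base64-encoded images. A minimal sketch, assuming the `lfm-ucf` name from the Modelfile above and the default Ollama port; the prompt string is our own:

```python
import base64
import json
from urllib import request


def build_payload(image_path: str, model: str = "lfm-ucf") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    return {
        "model": model,
        "prompt": "Analyze this surveillance image.",
        "images": [img_b64],
        "stream": False,  # return one complete JSON response
    }


def classify(image_path: str, host: str = "http://localhost:11434") -> str:
    """POST the image to a running Ollama server and return the raw reply."""
    data = json.dumps(build_payload(image_path)).encode()
    req = request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The returned `response` string is the model's JSON verdict, ready to be parsed as described in the output-format section above.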
Note: Vision GGUF support for this architecture is still experimental in llama.cpp and Ollama. Results may vary; the LoRA version via Unsloth/Transformers is more reliable for production use.
## Performance
| Model | Accuracy (5,200 samples) |
|---|---|
| Base model (untrained) | 35.2% |
| This model (fine-tuned) | 44.8% |
A +9.6 percentage point improvement on UCF Crime CCTV imagery, evaluated with an LLM judge on a held-out test set of 5,200 samples.
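Judge verdicts like these can be aggregated into overall and per-category accuracy, which is useful when some crime classes are much rarer than others. A minimal sketch; the record format is an assumption, since the card does not publish the judge's schema:

```python
from collections import Counter


def judge_accuracy(records):
    """Aggregate (category, is_correct) pairs from an LLM judge.

    Returns (overall_accuracy, per_category_accuracy_dict).
    """
    totals, hits = Counter(), Counter()
    for category, correct in records:
        totals[category] += 1
        hits[category] += int(correct)
    overall = sum(hits.values()) / sum(totals.values())
    per_category = {c: hits[c] / totals[c] for c in totals}
    return overall, per_category
```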
## Reproduce This Fine-Tune
The full pipeline (training, evaluation, and GGUF export) is available as a free public Colab notebook:
No paid GPU required. Runs on a free T4.
## Related
| Resource | Link |
|---|---|
| Base model | LiquidAI/LFM2.5-VL-1.6B |
| LoRA adapters | rajofearth/lfm-ucf-unsloth |
| Training notebook | Google Colab |
| Dataset | tanzzpatil/ucf-crime-small |
Developed by: rajofearth · Created with Unsloth + Google Colab (free tier).