Model Card for Villanova-2B-VL-2603-GGUF


Villanova-2B-VL-2603 is a fully open, multilingual Vision-Language Model developed by Villanova.AI. Part of the Villanova project, it extends our text-only Villanova-2B-2603 to visual understanding while preserving native support for five European languages. All model weights, training data sources, and training details are publicly released.

This repository contains model files in GGUF format for the VillanovaAI/Villanova-2B-VL-2603 model.


Model Family

Villanova-2B-Base-2603 β€” Base model (4.4T tokens)
 ↳ Villanova-2B-2603 β€” SFT / Instruct
  ↳ Villanova-2B-2603-GGUF β€” Quantized
 ↳ Villanova-2B-VL-2603 β€” Vision-Language Instruct
  ↳ Villanova-2B-VL-2603-GGUF β€” Quantized β€” πŸ“ This model

Villanova-2B-Base-2512-Preview β€” Base model (2.2T tokens) (previous version, not recommended)
 ↳ Villanova-2B-2512-Preview β€” SFT / Instruct (previous version, not recommended)


About GGUF

GGUF is a binary file format introduced by the llama.cpp project for storing and distributing LLM weights. It is designed for fast loading, portability, and efficient inference on consumer and edge hardware.
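To make the format concrete, here is a minimal reader-side sketch (not part of the release itself) that parses the fixed-size GGUF header fields β€” magic bytes, format version, tensor count, and metadata key/value count β€” following the public GGUF specification:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes.

    Layout (little-endian, per the GGUF spec):
      4 bytes  magic             b"GGUF"
      uint32   version           (3 at the time of writing)
      uint64   tensor_count
      uint64   metadata_kv_count
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Example with a synthetic header, so no real model file is needed:
header = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(header))
# {'version': 3, 'tensor_count': 291, 'metadata_kv_count': 24}
```

After this header come the metadata key/value pairs (model hyperparameters, tokenizer data) and the tensor descriptors, which is why a single GGUF file is self-contained for inference.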

Quick Usage with llama.cpp

You can run this model directly with the llama-cli tool (part of llama.cpp). The -hf flag downloads the requested quantization from Hugging Face and caches it locally.

To run the model with the Q8_0 quantization:

llama-cli -hf VillanovaAI/Villanova-2B-VL-2603-GGUF:Q8_0

The same model reference also works with llama-server, which exposes an OpenAI-compatible HTTP API.
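When choosing between the available quantizations, file size scales with bits per weight: Q8_0 stores roughly 8.5 bits per weight (8-bit values plus per-block scales), while 16-bit weights take 16. A back-of-envelope sketch, using the nominal 2B parameter count from this card (real files differ slightly because of metadata and unquantized tensors):

```python
def gguf_size_estimate(n_params: float, bits_per_weight: float) -> float:
    """Approximate model file size in decimal gigabytes.

    Ignores file metadata and any tensors kept at higher precision,
    so real GGUF files are somewhat larger than this estimate.
    """
    return n_params * bits_per_weight / 8 / 1e9

N = 2e9  # nominal parameter count from the model card
print(f"Q8_0  : ~{gguf_size_estimate(N, 8.5):.2f} GB")
print(f"16-bit: ~{gguf_size_estimate(N, 16):.2f} GB")
```

This is why the 8-bit file is a little over half the size of the 16-bit one, at a small cost in fidelity.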
Model Details

Format: GGUF
Model size: 2B params
Architecture: llama
Available quantizations: 8-bit, 16-bit