What is this?

This is the Colox-v1 GGUF LoRA adapter.

How can I use this?

This adapter is compatible with any service that supports loading a GGUF LoRA adapter on top of a base model (e.g., llama.cpp, Ollama, etc.), as in the sketch below.
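
A minimal sketch using the llama-cpp-python bindings; the GGUF file names below are placeholders for wherever you have downloaded the base model and this adapter:

```python
from llama_cpp import Llama

# Load the base model GGUF and apply the LoRA adapter on top of it.
# Both file names are placeholders; point them at your local files.
llm = Llama(
    model_path="Meta-Llama-3.1-8B-Instruct.Q8_0.gguf",  # base model GGUF (placeholder name)
    lora_path="Colox-v1-LoRA-F16.gguf",                 # this adapter (placeholder name)
    n_ctx=4096,
)

# Chat with the adapted model.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```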

What is the base model?

Llama 3.1 8B Instruct

How is this LoRA adapter quantized?

This adapter has been quantized to F16.

What are the model details?

GGUF format, llama architecture, 41.9M parameters, stored in 16-bit (F16).