nhant/fingpt-mt_llama3-8b_lora-F32-GGUF

This LoRA adapter was converted to GGUF format from FinGPT/fingpt-mt_llama3-8b_lora via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
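
The same conversion can also be run locally with llama.cpp's convert_lora_to_gguf.py script. A minimal sketch, assuming the base model and the adapter have been downloaded to the illustrative local paths below:

# paths are hypothetical; point them at your local copies of the adapter and base model
python convert_lora_to_gguf.py ./fingpt-mt_llama3-8b_lora \
    --base ./Meta-Llama-3-8B \
    --outtype f32 \
    --outfile fingpt-mt_llama3-8b_lora-f32.gguf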

Use with llama.cpp

# with cli
llama-cli -m base_model.gguf --lora fingpt-mt_llama3-8b_lora-f32.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora fingpt-mt_llama3-8b_lora-f32.gguf (...other args)

To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
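
As one illustration of per-request control, the llama.cpp server also accepts a lora field in the request body to adjust the adapter's scale at inference time. A minimal sketch, assuming the server started above is listening on the default port 8080 (the prompt is illustrative):

curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{
  "prompt": "Classify the sentiment of this headline: Shares surge after earnings beat.",
  "n_predict": 64,
  "lora": [{"id": 0, "scale": 1.0}]
}'

Per the server documentation, a scale of 0.0 effectively disables the adapter for that request, which is useful for comparing base-model and adapter outputs side by side.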

Model details

Format: GGUF (LoRA adapter)
Adapter size: 3.41M params
Architecture: llama
Precision: 32-bit (F32)