Qalb-1.0-8B-Instruct - GGUF

This repository contains the GGUF version of the enstazao/Qalb-1.0-8B-Instruct model. GGUF is the binary model file format used by llama.cpp and compatible runtimes for efficient local inference.

Origin

This model was converted from its original Hugging Face format using the llama.cpp project.

Purpose

Qalb-1.0-8B-Instruct is a general-purpose large language model, suitable for various natural language processing tasks, including text generation, question answering, and summarization.

How to Use with Ollama

To use this GGUF model with Ollama, you can run the following command:

ollama run hf.co/ReySajju742/Qalb-1.0-gguf

This command will automatically download and set up the model for use with Ollama.
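Once the model is pulled, it can also be queried programmatically. Below is a minimal sketch using only the Python standard library, assuming an Ollama server is running locally on its default port (11434); the prompt text is illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the response text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires `ollama run hf.co/ReySajju742/Qalb-1.0-gguf` to have been run first):
# print(generate("hf.co/ReySajju742/Qalb-1.0-gguf", "Summarize GGUF in one sentence."))
```

Setting `"stream": False` returns the full completion in a single JSON response instead of a token-by-token stream.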

Potential Enhancements

  • Quantization: This model is currently available in f16 format. For a smaller file and potentially faster inference with minimal quality loss, consider quantizing it further (e.g., to Q4_K_M) using the llama.cpp quantization tools.
  • LoRA Adapters: The llama.cpp ecosystem also supports LoRA (Low-Rank Adaptation) adapters, which adapt the model to specific tasks or datasets without retraining or reconverting the full model.

For more details on these enhancements, please refer to the llama.cpp GitHub repository.
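As a sketch of the quantization step, the helper below builds the command line for llama.cpp's quantization tool. The binary name (`llama-quantize`, from a recent llama.cpp build) and the file names are assumptions for illustration.

```python
def quantize_cmd(src: str, dst: str, qtype: str = "Q4_K_M") -> list[str]:
    """Build a llama.cpp quantization command.

    Assumes the `llama-quantize` binary from a recent llama.cpp build;
    older builds named it `quantize`.
    """
    return ["./llama-quantize", src, dst, qtype]


# Hypothetical file names, for illustration only; run the printed command
# in a shell on a machine where llama.cpp has been built.
print(" ".join(quantize_cmd("Qalb-1.0-8B-Instruct-f16.gguf",
                            "Qalb-1.0-8B-Instruct-Q4_K_M.gguf")))
```

Q4_K_M is a common default trade-off between file size and output quality; other types (e.g., Q8_0, Q5_K_M) can be passed as the third argument.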

Reproduction Instructions

The GGUF conversion process for this model involved the following steps:

  1. Downloading the Hugging Face Model: The original enstazao/Qalb-1.0-8B-Instruct model was downloaded from Hugging Face.
  2. Cloning llama.cpp: The llama.cpp repository was cloned to access its conversion tools.
  3. Converting to GGUF: The downloaded Hugging Face model was converted to the GGUF format (f16) using llama.cpp's convert_hf_to_gguf.py script.
  4. Uploading to Hugging Face: The resulting GGUF file was then uploaded to this repository (ReySajju742/Qalb-1.0-gguf).

This entire process can be reproduced using the provided Colab notebook, which automates these steps.
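The steps above can be sketched as the two commands below. The helpers only build the command lines; directory and output file names are hypothetical, and `huggingface-cli` is assumed to be available from the huggingface_hub package.

```python
MODEL_ID = "enstazao/Qalb-1.0-8B-Instruct"


def download_cmd(model_id: str, local_dir: str) -> list[str]:
    """Step 1: download the original model (huggingface-cli ships with huggingface_hub)."""
    return ["huggingface-cli", "download", model_id, "--local-dir", local_dir]


def convert_cmd(model_dir: str, out_file: str) -> list[str]:
    """Step 3: convert to GGUF at f16 (convert_hf_to_gguf.py ships with llama.cpp)."""
    return ["python", "llama.cpp/convert_hf_to_gguf.py", model_dir,
            "--outfile", out_file, "--outtype", "f16"]


# Print the commands to run in a shell (after cloning llama.cpp, step 2):
for cmd in (download_cmd(MODEL_ID, "qalb-hf"),
            convert_cmd("qalb-hf", "Qalb-1.0-8B-Instruct-f16.gguf")):
    print(" ".join(cmd))
```

The resulting .gguf file can then be uploaded to a Hugging Face repository (step 4) via the web UI or `huggingface-cli upload`.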

GGUF Details

  • Model size: 8B params
  • Architecture: llama
  • Precision: 16-bit (f16)
