Llama-3.2-1B-Instruct Q4_K_M (Big-Endian)

Big-endian GGUF of meta-llama/Llama-3.2-1B-Instruct for IBM AIX and other big-endian POWER systems.

Why Big-Endian?

The GGUF format stores all metadata and tensor data in little-endian byte order. On big-endian hosts such as AIX on POWER or z/OS, llama.cpp detects the byte-order mismatch and refuses to load a stock little-endian file. This model has already been converted to big-endian, so it loads directly with no on-device conversion step.
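If you need to produce a big-endian file yourself from the original little-endian quantization, llama.cpp ships a Python byte-swapping script in its gguf-py package. The sketch below is an assumption-laden example: the script's location has moved between llama.cpp releases, and support for byte-swapping quantized tensor types such as Q4_K has been added over time, so check the version you have checked out.

# Work on a copy: the script rewrites the file in place
cp Llama-3.2-1B-Instruct-Q4_K_M.gguf Llama-3.2-1B-Instruct-Q4_K_M-be.gguf
python3 gguf-py/gguf/scripts/gguf_convert_endian.py Llama-3.2-1B-Instruct-Q4_K_M-be.gguf big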

Model Details

  • Parameters: 1.24B
  • Architecture: LLaMA transformer, 16 layers
  • Quantization: Q4_K_M
  • Context length: 131,072 tokens
  • File: Llama-3.2-1B-Instruct-Q4_K_M-be.gguf (770 MB)
  • Endianness: Big-endian
  • Source model: meta-llama/Llama-3.2-1B-Instruct
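To confirm the byte order of a downloaded file, you can inspect the 32-bit version field that follows the 4-byte GGUF magic at the start of the file. A small check using only POSIX od options (so it should also work on AIX), assuming GGUF format version 3, the current revision:

# Dump the 4-byte GGUF version field at offset 4
od -A n -t x1 -j 4 -N 4 Llama-3.2-1B-Instruct-Q4_K_M-be.gguf
# Big-endian file (this one):  00 00 00 03
# Little-endian file:          03 00 00 00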

Performance on IBM POWER9 (AIX 7.3)

Tested at 16 threads, SMT-2:

  • Generation: 9.03 tok/s
  • Prompt processing: 2.98 tok/s
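To collect comparable numbers on your own hardware, llama.cpp's llama-bench tool reports prompt-processing (pp) and generation (tg) throughput separately. A sketch, assuming the AIX build below also produces the standard llama-bench binary in build/bin:

# Benchmark a 512-token prompt and 128 generated tokens at 16 threads
./build/bin/llama-bench -m Llama-3.2-1B-Instruct-Q4_K_M-be.gguf -t 16 -p 512 -n 128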

Quick Start (AIX)

git clone https://gitlab.com/librepower/llama-aix.git
cd llama-aix

# Fetch the upstream llama.cpp sources and build them for AIX 7.3
./scripts/fetch_upstream.sh && ./scripts/build_aix_73.sh

# Download the big-endian GGUF from Hugging Face
wget https://huggingface.co/librepowerai/Llama-3.2-1B-Instruct-Q4_K_M-BE/resolve/main/Llama-3.2-1B-Instruct-Q4_K_M-be.gguf

# Generate 256 tokens with 16 threads
./build/bin/llama-simple -m Llama-3.2-1B-Instruct-Q4_K_M-be.gguf -n 256 -t 16 "Your prompt"
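For more than one-off prompts, the same build tree normally also contains llama-server, which exposes an OpenAI-compatible HTTP API. A sketch, assuming the AIX build produced that binary and that port 8080 is free:

# Start the server on localhost
./build/bin/llama-server -m Llama-3.2-1B-Instruct-Q4_K_M-be.gguf -t 16 --host 127.0.0.1 --port 8080

# From another shell, send a chat request
curl http://127.0.0.1:8080/v1/chat/completions -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello from AIX"}],"max_tokens":64}'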

Related


LibrePower: Unlocking IBM Power Systems through open source. https://librepower.org | hello@librepower.org
