# H2O-Danube2-1.8B-Chat Q4_K_M – Big-Endian
Big-endian GGUF of h2oai/h2o-danube2-1.8b-chat for IBM AIX and other big-endian POWER systems.
## Why Big-Endian?
GGUF files store their data in little-endian byte order. On big-endian systems (AIX on POWER, z/OS on IBM Z), llama.cpp detects the byte-order mismatch and refuses to load the file. This pre-converted big-endian model loads directly, with no conversion step needed on the target machine.
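As a rough illustration of how the mismatch shows up, the sketch below sniffs a GGUF file's byte order. It assumes only that a GGUF file begins with the 4-byte magic `GGUF` followed by a small `uint32` version field; reading that field with the wrong byte order produces an implausibly large number. This is a heuristic sketch, not llama.cpp's actual detection logic.

```python
import struct

def gguf_endianness(path: str) -> str:
    """Heuristically guess a GGUF file's byte order.

    Assumption: the file starts with the magic b"GGUF" followed by a
    uint32 version stored in the file's native byte order. A real
    version number is small, so the wrong byte order yields a huge value.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        raw = f.read(4)
    little = struct.unpack("<I", raw)[0]
    big = struct.unpack(">I", raw)[0]
    # Whichever interpretation gives the smaller number is the plausible one.
    return "little" if little < big else "big"
```

Running this on the file shipped here should report `big`; on a stock GGUF download it should report `little`.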
## Model Details
| Field | Value |
|---|---|
| Parameters | 1.8B |
| Architecture | LLaMA-style transformer (Mistral variant) |
| Quantization | Q4_K_M |
| Context length | 8,192 tokens |
| File | h2o-danube2-1.8b-chat-Q4_K_M-be.gguf (1.04 GB) |
| Endianness | Big-endian |
| Source | h2oai/h2o-danube2-1.8b-chat |
## Performance on IBM POWER9 (AIX 7.3)
Tested at 16 threads, SMT-2:
- Generation: 18.59 tok/s
- Prompt processing: 6.54 tok/s
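For planning purposes, the numbers above translate into a simple latency estimate. The sketch below assumes both rates stay constant across the request and ignores model load time, which is an assumption rather than a measured result.

```python
# Throughput figures from the POWER9 benchmark above.
GEN_TPS = 18.59   # generation tokens/s
PP_TPS = 6.54     # prompt-processing tokens/s

def estimate_seconds(prompt_tokens: int, gen_tokens: int) -> float:
    """Rough wall-clock estimate for one request, assuming constant rates."""
    return prompt_tokens / PP_TPS + gen_tokens / GEN_TPS
```

For example, a 200-token prompt with 256 generated tokens works out to roughly 44 seconds on this hardware.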
## Quick Start (AIX)
```shell
git clone https://gitlab.com/librepower/llama-aix.git
cd llama-aix
./scripts/fetch_upstream.sh && ./scripts/build_aix_73.sh
wget https://huggingface.co/librepowerai/H2O-Danube2-1.8B-Chat-Q4_K_M-BE/resolve/main/h2o-danube2-1.8b-chat-Q4_K_M-be.gguf
./build/bin/llama-simple -m h2o-danube2-1.8b-chat-Q4_K_M-be.gguf -n 256 -t 16 "Your prompt"
```
## Related
LibrePower – Unlocking IBM Power Systems through open source. https://librepower.org | hello@librepower.org