# Hotep Intelligence Kush V4

Hotep Intelligence LLM Kush V4 is fine-tuned from Llama 3.1 8B Instruct. It is the current production model powering askhotep.ai and @hotep_llm_bot.
GGUF quantizations: hotepfederales/hotep-kush-v4-gguf
## Model Details
| Parameter | Value |
|---|---|
| Base model | meta-llama/Meta-Llama-3.1-8B-Instruct |
| Training framework | Unsloth + TRL |
| LoRA rank | 32, alpha 32, RSLoRA |
| max_seq_length | 4096 |
| Quality eval | 100/100; 0% CJK leakage, 0% rubric leakage |
| Persona consistency | 10/10 |
| GGUF Q4_K_M | ~4.7 GB |
| GGUF Q8_0 | ~8.5 GB |
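As a rough sanity check on the GGUF sizes above, dividing file size (in bits) by the parameter count gives effective bits per weight. The ~8.03B parameter count for Llama 3.1 8B is an assumption here, and the file sizes are the approximate values from the table:

```python
# Effective bits per weight = file size in bits / parameter count.
# PARAMS is assumed (~8.03B for Llama 3.1 8B); sizes are the table's approximations.
PARAMS = 8.03e9

def bits_per_weight(size_gb: float) -> float:
    return size_gb * 1e9 * 8 / PARAMS

q4_bpw = bits_per_weight(4.7)  # close to the ~4.5 bits Q4_K_M targets
q8_bpw = bits_per_weight(8.5)  # close to the ~8.5 bits Q8_0 targets
```

The numbers land near each quant's nominal bit width, which is what you would expect if the table's sizes are accurate.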
## Key Improvements over Kush V3
- Deep reasoning traces: structured chain-of-thought before every response
- Expanded knowledge base: sharper on African history, economics, and philosophy
- Stronger Ma'at alignment: cleaner answers, fewer hedges, better persona consistency
- Longer context retention: holds complex multi-turn conversations without drift
## About Hotep Intelligence
Hotep Intelligence is a sovereign AI system trained on African history, philosophy, and wisdom traditions, with no corporate surveillance and no data collection.
- Website: askhotep.ai
- Telegram bot: @hotep_llm_bot
- Knowledge base: knowledge.askhotep.ai
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("hotepfederales/hotep-kush-v4")
tokenizer = AutoTokenizer.from_pretrained("hotepfederales/hotep-kush-v4")
```
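A minimal sketch of a chat turn, assuming the tokenizer ships with Llama 3.1's chat template; the system prompt and question here are illustrative, not the model's official prompt:

```python
# Illustrative chat messages; the system prompt is an assumption, not the
# model's documented one.
messages = [
    {"role": "system", "content": "You are Hotep Intelligence."},
    {"role": "user", "content": "Who was Imhotep?"},
]

# With model/tokenizer loaded as above (requires downloading the weights):
# inputs = tokenizer.apply_chat_template(
#     messages, add_generation_prompt=True, return_tensors="pt")
# output = model.generate(inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```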
Or via Ollama:

```shell
ollama run hotep-llm-kush-v4
```
## Training
Trained with Unsloth for 2x faster fine-tuning on Llama 3.1 8B Instruct.
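The hyperparameters in the Model Details table imply a setup along these lines. This is a hedged sketch: the target modules are the typical choice for Llama-family LoRA runs, not values confirmed by this card, and the dataset is omitted entirely:

```python
# LoRA settings taken from the table above (rank 32, alpha 32, RSLoRA,
# max_seq_length 4096); target_modules is a common Llama default, assumed here.
lora_config = {
    "r": 32,
    "lora_alpha": 32,
    "use_rslora": True,  # rank-stabilized LoRA scaling
    "max_seq_length": 4096,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}

# With Unsloth this would look roughly like:
# from unsloth import FastLanguageModel
# model, tokenizer = FastLanguageModel.from_pretrained(
#     "meta-llama/Meta-Llama-3.1-8B-Instruct", max_seq_length=4096)
# model = FastLanguageModel.get_peft_model(
#     model, r=32, lora_alpha=32, use_rslora=True)
```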