# Cisco NX-AI GGUF Q5_K_M

5-bit quantized (Q5_K_M): a balanced quality/size trade-off, ~0.73 GB.
## Download

```bash
wget https://huggingface.co/Renugadevi82/cisco-nx-ai-gguf-q5_k_m/resolve/main/cisco-nx-ai-q5_k_m.gguf
```
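After downloading, a quick sanity check is to read the file's magic bytes: every GGUF file begins with the 4-byte ASCII magic `GGUF`. A minimal sketch (the helper name is illustrative, not part of any library):

```python
def looks_like_gguf(path: str) -> bool:
    """Return True if the file begins with the GGUF magic bytes ("GGUF")."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Illustrative usage with the file downloaded above:
# looks_like_gguf("cisco-nx-ai-q5_k_m.gguf")
```

This catches truncated or HTML-error-page downloads before you hand the file to llama.cpp.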
## Usage with llama.cpp

```bash
./llama-cli -m cisco-nx-ai-q5_k_m.gguf -p "Configure VLAN 100" -n 100
```
## Usage with llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama(model_path="cisco-nx-ai-q5_k_m.gguf")
output = llm("Configure VLAN 100", max_tokens=100)
print(output["choices"][0]["text"])
```
File size: 0.73 GB