Runtime error

Exit code: 1. Reason:

Downloading model...
qwen2.5-0.5b-instruct-q4_k_m.gguf: 100%|██████████| 491M/491M [00:04<00:00, 101MB/s]
Model path: /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-0.5B-Instruct-GGUF/snapshots/9217f5db79a29953eb74d5343926648285ec7e67/qwen2.5-0.5b-instruct-q4_k_m.gguf
Traceback (most recent call last):
  File "/app/app.py", line 27, in <module>
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
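The download succeeds, so the failure is purely a missing Python package: the `llama_cpp` module is provided by the `llama-cpp-python` package (installed with `pip install llama-cpp-python`), which evidently is not present in the container image. A minimal pre-flight check, assuming a standard pip-managed environment, can surface this before the model download starts rather than after:

```python
import importlib.util
import sys

# Check that the llama_cpp module is importable before downloading the model.
# "llama_cpp" is the import name; the PyPI package is "llama-cpp-python".
def check_llama_cpp() -> bool:
    return importlib.util.find_spec("llama_cpp") is not None

if not check_llama_cpp():
    print(
        "llama_cpp is not installed in this environment; "
        "add 'llama-cpp-python' to the image (e.g. pip install llama-cpp-python).",
        file=sys.stderr,
    )
```

In a Dockerfile-based setup the usual fix is to add `llama-cpp-python` to `requirements.txt` (or the `pip install` layer) and rebuild the image, since installing it at runtime would repeat the (often slow, compiled) install on every container start.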
