Runtime error

Exit code: 1. Reason:
roup = 16
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 9B
print_info: model params = 8.95 B
print_info: general.name = Unsloth_Gguf__7Xi9Wwh
print_info: vocab type = BPE
print_info: n_vocab = 248320
print_info: n_merges = 247587
print_info: BOS token = 11 ','
print_info: EOS token = 248046 '<|im_end|>'
print_info: EOT token = 248046 '<|im_end|>'
print_info: PAD token = 248055 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 248060 '<|fim_prefix|>'
print_info: FIM SUF token = 248062 '<|fim_suffix|>'
print_info: FIM MID token = 248061 '<|fim_middle|>'
print_info: FIM PAD token = 248063 '<|fim_pad|>'
print_info: FIM REP token = 248064 '<|repo_name|>'
print_info: FIM SEP token = 248065 '<|file_sep|>'
print_info: EOG token = 248044 '<|endoftext|>'
print_info: EOG token = 248046 '<|im_end|>'
print_info: EOG token = 248063 '<|fim_pad|>'
print_info: EOG token = 248064 '<|repo_name|>'
print_info: EOG token = 248065 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true, direct_io = false)
llama_model_load: error loading model: make_cpu_buft_list: no CPU backend found
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model '/home/user/.cache/huggingface/hub/models--Jackrong--Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF/snapshots/db7a3a938b3b8ce22857b3151e2a7f12ddbf823c/Qwen3.5-9B.Q4_K_M.gguf'
srv load_model: failed to load model, '/home/user/.cache/huggingface/hub/models--Jackrong--Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF/snapshots/db7a3a938b3b8ce22857b3151e2a7f12ddbf823c/Qwen3.5-9B.Q4_K_M.gguf'
srv operator(): operator(): cleaning up before exit...
main: exiting due to model loading error
