runtime error
Exit code: 1. Reason:
Disabling PyTorch because PyTorch >= 2.4 is required but found 2.2.0
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
Traceback (most recent call last):
  File "/app/main.py", line 37, in <module>
    "qwen": ModelHandler("Qwen/Qwen2.5-Coder-1.5B"),
  File "/app/model_handler.py", line 7, in __init__
    self.model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
  File "/opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2035, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2021, in requires_backends
    raise ImportError("".join(failed))
ImportError: AutoModelForCausalLM requires the PyTorch library but it was not found in your environment. Check out the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.