runtime error

Exit code: 1. Reason:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from train import train as train_fn
  File "/home/user/app/train.py", line 180, in <module>
    train("AAPL", epochs=50)
  File "/home/user/app/train.py", line 102, in train
    optimizer.step()
  File "/usr/local/lib/python3.10/site-packages/torch/optim/optimizer.py", line 517, in wrapper
    out = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/optim/optimizer.py", line 82, in _use_grad
    ret = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/optim/adam.py", line 221, in step
    self._cuda_graph_capture_health_check()
  File "/usr/local/lib/python3.10/site-packages/torch/optim/optimizer.py", line 460, in _cuda_graph_capture_health_check
    capturing = torch.cuda.is_current_stream_capturing()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/graphs.py", line 52, in is_current_stream_capturing
    return _cuda_isCurrentStreamCapturing()
torch.AcceleratorError: CUDA error: no CUDA-capable device is detected
Search for `cudaErrorNoDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Device-side assertions were explicitly omitted for this error check; the error probably arose while initializing the DSA handlers.
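The crash happens because the app tries to initialize CUDA on a machine with no GPU: `train.py` runs `train("AAPL", epochs=50)` at import time (line 180, in `<module>`), and the Adam `optimizer.step()` touches `torch.cuda` internals. A minimal, hedged sketch of a device-agnostic setup follows; the `nn.Linear` model and tensors here are illustrative stand-ins, not the app's actual `train()` code:

```python
import torch
import torch.nn as nn

# Fall back to CPU when no CUDA-capable device is detected,
# instead of assuming a GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative stand-in for the app's model; the real one lives in train.py.
model = nn.Linear(4, 1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Keep data on the same device as the model.
x = torch.randn(8, 4, device=device)
y = torch.randn(8, 1, device=device)

loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # runs on CPU when no GPU exists; no CUDA init is forced
print(device.type)
```

Separately, the traceback shows training firing as a side effect of `from train import train` in `app.py`; guarding the call in `train.py` with `if __name__ == "__main__":` would keep the import from launching a full training run.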
