lfj-code/transfer/code/CCFM/logs/ccfm_v2_5404727.out
==========================================
Job ID: 5404727
Job Name: ccfm_v2
Start: Sun Mar 15 04:49:31 JST 2026
Node: b0019
GPU: 4x NVIDIA H100, 95830 MiB
Run: CCFM v2 (KV fix + loss fix + EMA + RK4 + logit-normal + warmup)
==========================================
The following values were not passed to `accelerate launch` and had defaults used instead:
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in `--num_processes=1`.
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
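As the warning above suggests, the launcher fell back to defaults for several flags. A minimal sketch of an explicit invocation that would silence it, assuming all four GPUs are intended (the script path is taken from the failure summary at the end of this log):

```
accelerate launch \
  --num_processes=4 \
  --num_machines=1 \
  --mixed_precision=no \
  --dynamo_backend=no \
  scripts/run_cascaded.py
```

Running `accelerate config` once and reusing the saved profile achieves the same thing.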
[W315 04:50:08.654423221 socket.cpp:764] [c10d] The client socket cannot be initialized to connect to [localhost]:29500 (errno: 97 - Address family not supported by protocol).
/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/scGPT/scgpt/model/model.py:21: UserWarning: flash_attn is not installed
warnings.warn("flash_attn is not installed")
WARNING:accelerate.utils.other:[RANK 0] Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
CascadedFlowConfig(model_type='cascaded', batch_size=48, ntoken=512, d_model=128, nhead=8, nlayers=4, lr=5e-05, steps=200000, eta_min=1e-06, devices='1', test_only=False, data_name='norman', perturbation_function='crisper', noise_type='Gaussian', poisson_alpha=0.8, poisson_target_sum=-1, print_every=10000, mode='predict_y', result_path='./result', fusion_method='differential_perceiver', infer_top_gene=1000, n_top_genes=5000, checkpoint_path='', gamma=0.5, split_method='additive', use_mmd_loss=True, fold=1, use_negative_edge=True, topk=30, scgpt_dim=512, bottleneck_dim=128, latent_weight=1.0, choose_latent_p=0.4, target_std=1.0, dh_depth=2, warmup_batches=200, ema_decay=0.9999, t_sample_mode='logit_normal', t_expr_mean=0.0, t_expr_std=1.0, t_latent_mean=0.0, t_latent_std=1.0, warmup_steps=2000, scgpt_model_dir='transfer/data/scGPT_pretrained', scgpt_max_seq_len=1200, scgpt_cache_path='', latent_steps=20, expr_steps=20, ode_method='rk4')
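The config selects `t_sample_mode='logit_normal'` with `t_expr_mean=0.0`, `t_expr_std=1.0`. The repo's actual sampler is not shown in this log; a common reading of those fields (a sketch, with a hypothetical helper name) is to draw a Gaussian and squash it through a sigmoid, concentrating flow-matching timesteps away from the endpoints:

```python
import numpy as np

def sample_t_logit_normal(batch_size, mean=0.0, std=1.0, rng=None):
    """Draw flow-matching timesteps t in (0, 1) from a logit-normal distribution.

    z ~ N(mean, std^2), t = sigmoid(z). With mean=0, std=1 (the values in the
    config above), mass concentrates around t=0.5 and thins out near 0 and 1.
    """
    rng = rng or np.random.default_rng()
    z = rng.normal(loc=mean, scale=std, size=batch_size)
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid keeps t strictly inside (0, 1)
```

Shifting `mean` biases sampling toward one endpoint; shrinking `std` tightens the distribution around `sigmoid(mean)`.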
Converted var_names to gene symbols, sample: ['RP11-34P13.8', 'RP11-54O7.3', 'SAMD11', 'PERM1', 'HES4']
Warning: ctrl is not in the gene names
/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/scDFM/src/data_process/data.py:185: ImplicitModificationWarning: Trying to modify attribute `.obs` of view, initializing view as actual.
self.adata.obs['condition'] = self.adata.obs['condition'].str.replace('ctrl', 'control')
/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/scanpy/preprocessing/_highly_variable_genes.py:806: ImplicitModificationWarning: Trying to modify attribute `._uns` of view, initializing view as actual.
adata.uns["hvg"] = {"flavor": flavor}
/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/scDFM/src/data_process/data.py:277: ImplicitModificationWarning: Trying to modify attribute `.obs` of view, initializing view as actual.
self.adata.obs['perturbation_covariates'] = self.adata.obs[perturbation_covariates].apply(lambda x: '+'.join(x), axis=1)
/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/scDFM/src/data_process/data.py:329: ImplicitModificationWarning: Trying to modify attribute `.obs` of view, initializing view as actual.
self.adata.obs['perturbation_covariates'] = self.adata.obs[perturbation_covariates].apply(lambda x: '+'.join(x), axis=1)
##### loading vocab from file #####
/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/CCFM/src/data/scgpt_extractor.py:109: UserWarning: FrozenScGPTExtractor: 498/5035 HVG genes not found in scGPT vocab, will use zero vectors.
warnings.warn(
Loaded 159/163 pretrained parameters
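The "Loaded 159/163 pretrained parameters" line suggests the checkpoint is loaded non-strictly, keeping only entries whose name and shape match the current model. A minimal sketch of that filtering logic (hypothetical helper; plain dicts of numpy arrays stand in for torch state dicts):

```python
import numpy as np

def load_matching(model_state, ckpt_state):
    """Copy checkpoint entries into model_state where name and shape agree.

    Returns (model_state, n_loaded, n_total) so the caller can print a
    "Loaded n/m pretrained parameters" message like the one in this log.
    """
    loaded = 0
    for name, param in model_state.items():
        src = ckpt_state.get(name)
        if src is not None and src.shape == param.shape:
            model_state[name] = src.copy()  # take the pretrained value
            loaded += 1
    return model_state, loaded, len(model_state)
```

With torch, `model.load_state_dict(filtered, strict=False)` plays the same role; the 4 unloaded parameters here would be the ones missing from, or shape-mismatched with, the checkpoint.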
  0%|          | 0/200000 [00:00<?, ?it/s]/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/nn/modules/transformer.py:531: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
  output = torch._nested_tensor_from_mask(
/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/nn/modules/linear.py:134: UserWarning: gemm_and_bias error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasLtMatmul with transpose_mat1 1 transpose_mat2 0 m 128 n 48 k 256 mat1_ld 256 mat2_ld 256 result_ld 128 abType 0 cType 0 computeType 68 scaleType 0. Will attempt to recover by calling unfused cublas path. (Triggered internally at /pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1765.)
return F.linear(input, self.weight, self.bias)
/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/nn/modules/linear.py:134: UserWarning: gemm_and_bias error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasLtMatmul with transpose_mat1 1 transpose_mat2 0 m 128 n 48 k 128 mat1_ld 128 mat2_ld 128 result_ld 128 abType 0 cType 0 computeType 68 scaleType 0. Will attempt to recover by calling unfused cublas path. (Triggered internally at /pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1765.)
return F.linear(input, self.weight, self.bias)
/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/nn/functional.py:6637: UserWarning: gemm_and_bias error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasLtMatmul with transpose_mat1 1 transpose_mat2 0 m 128 n 96 k 128 mat1_ld 128 mat2_ld 128 result_ld 128 abType 0 cType 0 computeType 68 scaleType 0. Will attempt to recover by calling unfused cublas path. (Triggered internally at /pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1765.)
attn_output = linear(attn_output, out_proj_weight, out_proj_bias)
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/CCFM/scripts/run_cascaded.py", line 372, in <module>
[rank2]: ema_p.lerp_(model_p.data, 1 - decay)
[rank2]: ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cpu!
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/CCFM/scripts/run_cascaded.py", line 372, in <module>
[rank1]: ema_p.lerp_(model_p.data, 1 - decay)
[rank1]: ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/CCFM/scripts/run_cascaded.py", line 372, in <module>
[rank0]: ema_p.lerp_(model_p.data, 1 - decay)
[rank0]: ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
[rank3]: Traceback (most recent call last):
[rank3]: File "/home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/CCFM/scripts/run_cascaded.py", line 372, in <module>
[rank3]: ema_p.lerp_(model_p.data, 1 - decay)
[rank3]: ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
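The crash on every rank is the same: the EMA update `ema_p.lerp_(model_p.data, 1 - decay)` mixes a CPU shadow tensor with a cuda model parameter. One common fix (a sketch with a hypothetical helper name, assuming the EMA shadows are plain tensors held in a list, as the traceback suggests) is to move each shadow onto its parameter's device before the in-place lerp, which is the standard EMA update ema = decay*ema + (1-decay)*p:

```python
import torch

@torch.no_grad()
def ema_update(ema_params, model_params, decay=0.9999):
    """In-place EMA: ema <- decay * ema + (1 - decay) * param.

    ema_params must be a list so a shadow can be replaced by its on-device
    copy; after the first step every lerp_ runs with matching devices,
    avoiding the cuda-vs-cpu RuntimeError seen in this log.
    """
    for i, (ema_p, model_p) in enumerate(zip(ema_params, model_params)):
        if ema_p.device != model_p.device:
            ema_params[i] = ema_p = ema_p.to(model_p.device)
        ema_p.lerp_(model_p.data, 1 - decay)  # ema = decay*ema + (1-decay)*p
```

Equivalently, moving the entire EMA copy to the accelerator's device once, right after the model is prepared, avoids the per-step check.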
[rank0]:[W315 04:52:01.076850828 ProcessGroupNCCL.cpp:1553] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
0%| | 0/200000 [00:53<?, ?it/s]
W0315 04:52:07.763000 128 torch/distributed/elastic/multiprocessing/api.py:1010] Sending process 200 closing signal SIGTERM
W0315 04:52:07.765000 128 torch/distributed/elastic/multiprocessing/api.py:1010] Sending process 201 closing signal SIGTERM
W0315 04:52:07.765000 128 torch/distributed/elastic/multiprocessing/api.py:1010] Sending process 202 closing signal SIGTERM
E0315 04:52:08.130000 128 torch/distributed/elastic/multiprocessing/api.py:984] failed (exitcode: 1) local_rank: 3 (pid: 203) of binary: /home/hp250092/ku50001222/qian/aivc/lfj/stack_env/bin/python
Traceback (most recent call last):
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/bin/accelerate", line 8, in <module>
sys.exit(main())
~~~~^^
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/accelerate/commands/accelerate_cli.py", line 50, in main
args.func(args)
~~~~~~~~~^^^^^^
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/accelerate/commands/launch.py", line 1396, in launch_command
multi_gpu_launcher(args)
~~~~~~~~~~~~~~~~~~^^^^^^
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/accelerate/commands/launch.py", line 1023, in multi_gpu_launcher
distrib_run.run(args)
~~~~~~~~~~~~~~~^^^^^^
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/distributed/run.py", line 982, in run
elastic_launch(
~~~~~~~~~~~~~~~
config=config,
~~~~~~~~~~~~~~
entrypoint=cmd,
~~~~~~~~~~~~~~~
)(*cmd_args)
~^^^^^^^^^^^
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/distributed/launcher/api.py", line 170, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/hp250092/ku50001222/qian/aivc/lfj/stack_env/lib/python3.13/site-packages/torch/distributed/launcher/api.py", line 317, in launch_agent
raise ChildFailedError(
...<2 lines>...
)
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
scripts/run_cascaded.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2026-03-15_04:52:08
host : b0019
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 200)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2026-03-15_04:52:08
host : b0019
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 201)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2026-03-15_04:52:08
host : b0019
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 202)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2026-03-15_04:52:07
host : b0019
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 203)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
==========================================
Finished: Sun Mar 15 04:52:09 JST 2026
==========================================