Syntax error in dit_model.py prevents MagiHuman nodes from loading
ComfyUI cannot load any MagiHuman nodes due to a bug in dit_model.py: the weight-init context manager is selected but never actually entered.
/custom_nodes/ComfyUI_MagiHuman_fp8_ditFIX_nodes/inference/model/dit/dit_model.py
Change this:

```python
ctx = init_empty_weights if is_accelerate_available() else nullcontext
with nullcontext():
    model = DiTModel(model_config=model_config)
```

to this:

```python
ctx = init_empty_weights if is_accelerate_available() else nullcontext
with ctx():
    model = DiTModel(model_config=model_config)
```
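For context on why the one-word change matters: `ctx` selects the right context manager (accelerate's `init_empty_weights`, which builds the model on the meta device so no real weight memory is allocated), but the original code then entered `nullcontext()` unconditionally, so the selection was dead code. Here's a stdlib-only sketch of the pattern, with a hypothetical stand-in for `init_empty_weights`:

```python
from contextlib import contextmanager, nullcontext

ACCELERATE_AVAILABLE = True  # stand-in for is_accelerate_available()

@contextmanager
def init_empty_weights():
    # Stand-in for accelerate.init_empty_weights; the real one patches
    # module creation so parameters land on the "meta" device.
    yield "meta-device init"

# Select the context manager, then enter the one that was selected:
ctx = init_empty_weights if ACCELERATE_AVAILABLE else nullcontext
with ctx() as mode:
    pass  # DiTModel(model_config=model_config) would be built here

print(mode)  # → "meta-device init"
```

With the buggy `with nullcontext():`, `mode` would always be `None` and the empty-weights path would never run, even with accelerate installed.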
Yeah, I changed it last night but didn't add it to the commit; I'll be sure to do so.
I'm really new to this, bear with me ❤️
Were you able to get it running, by chance? I OOM from lack of memory. Just want to help out and get the model to the community early 🤷♂️
The fix worked for me, but when I run your example workflow with the model from your online repo (davinci_distilled_FINAL.safetensors), which isn't the same filename the workflow uses, I get this error:
RuntimeError: Attempted to call variable.set_data(tensor), but variable and tensor have incompatible tensor type.
File "F:\ComfyUI\execution.py", line 525, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\execution.py", line 334, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 168, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\execution.py", line 308, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "F:\ComfyUI\execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "F:\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\comfy_api\latest_io.py", line 1764, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\MagiHuman_node.py", line 248, in execute
video_lat, audio_lat,params=infer_magihuman(model,seed,latents,steps,sr_steps=50,offload=offload,offload_block_num=offload_block_num)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\entry.py", line 50, in infer_magihuman
latent_video, latent_audio,params=pipeline.run_offline(
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\pipeline.py", line 115, in run_offline
self.pre_model(sr_mode)
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\pipeline.py", line 70, in pre_model
self.sr_model = get_dit(self.config.sr_arch_config, self.config.engine_config, torch_type=torch.float8_e4m3fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\model\dit\dit_model.py", line 107, in get_dit
model = load_model_checkpoint(model, engine_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\infra\checkpoint\load_model_checkpoint.py", line 248, in load_model_checkpoint
load_sharded_safetensors_parallel_with_progress(
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\infra\checkpoint\load_model_checkpoint.py", line 175, in load_sharded_safetensors_parallel_with_progress
param_map[key].data = gpu_tensor.to(param_map[key].dtype)
^^^^^^^^^^^^^^^^^^^
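For what it's worth, the failing line at the bottom of that trace rebinds `param.data` directly, which trips `set_data`'s compatibility check when the loaded tensor's backing type doesn't match the parameter's. I don't know what fix the node intends, but a common workaround is an in-place `copy_`, which converts dtype/device instead of rebinding storage. This is a hedged sketch with ordinary float dtypes and made-up names, not the repo's code:

```python
import torch

def load_into(param_map: dict, key: str, loaded: torch.Tensor) -> None:
    # Copy the checkpoint tensor into the existing parameter in place;
    # copy_ casts dtype and moves device as needed, sidestepping the
    # set_data "incompatible tensor type" path entirely.
    with torch.no_grad():
        param_map[key].copy_(loaded)

params = {"w": torch.nn.Parameter(torch.zeros(2, 2))}  # float32
ckpt = torch.ones(2, 2, dtype=torch.float64)           # mismatched dtype
load_into(params, "w", ckpt)
print(params["w"].dtype)  # → torch.float32 (unchanged; values were cast)
```

Whether `copy_` also covers the fp8 weights this node loads would need testing against the actual checkpoint.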
I corrected the workflow on Hugging Face! Thank you for catching that; I focused mainly on GitHub commits last night.
Also, can I see an example you've generated? I'm just interested to see the model's fp8 quality, lol. I haven't had a chance to get home and test it.
> Were you able to get it running, by chance? I OOM from lack of memory. Just want to help out and get the model to the community early 🤷♂️
I worked on your repo for six hours last night, did hours' worth of bug patches, and tried to optimize memory usage, but I never could get it to run. After several hours I finally got the distill/base model running (super slow, offloading most of it to CPU, gahh), but then I would instantly OOM as soon as it moved to the super-resolution phase. And I'm on a 5090. I finally had to call it quits. I know Kijai has supposedly been working on it for days and still nothing is coming out, so either they're taking it easy and busy with other stuff, or this is a really difficult task.
Thank you for your honesty and response! I'm confident there's a larger issue at hand here. I thought the same thing about Kijai not releasing it yet; it makes sense that if he hasn't gotten it out, probably nobody is going to. I'll continue to mess with it.
So, as a slight update: did you edit the magicompiler settings in the video_generate.py file at all? I think it's the compiler not accepting my model. Once I clear that code to accept it, the workflow SHOULD run. That's an unlikely "should", but I'd love to know if you touched that at all.
It's the video_generate.py node I fixed, to run the correct block swapping from my model to the SR model. The magicompiler isn't the technical issue; it's how the VRAM is passed from the model to the SR model. The SR model was sitting on top of the regular model, which was never unloaded. At least that's what Claude is saying.
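If that diagnosis is right (the base model still resident when the SR model is built), the usual remedy is to release the base model's VRAM before constructing the SR model. A rough sketch of that handoff; the `pipeline` object and helper name are hypothetical, not the repo's API:

```python
import gc
import torch

def release_base_model(pipeline) -> None:
    # Hypothetical handoff: drop the base model and return its VRAM to
    # the allocator before building the super-resolution model.
    if getattr(pipeline, "model", None) is not None:
        pipeline.model.to("cpu")   # move weights off the GPU first
        pipeline.model = None      # drop the last strong reference
    gc.collect()                   # collect any lingering reference cycles
    if torch.cuda.is_available():
        torch.cuda.empty_cache()   # hand cached blocks back to the driver
```

Something like this would need to run between the base-resolution pass and wherever the SR model gets instantiated.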
I think I fixed it. I still OOM, but it requires 30 GB now. It's never made it this far for me, ever.
https://github.com/RealRebelAI/ComfyUI_MagiHuman_fp8_ditFIX_nodes
I'll try tomorrow, but I tested a few times. May I ask why I see my 64 GB of RAM filling up to 100% while my VRAM stays around 8% (on a 5090) during the KSampler? I couldn't reach the 0/8 step on the KSampler because I get the error about five minutes after starting the prompt; it's always working on something before that point. I am using the provided model, davinci_distilled_FINAL.safetensors. Here are the logs:
got prompt
After Max GPU memory allocated: 12.40 GB
After Max GPU memory allocated: 12.40 GB
infer False
[2026-04-01 22:32:10,975 - INFO] [Rank 0] Build dit model successfully
[2026-04-01 22:32:10,979 - INFO] [Rank 0] Loading checkpoint with safetensors format from pretrained_folder
[2026-04-01 22:34:04,626 - INFO] [Rank 0] Load Weight Missing Keys: []
[2026-04-01 22:34:04,646 - INFO] [Rank 0] Load Weight Unexpected Keys: []
[2026-04-01 22:34:04,646 - INFO] [Rank 0] Load checkpoint successfully
[2026-04-01 22:35:50,044 - INFO] [Rank 0] Load model successfully GPU 0 memory allocated: 0.32 GB, max_allocated: 11.55 GB, reserved: 0.41 GB, max_reserved: 11.75 GB
[2026-04-01 22:35:50,688 - INFO] [Rank 0] Begin init MagiEvaluator GPU 0 memory allocated: 0.32 GB, max_allocated: 11.55 GB, reserved: 0.41 GB, max_reserved: 11.75 GB
[2026-04-01 22:35:51,319 - INFO] [Rank 0] After init MagiEvaluator GPU 0 memory allocated: 0.32 GB, max_allocated: 11.55 GB, reserved: 0.41 GB, max_reserved: 11.75 GB
[2026-04-01 22:35:51,347 - INFO] [Rank 0] Using random audio, latent_audio: torch.Size([1, 51, 64])
[2026-04-01 22:35:51,348 - INFO] [Rank 0]
Time Elapsed: [0:00:00.019334] From [Step1: Prepare Latent Features (2026-04-01 22:35:51.328975)] To [Step2: Encode Image for Basic Resolution (2026-04-01 22:35:51.348309)]
[2026-04-01 22:35:51,348 - INFO] [Rank 0]
Time Elapsed: [0:00:00] From [Step2: Encode Image for Basic Resolution (2026-04-01 22:35:51.348309)] To [Step3: Basic Resolution Evaluation (2026-04-01 22:35:51.348309)]
Total number of layers: 40
Total number of groups: 40
0%| | 0/8 [00:00<?, ?it/s]
!!! Exception during processing !!! Promotion for Float8 Types is not supported, attempted to promote Float and Float8_e4m3fn
Traceback (most recent call last):
File "F:\ComfyUI\execution.py", line 534, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\execution.py", line 334, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 168, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\execution.py", line 308, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "F:\ComfyUI\execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "F:\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\comfy_api\latest_io.py", line 1789, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\MagiHuman_node.py", line 294, in execute
video_lat, audio_lat, params = infer_magihuman(model, seed, latents, steps, sr_steps=50, offload=offload, offload_block_num=offload_block_num)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\entry.py", line 50, in infer_magihuman
latent_video, latent_audio,params=pipeline.run_offline(
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\pipeline.py", line 134, in run_offline
latent_video, latent_audio, params = self.evaluator.evaluate(
^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\utils\_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\video_generate.py", line 334, in evaluate
br_latent_video, br_latent_audio = self.evaluate_with_latent(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\utils\_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\video_generate.py", line 506, in evaluate_with_latent
v_output = self.forward(eval_input_cond, use_sr_model=use_sr_model, gpu_manager=gpu_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\pipeline\video_generate.py", line 247, in forward
noise_pred = self.sr_model(*eval_input, gpu_manager=gpu_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\model\dit\dit_module.py", line 1041, in forward
x, rope = self.adapter(x, coords_mapping, video_mask, audio_mask, text_mask)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\model\dit\dit_module.py", line 859, in forward
rope = self.rope(coords_mapping)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\venv\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\inference\model\dit\dit_module.py", line 332, in forward
proj = coords_xyz.unsqueeze(-1) * scales.unsqueeze(-1) * self.bands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
RuntimeError: Promotion for Float8 Types is not supported, attempted to promote Float and Float8_e4m3fn
Prompt executed in 225.75 seconds
So, reading comments from others who tried to get it to run, they're saying we're missing too much information from the main repo to run this. It's a dead end. I gave up.