mxfp8 test
#35
by lanranjun - opened
T2V:
Model Name: ltx-2.3-22b-distilled_transformer_only_mxfp8-block32.safetensors
10 s clips, warm start, two-stage sampling (euler_ancestralcfg_pp, then euler_cfg_pp), MXFP8 distilled model at 8 steps:
- 1024 × 600: 169 s
- 1280 × 736: 169 s
- 1532 × 832: 253 s
- 1920 × 1088: 483 s
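To make the timings above easier to compare across resolutions, here is a small sketch that normalizes each run to seconds per megapixel of output frame. The resolutions and times are the measurements reported above; the assumption that total time scales roughly with pixel count is mine, not the poster's.

```python
# Normalize the reported T2V timings (10 s clips, 8 steps, MXFP8 model)
# to seconds per megapixel of frame area. Timings are from the post above.
timings = {
    (1024, 600): 169,   # seconds
    (1280, 736): 169,
    (1532, 832): 253,
    (1920, 1088): 483,
}

for (w, h), secs in sorted(timings.items()):
    mpix = w * h / 1e6  # frame area in megapixels
    print(f"{w}x{h}: {secs:4d} s  ->  {secs / mpix:6.1f} s per megapixel")
```

Interestingly, the normalized cost is not flat: the 1280 × 736 run is the most efficient per pixel, so attention cost likely grows faster than linearly with resolution.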
I also found that using a character LoRA I trained myself (on ltx-2.0) slows down sampling, increasing the overall run time by roughly 100 s.

With character LoRA:
- 1532 × 832: 350 s

Without character LoRA:
- 1532 × 832: 253 s
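For reference, the "about 100 seconds" claim can be checked directly from the two timings above; this quick calculation is mine, using only the reported numbers.

```python
# LoRA overhead at 1532x832, from the two timings reported above.
with_lora = 350      # seconds, character LoRA active
without_lora = 253   # seconds, base model only

extra = with_lora - without_lora
overhead = extra / without_lora
print(f"LoRA adds {extra} s ({overhead:.0%} slower)")  # prints "LoRA adds 97 s (38% slower)"
```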
Live-action scenes look slightly worse, but Pixar-style animation comes out well, with no visible difference from the full-precision model.
My PC:
RTX 5060 Ti (16 GB VRAM)
64 GB RAM