train_sd3_lora.log
nohup: ignoring input
Detected 4 GPUs
Per-GPU batch size: 4
Total effective batch size: 16
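The effective batch size follows the standard data-parallel formula; a minimal sketch of the arithmetic (how the script actually computes it is an assumption, but the product matches the value logged here, with the gradient-accumulation factor of 1 reported further down):

```bash
# effective batch = per-GPU batch size x GPU count x gradient-accumulation steps
echo $((4 * 4 * 1))   # prints 16, matching the value logged above
```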
===== SD3 LoRA multi-GPU training started =====
Model: /gemini/space/hsd/project/pretrained_model/huggingface/hub/models--stabilityai--stable-diffusion-3-medium-diffusers/snapshots/ea42f8cef0f178587cf766dc8129abd379c90671
Output directory: sd3-lora-finetuned-batch-8
Resolution: 512
Per-GPU batch size: 4
Gradient accumulation steps: 1
Total effective batch size: 16
Learning rate: 1e-5
Max training steps: 500000
LoRA rank: 32
GPUs in use: 0,1,2,3
Resume from checkpoint: latest
===========================================
Launching multi-GPU training with accelerate...
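For reference, a command of roughly this shape could produce the configuration banner above. This is a hypothetical reconstruction: the script name train_sd3_lora.py is inferred from the log's filename, and the flag names mirror diffusers' SD3 LoRA example scripts rather than anything shown in this log.

```bash
# Hypothetical launch command reconstructing the logged configuration.
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --multi_gpu --num_processes=4 \
  train_sd3_lora.py \
  --pretrained_model_name_or_path="/gemini/space/hsd/project/pretrained_model/huggingface/hub/models--stabilityai--stable-diffusion-3-medium-diffusers/snapshots/ea42f8cef0f178587cf766dc8129abd379c90671" \
  --output_dir="sd3-lora-finetuned-batch-8" \
  --resolution=512 \
  --train_batch_size=4 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-5 \
  --max_train_steps=500000 \
  --rank=32 \
  --resume_from_checkpoint="latest"
```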
/root/miniconda3/envs/SiT/lib/python3.10/site-packages/transformers/utils/hub.py:111: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
Terminated
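("Terminated" is the shell's message for a process killed by SIGTERM, e.g. an external kill or scheduler preemption, so the run did not reach its 500000-step target; the completion banner below appears to be printed unconditionally by the wrapper script once the launch command returns.)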
===========================================
Training complete!
Model saved to: sd3-lora-finetuned-batch-8
Logs saved to: sd3-lora-finetuned-batch-8/logs
Validation images saved to: sd3-lora-finetuned-batch-8/validation_images
===========================================