dx8152 committed on
Commit
15dad69
·
verified ·
1 Parent(s): 7411196

Upload 21 files

LTX2.3-1.0.4/API issues-API问题办法.txt ADDED
@@ -0,0 +1,50 @@
+ 1. 复制LTX桌面版的快捷方式到LTX_Shortcut
+
+ 2. 运行run.bat
+ ----
+ 1. Copy the LTX desktop shortcut into LTX_Shortcut
+
+ 2. Run run.bat
+ ----
+
+
+
+ 【问题描述 / Problem】
+ 系统强制使用 FAL API 生成图片,即使本地有 GPU 可用。
+ The system forces image generation through the FAL API even when a local GPU is available.
+
+ 【原因 / Cause】
+ LTX 强制要求 GPU 有 31GB VRAM 才会使用本地显卡,低于此值会强制走 API 模式。
+ LTX requires at least 31 GB of VRAM before it will use the local GPU; below that, it forces API mode.
+
+ ================================================================================
+ 【修复方法 / Fix Method】
+ ================================================================================
+
+ 运行: API issues.bat.bat (以管理员身份)
+ Run: API issues.bat.bat (as Administrator)
+
+ ================================================================================
+ ================================================================================
+
+ 【或者手动 / Or Manually】
+
+ 1. 修改 VRAM 阈值 / Modify the VRAM threshold
+ 文件路径 / File: C:\Program Files\LTX Desktop\resources\backend\runtime_config\runtime_policy.py
+ 第16行 / Line 16:
+ 原 / Original: return vram_gb < 31
+ 改为 / Change to: return vram_gb < 6
+
+ 2. 清空 API Key / Clear the API key
+ 文件路径 / File: C:\Users\<用户名>\AppData\Local\LTXDesktop\settings.json
+ 原 / Original: "fal_api_key": "xxxxx"
+ 改为 / Change to: "fal_api_key": ""
+
+ 【说明 / Notes】
+ - VRAM 阈值改为 6GB,意味着 6GB 及以上显存都会使用本地显卡
+ - A 6 GB threshold means any GPU with 6 GB or more of VRAM will run generation locally
+ - 清空 fal_api_key 避免系统误判为已配置 API
+ - Clearing fal_api_key keeps the system from treating the API as already configured
+ - 修改后重启程序即可生效
+ - Restart LTX Desktop after the changes for them to take effect
+ ================================================================================
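The two manual steps above can be sketched in Python. This is a hedged illustration, not LTX's actual code: the function name `should_force_api` is assumed (the note only shows that line 16 of `runtime_policy.py` reads `return vram_gb < 31`), and `clear_fal_api_key` simply rewrites the `fal_api_key` field of a `settings.json`-style file.

```python
import json

# Patched threshold; LTX ships with 31 (GB). 6 lets most modern GPUs run locally.
VRAM_THRESHOLD_GB = 6

def should_force_api(vram_gb):
    # Hypothetical name for the policy check at runtime_policy.py line 16;
    # the original body is `return vram_gb < 31`.
    return vram_gb < VRAM_THRESHOLD_GB

def clear_fal_api_key(settings_path):
    # Blank out fal_api_key so the system no longer treats the API as configured
    # (settings.json lives under %LOCALAPPDATA%\LTXDesktop on the user's machine).
    with open(settings_path, "r", encoding="utf-8") as f:
        settings = json.load(f)
    settings["fal_api_key"] = ""
    with open(settings_path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=2)
```

With the 6 GB threshold, a 24 GB card passes the check and generation stays local; restart LTX Desktop afterwards, as the note says.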
LTX2.3-1.0.4/LTX_Shortcut/LTX Desktop.lnk ADDED
Binary file (1.94 kB). View file
 
LTX2.3-1.0.4/UI/i18n.js ADDED
@@ -0,0 +1,438 @@
+ /**
+  * LTX UI i18n — similar in spirit to the root-level 「中英文.html」, but a standalone script that avoids broken DOM / bad paths.
+  * Only maintains the string map; dynamic nodes are refreshed by index.js after each language switch.
+  */
+ (function (global) {
+   const STORAGE_KEY = 'ltx_ui_lang';
+
+   const STR = {
+     zh: {
+       tabVideo: '视频生成',
+       tabBatch: '智能多帧',
+       tabUpscale: '视频增强',
+       tabImage: '图像生成',
+       promptLabel: '视觉描述词 (Prompt)',
+       promptPlaceholder: '在此输入视觉描述词 (Prompt)...',
+       promptPlaceholderUpscale: '输入画面增强引导词 (可选)...',
+       clearVram: '释放显存',
+       clearingVram: '清理中...',
+       settingsTitle: '系统高级设置',
+       langToggleAriaZh: '切换为 English',
+       langToggleAriaEn: 'Switch to 中文',
+       sysScanning: '正在扫描 GPU...',
+       sysBusy: '运算中...',
+       sysOnline: '在线 / 就绪',
+       sysStarting: '启动中...',
+       sysOffline: '未检测到后端 (Port 3000)',
+       advancedSettings: '高级设置',
+       deviceSelect: '工作设备选择',
+       gpuDetecting: '正在检测 GPU...',
+       outputPath: '输出与上传存储路径',
+       outputPathPh: '例如: D:\\LTX_outputs',
+       savePath: '保存路径',
+       outputPathHint:
+         '系统默认会在 C 盘保留输出文件。请输入新路径后点击保存按钮。',
+       lowVram: '低显存优化',
+       lowVramDesc:
+         '尽量关闭 fast 超分、在加载管线后尝试 CPU 分层卸载(仅当引擎提供 Diffusers 式 API 才可能生效)。每次生成结束会卸载管线。说明:整模型常驻 GPU 时占用仍可能接近满配(例如约 24GB),要明显降占用需更短时长/更低分辨率或 FP8 等小权重。',
+       vramLimitLabel: '可用最高显存上限 (GB, 0为全开优先显存)',
+       vramLimitPh: '例如: 12 (0表示无限制)',
+       saveLabel: '保存',
+       modelLoraSettings: '模型与LoRA设置',
+       modelFolder: '模型文件夹',
+       modelFolderPh: '例如: F:\\LTX2.3\\models',
+       loraFolder: 'LoRA文件夹',
+       loraFolderPh: '例如: F:\\LTX2.3\\loras',
+       loraFolderPath: 'LoRA 文件夹路径',
+       loraFolderPathPlaceholder: '留空使用默认路径',
+       saveScan: '保存并扫描',
+       loraPlacementHintWithDir:
+         '将 LoRA 文件放到默认模型目录: <code>{dir}</code>\\loras',
+       basicEngine: '基础画面 / Basic Engine Specs',
+       qualityLevel: '清晰度级别',
+       aspectRatio: '画幅比例',
+       ratio169: '16:9 电影宽幅',
+       ratio916: '9:16 移动竖屏',
+       resPreviewPrefix: '最终发送规格',
+       fpsLabel: '帧率 (FPS)',
+       durationLabel: '时长 (秒)',
+       cameraMotion: '镜头运动方式',
+       motionStatic: 'Static (静止机位)',
+       motionDollyIn: 'Dolly In (推近)',
+       motionDollyOut: 'Dolly Out (拉远)',
+       motionDollyLeft: 'Dolly Left (向左)',
+       motionDollyRight: 'Dolly Right (向右)',
+       motionJibUp: 'Jib Up (升臂)',
+       motionJibDown: 'Jib Down (降臂)',
+       motionFocus: 'Focus Shift (焦点)',
+       audioGen: '生成 AI 环境音 (Audio Gen)',
+       selectModel: '选择模型',
+       selectLora: '选择 LoRA',
+       defaultModel: '使用默认模型',
+       noLora: '不使用 LoRA',
+       loraStrength: 'LoRA 强度',
+       genSource: '生成媒介 / Generation Source',
+       startFrame: '起始帧 (首帧)',
+       endFrame: '结束帧 (尾帧)',
+       uploadStart: '上传首帧',
+       uploadEnd: '上传尾帧 (可选)',
+       refAudio: '参考音频 (A2V)',
+       uploadAudio: '点击上传音频',
+       sourceHint:
+         '💡 若仅上传首帧 = 图生视频/音视频;若同时上传首尾帧 = 首尾插帧。',
+       imgPreset: '预设分辨率 (Presets)',
+       imgOptSquare: '1:1 Square (1024x1024)',
+       imgOptLand: '16:9 Landscape (1280x720)',
+       imgOptPort: '9:16 Portrait (720x1280)',
+       imgOptCustom: 'Custom 自定义...',
+       width: '宽度',
+       height: '高度',
+       samplingSteps: '采样步数 (Steps)',
+       upscaleSource: '待超分视频 (Source)',
+       upscaleUpload: '拖入低分辨率视频片段',
+       targetRes: '目标分辨率',
+       upscale1080: '1080P Full HD (2x)',
+       upscale720: '720P HD',
+       smartMultiFrameGroup: '智能多帧',
+       workflowModeLabel: '工作流模式(点击切换)',
+       wfSingle: '单次多关键帧',
+       wfSegments: '分段拼接',
+       uploadImages: '上传图片',
+       uploadMulti1: '点击或拖入多张图片',
+       uploadMulti2: '支持一次选多张,可多次添加',
+       batchStripTitle: '已选图片 · 顺序 = 播放先后',
+       batchStripHint: '在缩略图上按住拖动排序;松手落入虚线框位置',
+       batchFfmpegHint:
+         '💡 <strong>分段模式</strong>:2 张 = 1 段;3 张 = 2 段再拼接。<strong>单次模式</strong>:几张图就几个 latent 锚点,一条视频出片。<br>多段需 <code style="font-size:9px;">ffmpeg</code>:装好后加 PATH,或设环境变量 <code style="font-size:9px;">LTX_FFMPEG_PATH</code>,或在 <code style="font-size:9px;">%LOCALAPPDATA%\\LTXDesktop\\ffmpeg_path.txt</code> 第一行写 ffmpeg.exe 完整路径。',
+       globalPromptLabel: '本页全局补充词(可选)',
+       globalPromptPh: '与顶部主 Prompt 叠加;单次模式与分段模式均可用',
+       bgmLabel: '成片配乐(可选,统一音轨)',
+       bgmUploadHint: '上传一条完整 BGM(生成完成后会替换整段成片的音轨)',
+       mainRender: '开始渲染',
+       waitingTask: '等待分配渲染任务...',
+       libHistory: '历史资产 / ASSETS',
+       libLog: '系统日志 / LOGS',
+       refresh: '刷新',
+       logReady: '> LTX-2 Studio Ready. Expecting commands...',
+       resizeHandleTitle: '拖动调整面板高度',
+       batchNeedTwo: '💡 请上传至少2张图片',
+       batchSegTitle: '视频片段设置(分段拼接)',
+       batchSegClip: '片段',
+       batchSegDuration: '时长',
+       batchSegSec: '秒',
+       batchSegPrompt: '片段提示词',
+       batchSegPromptPh: '此片段的提示词,如:跳舞、吃饭...',
+       batchKfPanelTitle: '单次多关键帧 · 时间轴',
+       batchTotalDur: '总时长',
+       batchTotalSec: '秒',
+       batchPanelHint:
+         '用「间隔」连接相邻关键帧:第 1 张固定在 0 s,最后一张在<strong>各间隔之和</strong>的终点。顶部总时长与每张的锚点时刻会随间隔即时刷新。因后端按<strong>整数秒</strong>建序列,实际请求里的整段时长为合计秒数<strong>向上取整</strong>(至少 2),略长于小数合计时属正常。镜头与 FPS 仍用左侧「视频生成」。',
+       batchKfTitle: '关键帧',
+       batchStrength: '引导强度',
+       batchGapTitle: '间隔',
+       batchSec: '秒',
+       batchAnchorStart: '片头',
+       batchAnchorEnd: '片尾',
+       batchThumbDrag: '按住拖动排序',
+       batchThumbRemove: '删除',
+       batchAddMore: '+ 继续添加',
+       batchGapInputTitle: '上一关键帧到下一关键帧的时长(秒);总时长 = 各间隔之和',
+       batchStrengthTitle: '与 Comfy guide strength 类似,中间帧可调低(如 0.2)减轻闪烁',
+       batchTotalPillTitle: '等于下方各「间隔」之和,无需单独填写',
+       defaultPath: '默认路径',
+       phase_loading_model: '加载权重',
+       phase_encoding_text: 'T5 编码',
+       phase_validating_request: '校验请求',
+       phase_uploading_audio: '上传音频',
+       phase_uploading_image: '上传图像',
+       phase_inference: 'AI 推理',
+       phase_downloading_output: '下载结果',
+       phase_complete: '完成',
+       gpuBusyPrefix: 'GPU 运算中',
+       progressStepUnit: '步',
+       loaderGpuAlloc: 'GPU 正在分配资源...',
+       warnGenerating: '⚠️ 当前正在生成中,请等待完成',
+       warnBatchPrompt: '⚠️ 智能多帧请至少填写:顶部主提示词、本页全局补充词或某一「片段提示词」',
+       warnNeedPrompt: '⚠️ 请输入提示词后再开始渲染',
+       warnVideoLong: '⚠️ 时长设定为 {n}s 极长,可能导致显存溢出或耗时较久。',
+       errUpscaleNoVideo: '请先上传待超分的视频',
+       errBatchMinImages: '请上传至少2张图片',
+       errSingleKfPrompt: '单次多关键帧请至少填写顶部主提示词或本页全局补充词',
+       loraNoneLabel: '无',
+       modelDefaultLabel: '默认',
+     },
+     en: {
+       tabVideo: 'Video',
+       tabBatch: 'Multi-frame',
+       tabUpscale: 'Upscale',
+       tabImage: 'Image',
+       promptLabel: 'Prompt',
+       promptPlaceholder: 'Describe the scene...',
+       promptPlaceholderUpscale: 'Optional guidance for enhancement...',
+       clearVram: 'Clear VRAM',
+       clearingVram: 'Clearing...',
+       settingsTitle: 'Advanced settings',
+       langToggleAriaZh: 'Switch to English',
+       langToggleAriaEn: 'Switch to Chinese',
+       sysScanning: 'Scanning GPU...',
+       sysBusy: 'Busy...',
+       sysOnline: 'Online / Ready',
+       sysStarting: 'Starting...',
+       sysOffline: 'Backend offline (port 3000)',
+       advancedSettings: 'Advanced',
+       deviceSelect: 'GPU device',
+       gpuDetecting: 'Detecting GPU...',
+       outputPath: 'Output & upload folder',
+       outputPathPh: 'e.g. D:\\LTX_outputs',
+       savePath: 'Save path',
+       outputPathHint:
+         'Outputs default to C: drive. Enter a folder and click Save.',
+       lowVram: 'Low-VRAM mode',
+       lowVramDesc:
+         'Tries to reduce VRAM (engine-dependent). Shorter duration / lower resolution helps more.',
+       vramLimitLabel: 'Max VRAM Limit (GB, 0 for unlimited)',
+       vramLimitPh: 'e.g. 12 (0 for unlimited)',
+       saveLabel: 'Save',
+       modelLoraSettings: 'Model & LoRA folders',
+       modelFolder: 'Models folder',
+       modelFolderPh: 'e.g. F:\\LTX2.3\\models',
+       loraFolder: 'LoRAs folder',
+       loraFolderPh: 'e.g. F:\\LTX2.3\\loras',
+       loraFolderPath: 'LoRA folder path',
+       loraFolderPathPlaceholder: 'Leave empty for default path',
+       saveScan: 'Save & scan',
+       loraHint: 'Put .safetensors / .ckpt LoRAs here, then refresh lists.',
+       basicEngine: 'Basic / Engine',
+       qualityLevel: 'Quality',
+       aspectRatio: 'Aspect ratio',
+       ratio169: '16:9 widescreen',
+       ratio916: '9:16 portrait',
+       resPreviewPrefix: 'Output',
+       fpsLabel: 'FPS',
+       durationLabel: 'Duration (s)',
+       cameraMotion: 'Camera motion',
+       motionStatic: 'Static',
+       motionDollyIn: 'Dolly in',
+       motionDollyOut: 'Dolly out',
+       motionDollyLeft: 'Dolly left',
+       motionDollyRight: 'Dolly right',
+       motionJibUp: 'Jib up',
+       motionJibDown: 'Jib down',
+       motionFocus: 'Focus shift',
+       audioGen: 'AI ambient audio',
+       selectModel: 'Model',
+       selectLora: 'LoRA',
+       defaultModel: 'Default model',
+       noLora: 'No LoRA',
+       loraStrength: 'LoRA strength',
+       genSource: 'Source media',
+       startFrame: 'Start frame',
+       endFrame: 'End frame (optional)',
+       uploadStart: 'Upload start',
+       uploadEnd: 'Upload end (opt.)',
+       refAudio: 'Reference audio (A2V)',
+       uploadAudio: 'Upload audio',
+       sourceHint:
+         '💡 Start only = I2V / A2V; start + end = interpolation.',
+       imgPreset: 'Resolution presets',
+       imgOptSquare: '1:1 (1024×1024)',
+       imgOptLand: '16:9 (1280×720)',
+       imgOptPort: '9:16 (720×1280)',
+       imgOptCustom: 'Custom...',
+       width: 'Width',
+       height: 'Height',
+       samplingSteps: 'Steps',
+       upscaleSource: 'Source video',
+       upscaleUpload: 'Drop low-res video',
+       targetRes: 'Target resolution',
+       upscale1080: '1080p Full HD (2×)',
+       upscale720: '720p HD',
+       smartMultiFrameGroup: 'Smart multi-frame',
+       workflowModeLabel: 'Workflow',
+       wfSingle: 'Single pass',
+       wfSegments: 'Segments',
+       uploadImages: 'Upload images',
+       uploadMulti1: 'Click or drop multiple images',
+       uploadMulti2: 'Multi-select OK; add more anytime.',
+       batchStripTitle: 'Order = playback',
+       batchStripHint: 'Drag thumbnails to reorder.',
+       batchFfmpegHint:
+         '💡 <strong>Segments</strong>: 2 images → 1 clip; 3 → 2 clips stitched. <strong>Single</strong>: N images → N latent anchors, one video.<br>Stitching needs <code style="font-size:9px;">ffmpeg</code> on PATH, or <code style="font-size:9px;">LTX_FFMPEG_PATH</code>, or <code style="font-size:9px;">%LOCALAPPDATA%\\LTXDesktop\\ffmpeg_path.txt</code> with full path to ffmpeg.exe.',
+       globalPromptLabel: 'Extra prompt (optional)',
+       globalPromptPh: 'Appended to main prompt for both modes.',
+       bgmLabel: 'Full-length BGM (optional)',
+       bgmUploadHint: 'Replaces final mix audio after generation.',
+       mainRender: 'Render',
+       waitingTask: 'Waiting for task...',
+       libHistory: 'Assets',
+       libLog: 'Logs',
+       refresh: 'Refresh',
+       logReady: '> LTX-2 Studio ready.',
+       resizeHandleTitle: 'Drag to resize panel',
+       batchNeedTwo: '💡 Upload at least 2 images',
+       batchSegTitle: 'Segment settings',
+       batchSegClip: 'Clip',
+       batchSegDuration: 'Duration',
+       batchSegSec: 's',
+       batchSegPrompt: 'Prompt',
+       batchSegPromptPh: 'e.g. dancing, walking...',
+       batchKfPanelTitle: 'Single pass · timeline',
+       batchTotalDur: 'Total',
+       batchTotalSec: 's',
+       batchPanelHint:
+         'Use gaps between keyframes: first at 0s, last at the sum of gaps. Totals update live. Backend uses whole seconds (ceil, min 2). Motion & FPS use the Video panel.',
+       batchKfTitle: 'Keyframe',
+       batchStrength: 'Strength',
+       batchGapTitle: 'Gap',
+       batchSec: 's',
+       batchAnchorStart: 'start',
+       batchAnchorEnd: 'end',
+       batchThumbDrag: 'Drag to reorder',
+       batchThumbRemove: 'Remove',
+       batchAddMore: '+ Add more',
+       batchGapInputTitle: 'Seconds between keyframes; total = sum of gaps',
+       batchStrengthTitle: 'Guide strength (lower on middle keys may reduce flicker)',
+       batchTotalPillTitle: 'Equals the sum of gaps below',
+       defaultPath: 'default',
+       phase_loading_model: 'Loading weights',
+       phase_encoding_text: 'T5 encode',
+       phase_validating_request: 'Validating',
+       phase_uploading_audio: 'Uploading audio',
+       phase_uploading_image: 'Uploading image',
+       phase_inference: 'Inference',
+       phase_downloading_output: 'Downloading',
+       phase_complete: 'Done',
+       gpuBusyPrefix: 'GPU',
+       progressStepUnit: 'steps',
+       loaderGpuAlloc: 'Allocating GPU...',
+       warnGenerating: '⚠️ Already generating, please wait.',
+       warnBatchPrompt: '⚠️ Enter main prompt, page extra prompt, or a segment prompt.',
+       warnNeedPrompt: '⚠️ Enter a prompt first.',
+       warnVideoLong: '⚠️ Duration {n}s is very long; may OOM or take a long time.',
+       errUpscaleNoVideo: 'Upload a video to upscale first.',
+       errBatchMinImages: 'Upload at least 2 images.',
+       // Key renamed from errSingleKfNeedPrompt to match the zh table, so lookups hit both languages.
+       errSingleKfPrompt: 'Enter main or page extra prompt for single-pass keyframes.',
+       loraNoneLabel: 'none',
+       modelDefaultLabel: 'default',
+       loraPlacementHintWithDir:
+         'Place LoRAs into the default models directory: <code>{dir}</code>\\loras',
+     },
+   };
+
+   function getLang() {
+     return localStorage.getItem(STORAGE_KEY) === 'en' ? 'en' : 'zh';
+   }
+
+   function setLang(lang) {
+     const L = lang === 'en' ? 'en' : 'zh';
+     localStorage.setItem(STORAGE_KEY, L);
+     document.documentElement.lang = L === 'en' ? 'en' : 'zh-CN';
+     try {
+       applyI18n();
+     } catch (err) {
+       console.error('[i18n] applyI18n failed:', err);
+     }
+     updateLangButton();
+     if (typeof global.onUiLanguageChanged === 'function') {
+       try {
+         global.onUiLanguageChanged();
+       } catch (e) {
+         console.warn('onUiLanguageChanged', e);
+       }
+     }
+   }
+
+   function t(key) {
+     const L = getLang();
+     const table = STR[L] || STR.zh;
+     if (Object.prototype.hasOwnProperty.call(table, key)) return table[key];
+     if (Object.prototype.hasOwnProperty.call(STR.zh, key)) return STR.zh[key];
+     return key;
+   }
+
+   function applyI18n(root) {
+     root = root || document;
+     root.querySelectorAll('[data-i18n]').forEach(function (el) {
+       var key = el.getAttribute('data-i18n');
+       if (!key) return;
+       el.textContent = t(key); // OPTION elements take plain text like any other node
+     });
+     root.querySelectorAll('[data-i18n-placeholder]').forEach(function (el) {
+       var key = el.getAttribute('data-i18n-placeholder');
+       if (key) el.placeholder = t(key);
+     });
+     root.querySelectorAll('[data-i18n-title]').forEach(function (el) {
+       var key = el.getAttribute('data-i18n-title');
+       if (key) el.title = t(key);
+     });
+     root.querySelectorAll('[data-i18n-html]').forEach(function (el) {
+       var key = el.getAttribute('data-i18n-html');
+       if (key) el.innerHTML = t(key);
+     });
+     root.querySelectorAll('[data-i18n-value]').forEach(function (el) {
+       var key = el.getAttribute('data-i18n-value');
+       if (key && (el.tagName === 'INPUT' || el.tagName === 'BUTTON')) {
+         el.value = t(key);
+       }
+     });
+   }
+
+   function updateLangButton() {
+     var btn = document.getElementById('lang-toggle-btn');
+     if (!btn) return;
+     btn.textContent = getLang() === 'zh' ? 'EN' : '中';
+     btn.setAttribute(
+       'aria-label',
+       getLang() === 'zh' ? t('langToggleAriaZh') : t('langToggleAriaEn')
+     );
+     btn.classList.toggle('active', getLang() === 'en');
+   }
+
+   function toggleUiLanguage() {
+     try {
+       setLang(getLang() === 'zh' ? 'en' : 'zh');
+     } catch (err) {
+       console.error('[i18n] toggleUiLanguage failed:', err);
+     }
+   }
+
+   /** Avoid CSP blocking the inline onclick; make sure the button can always fire */
+   function bindLangToggleButton() {
+     var btn = document.getElementById('lang-toggle-btn');
+     if (!btn || btn.dataset.i18nBound === '1') return;
+     btn.dataset.i18nBound = '1';
+     btn.removeAttribute('onclick');
+     btn.addEventListener('click', function (ev) {
+       ev.preventDefault();
+       toggleUiLanguage();
+     });
+   }
+
+   function boot() {
+     document.documentElement.lang = getLang() === 'en' ? 'en' : 'zh-CN';
+     try {
+       applyI18n();
+     } catch (err) {
+       console.error('[i18n] applyI18n failed:', err);
+     }
+     updateLangButton();
+     bindLangToggleButton();
+   }
+
+   global.getUiLang = getLang;
+   global.setUiLang = setLang;
+   global.t = t;
+   global.applyI18n = applyI18n;
+   global.toggleUiLanguage = toggleUiLanguage;
+   global.updateLangToggleButton = updateLangButton;
+
+   if (document.readyState === 'loading') {
+     document.addEventListener('DOMContentLoaded', boot);
+   } else {
+     boot();
+   }
+ })(typeof window !== 'undefined' ? window : global);
LTX2.3-1.0.4/UI/index.css ADDED
@@ -0,0 +1,775 @@
1
+ :root {
2
+ --accent: #2563EB; /* Refined blue – not too bright, not purple */
3
+ --accent-hover:#3B82F6;
4
+ --accent-dim: rgba(37,99,235,0.14);
5
+ --accent-ring: rgba(37,99,235,0.35);
6
+ --bg: #111113;
7
+ --panel: #18181B;
8
+ --panel-2: #1F1F23;
9
+ --item: rgba(255,255,255,0.035);
10
+ --border: rgba(255,255,255,0.08);
11
+ --border-2: rgba(255,255,255,0.05);
12
+ --text-dim: #71717A;
13
+ --text-sub: #A1A1AA;
14
+ --text: #FAFAFA;
15
+ }
16
+
17
+ * { box-sizing: border-box; -webkit-font-smoothing: antialiased; min-width: 0; }
18
+ body {
19
+ background: var(--bg); margin: 0; color: var(--text);
20
+ font-family: -apple-system, "SF Pro Display", "Segoe UI", sans-serif;
21
+ display: flex; height: 100vh; overflow: hidden;
22
+ font-size: 13px; line-height: 1.5;
23
+ }
24
+
25
+ .sidebar {
26
+ width: 460px; min-width: 460px;
27
+ background: var(--panel);
28
+ border-right: 1px solid var(--border);
29
+ display: flex; flex-direction: column; z-index: 20;
30
+ overflow-y: auto; overflow-x: hidden;
31
+ }
32
+
33
+ /* Scrollbar */
34
+ ::-webkit-scrollbar { width: 5px; height: 5px; }
35
+ ::-webkit-scrollbar-track { background: transparent; }
36
+ ::-webkit-scrollbar-thumb { background: rgba(255,255,255,0.08); border-radius: 10px; }
37
+ ::-webkit-scrollbar-thumb:hover { background: rgba(255,255,255,0.18); }
38
+
39
+ .sidebar-header { padding: 24px 24px 4px; }
40
+
41
+ .lang-toggle {
42
+ background: #333;
43
+ border: 1px solid #555;
44
+ color: var(--text-dim);
45
+ padding: 4px 10px;
46
+ border-radius: 6px;
47
+ font-size: 11px;
48
+ cursor: pointer;
49
+ transition: background 0.15s, color 0.15s, border-color 0.15s;
50
+ font-weight: 700;
51
+ min-width: 44px;
52
+ flex-shrink: 0;
53
+ }
54
+ .lang-toggle:hover {
55
+ background: var(--item);
56
+ color: var(--text);
57
+ border-color: var(--accent);
58
+ }
59
+ .lang-toggle.active {
60
+ background: #333;
61
+ color: var(--text);
62
+ border-color: #555;
63
+ }
64
+ .sidebar-section { padding: 8px 24px 18px; border-bottom: 1px solid var(--border); }
65
+
66
+ .setting-group {
67
+ background: rgba(255,255,255,0.025);
68
+ border: 1px solid var(--border-2);
69
+ border-radius: 10px;
70
+ padding: 14px;
71
+ margin-bottom: 12px;
72
+ }
73
+ .group-title {
74
+ font-size: 10px; color: var(--text-dim); font-weight: 700;
75
+ text-transform: uppercase; letter-spacing: 0.7px;
76
+ margin-bottom: 12px; padding-bottom: 5px;
77
+ border-bottom: 1px solid var(--border-2);
78
+ }
79
+
80
+ /* Mode Tabs */
81
+ .tabs {
82
+ display: flex; gap: 4px; margin-bottom: 14px;
83
+ background: rgba(255,255,255,0.04);
84
+ padding: 4px; border-radius: 10px;
85
+ border: 1px solid var(--border-2);
86
+ }
87
+ .tab {
88
+ flex: 1; padding: 9px 0; text-align: center; border-radius: 7px;
89
+ cursor: pointer; font-size: 12px; color: var(--text-dim);
90
+ transition: all 0.2s; font-weight: 600;
91
+ display: flex; align-items: center; justify-content: center;
92
+ }
93
+ .tab.active { background: var(--accent); color: #fff; box-shadow: 0 1px 6px rgba(10,132,255,0.45); }
94
+ .tab:hover:not(.active) { background: rgba(255,255,255,0.06); color: var(--text); }
95
+
96
+ .label-group { display: flex; justify-content: space-between; align-items: center; margin-bottom: 6px; }
97
+ label { display: block; font-size: 11px; color: var(--text-dim); font-weight: 600; text-transform: uppercase; letter-spacing: 0.5px; margin-bottom: 6px; }
98
+ .val-badge { font-size: 11px; color: var(--accent); font-family: "SF Mono", ui-monospace, monospace; font-weight: 600; }
99
+
100
+ input[type="text"], input[type="number"], select, textarea {
101
+ width: 100%; background: var(--panel-2);
102
+ border: 1px solid var(--border);
103
+ border-radius: 7px; color: var(--text);
104
+ padding: 8px 11px; font-size: 12.5px; outline: none; margin-bottom: 9px;
105
+ /* Only transition border/shadow – NOT background-image to prevent arrow flicker */
106
+ transition: border-color 0.15s, box-shadow 0.15s;
107
+ }
108
+ input:focus, select:focus, textarea:focus {
109
+ border-color: var(--accent);
110
+ box-shadow: 0 0 0 2px var(--accent-ring);
111
+ }
112
+ select {
113
+ -webkit-appearance: none; -moz-appearance: none; appearance: none;
114
+ /* Stable grey arrow – no background shorthand so it won't animate */
115
+ background-color: var(--panel-2);
116
+ background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 24 24' fill='none' stroke='%2371717A' stroke-width='2.5' stroke-linecap='round' stroke-linejoin='round'%3E%3Cpolyline points='6 9 12 15 18 9'/%3E%3C/svg%3E");
117
+ background-repeat: no-repeat;
118
+ background-position: right 10px center;
119
+ background-size: 12px;
120
+ padding-right: 28px;
121
+ cursor: pointer;
122
+ /* Explicitly do NOT transition background properties */
123
+ transition: border-color 0.15s, box-shadow 0.15s;
124
+ }
125
+ select:focus { background-color: var(--panel-2); }
126
+ select option { background: #27272A; color: var(--text); }
127
+ textarea { resize: vertical; min-height: 78px; font-family: inherit; }
128
+
129
+ .slider-container { display: flex; align-items: center; gap: 12px; margin-bottom: 14px; }
130
+ input[type="range"] { flex: 1; accent-color: var(--accent); height: 4px; cursor: pointer; border-radius: 2px; }
131
+
132
+ .upload-zone {
133
+ border: 1px dashed var(--border); border-radius: 10px;
134
+ padding: 18px 10px; text-align: center; cursor: pointer;
135
+ background: rgba(255,255,255,0.03); margin-bottom: 10px; position: relative;
136
+ transition: all 0.2s;
137
+ }
138
+ .upload-zone:hover, .upload-zone.dragover { background: var(--accent-dim); border-color: var(--accent); }
139
+ .upload-zone.has-images {
140
+ padding: 12px; background: rgba(255,255,255,0.025);
141
+ }
142
+ .upload-zone.has-images .upload-placeholder-mini {
143
+ display: flex; align-items: center; gap: 8px; justify-content: center;
144
+ color: var(--text-dim); font-size: 11px;
145
+ }
146
+ .upload-zone.has-images .upload-placeholder-mini span {
147
+ background: var(--item); padding: 6px 12px; border-radius: 6px;
148
+ }
149
+ #batch-images-placeholder { display: block; }
150
+ .upload-zone.has-images #batch-images-placeholder { display: none; }
151
+
152
+ /* 批量模式:上传区下方的横向缩略图条 */
153
+ .batch-thumb-strip-wrap {
154
+ margin-top: 10px;
155
+ margin-bottom: 4px;
156
+ }
157
+ .batch-thumb-strip-head {
158
+ display: flex;
159
+ flex-direction: column;
160
+ gap: 2px;
161
+ margin-bottom: 8px;
162
+ }
163
+ .batch-thumb-strip-title {
164
+ font-size: 11px;
165
+ font-weight: 700;
166
+ color: var(--text-sub);
167
+ }
168
+ .batch-thumb-strip-hint {
169
+ font-size: 10px;
170
+ color: var(--text-dim);
171
+ }
172
+ .batch-images-container {
173
+ display: flex;
174
+ flex-direction: row;
175
+ flex-wrap: nowrap;
176
+ gap: 10px;
177
+ overflow-x: auto;
178
+ overflow-y: visible;
179
+ padding: 6px 4px 14px;
180
+ margin: 0 -4px;
181
+ scrollbar-width: thin;
182
+ scrollbar-color: var(--border) transparent;
183
+ align-items: center;
184
+ }
185
+ .batch-images-container::-webkit-scrollbar { height: 6px; }
186
+ .batch-images-container::-webkit-scrollbar-thumb {
187
+ background: var(--border);
188
+ border-radius: 3px;
189
+ }
190
+ .batch-image-wrapper {
191
+ flex: 0 0 72px;
192
+ width: 72px;
193
+ height: 72px;
194
+ position: relative;
195
+ border-radius: 10px;
196
+ overflow: hidden;
197
+ background: var(--item);
198
+ border: 1px solid var(--border);
199
+ cursor: grab;
200
+ touch-action: none;
201
+ user-select: none;
202
+ -webkit-user-select: none;
203
+ transition:
204
+ flex-basis 0.38s cubic-bezier(0.22, 1, 0.36, 1),
205
+ width 0.38s cubic-bezier(0.22, 1, 0.36, 1),
206
+ min-width 0.38s cubic-bezier(0.22, 1, 0.36, 1),
207
+ margin 0.38s cubic-bezier(0.22, 1, 0.36, 1),
208
+ opacity 0.25s ease,
209
+ border-color 0.2s ease,
210
+ box-shadow 0.2s ease,
211
+ transform 0.28s cubic-bezier(0.22, 1, 0.36, 1);
212
+ }
213
+ .batch-image-wrapper:active { cursor: grabbing; }
214
+ .batch-image-wrapper.batch-thumb--source {
215
+ flex: 0 0 0;
216
+ width: 0;
217
+ min-width: 0;
218
+ height: 72px;
219
+ margin: 0;
220
+ padding: 0;
221
+ border: none;
222
+ overflow: hidden;
223
+ opacity: 0;
224
+ background: transparent;
225
+ box-shadow: none;
226
+ pointer-events: none;
227
+ /* 收起必须瞬时:若与占位框同时用 0.38s 过渡,右侧缩略图会与「突然出现」的槽位不同步而闪一下 */
228
+ transition: none !important;
229
+ }
230
+ /* 按下瞬间:冻结其它卡片与槽位动画,避免「槽位插入 + 邻居过渡」两帧打架 */
231
+ .batch-images-container.is-batch-settling .batch-image-wrapper:not(.batch-thumb--source) {
232
+ transition: none !important;
233
+ }
234
+ .batch-images-container.is-batch-settling .batch-thumb-drop-slot {
235
+ animation: none;
236
+ opacity: 1;
237
+ }
238
+ /* 拖动时跟手的浮动缩略图(避免原槽位透明后光标下像「黑块」) */
239
+ .batch-thumb-floating-ghost {
240
+ position: fixed;
241
+ left: 0;
242
+ top: 0;
243
+ z-index: 99999;
244
+ width: 76px;
245
+ height: 76px;
246
+ border-radius: 12px;
247
+ overflow: hidden;
248
+ pointer-events: none;
249
+ will-change: transform;
250
+ box-shadow:
251
+ 0 20px 50px rgba(0, 0, 0, 0.45),
252
+ 0 10px 28px rgba(0, 0, 0, 0.28),
253
+ 0 0 0 1px rgba(255, 255, 255, 0.18);
254
+ transform: translate3d(0, 0, 0) scale(1.06) rotate(-1deg);
255
+ }
256
+ .batch-thumb-floating-ghost img {
257
+ width: 100%;
258
+ height: 100%;
259
+ object-fit: cover;
260
+ display: block;
261
+ pointer-events: none;
262
+ }
263
+ .batch-thumb-drop-slot {
264
+ flex: 0 0 72px;
265
+ width: 72px;
266
+ height: 72px;
267
+ box-sizing: border-box;
268
+ border-radius: 12px;
269
+ border: 2px dashed rgba(255, 255, 255, 0.22);
270
+ background: linear-gradient(145deg, rgba(255, 255, 255, 0.09), rgba(255, 255, 255, 0.03));
271
+ pointer-events: none;
272
+ transition: border-color 0.35s ease, box-shadow 0.35s ease, opacity 0.35s ease;
273
+ animation: batch-slot-breathe 2.4s ease-in-out infinite;
274
+ box-shadow: inset 0 0 0 1px rgba(255, 255, 255, 0.06);
275
+ }
276
+ @keyframes batch-slot-breathe {
277
+ 0%, 100% { opacity: 0.88; }
278
+ 50% { opacity: 1; }
279
+ }
280
+ .batch-image-wrapper .batch-thumb-img-wrap {
281
+ width: 100%;
282
+ height: 100%;
283
+ border-radius: 9px;
284
+ overflow: hidden;
285
+ /* 必须让事件落到外层 .batch-image-wrapper,否则 HTML5 drag 无法从 draggable 父级启动 */
286
+ pointer-events: none;
287
+ }
288
+ .batch-image-wrapper .batch-thumb-img {
289
+ width: 100%;
290
+ height: 100%;
291
+ object-fit: cover;
292
+ display: block;
293
+ pointer-events: none;
294
+ user-select: none;
295
+ -webkit-user-drag: none;
296
+ }
.batch-thumb-remove {
  position: absolute;
  top: 3px;
  right: 3px;
  z-index: 5;
  box-sizing: border-box;
  min-width: 22px;
  height: 22px;
  padding: 0 5px;
  margin: 0;
  border: 1px solid rgba(255, 255, 255, 0.12);
  border-radius: 6px;
  background: rgba(0, 0, 0, 0.5);
  font-family: inherit;
  font-size: 14px;
  font-weight: 400;
  line-height: 1;
  color: rgba(255, 255, 255, 0.9);
  opacity: 0.72;
  cursor: pointer;
  display: flex;
  align-items: center;
  justify-content: center;
  transition: background 0.12s, opacity 0.12s, border-color 0.12s;
  pointer-events: auto;
}
.batch-image-wrapper:hover .batch-thumb-remove {
  opacity: 1;
  background: rgba(0, 0, 0, 0.68);
  border-color: rgba(255, 255, 255, 0.2);
}
.batch-thumb-remove:hover {
  background: rgba(80, 20, 20, 0.75) !important;
  border-color: rgba(255, 180, 180, 0.35);
  color: #fff;
}
.batch-thumb-remove:focus-visible {
  opacity: 1;
  outline: 2px solid var(--accent-dim, rgba(120, 160, 255, 0.6));
  outline-offset: 1px;
}
.upload-icon { font-size: 18px; margin-bottom: 6px; opacity: 0.45; }
.upload-text { font-size: 11px; color: var(--text); }
.upload-hint { font-size: 10px; color: var(--text-dim); margin-top: 3px; }
.preview-thumb { width: 100%; height: auto; max-height: 100px; object-fit: contain; border-radius: 8px; display: none; margin-top: 10px; }
.clear-img-overlay {
  position: absolute; top: 8px; right: 8px; background: rgba(255,59,48,0.85); color: white;
  width: 20px; height: 20px; border-radius: 10px; display: none; align-items: center; justify-content: center;
  font-size: 11px; cursor: pointer; z-index: 5;
}

.btn-outline {
  background: var(--panel-2);
  border: 1px solid var(--border);
  color: var(--text-sub); padding: 5px 12px; border-radius: 7px;
  font-size: 11.5px; font-weight: 600; cursor: pointer;
  transition: background 0.15s, border-color 0.15s, color 0.15s;
  display: inline-flex; align-items: center; justify-content: center; gap: 5px;
  white-space: nowrap;
}
.btn-outline:hover:not(:disabled) { background: rgba(255,255,255,0.08); color: var(--text); border-color: rgba(255,255,255,0.18); }
.btn-outline:active { opacity: 0.7; }
.btn-outline:disabled { opacity: 0.3; cursor: not-allowed; }

.btn-icon {
  padding: 5px; background: transparent; border: none; color: var(--text-dim);
  border-radius: 6px; cursor: pointer; display: flex; align-items: center; justify-content: center;
  transition: color 0.15s, background 0.15s;
}
.btn-icon:hover { color: var(--text-sub); background: rgba(255,255,255,0.07); }

.btn-primary {
  width: 100%; padding: 13px;
  background: var(--accent); border: none;
  border-radius: 9px; color: #fff; font-weight: 700; font-size: 13.5px;
  letter-spacing: 0.2px; cursor: pointer; margin-top: 14px;
  transition: background 0.15s;
}
.btn-primary:hover:not(:disabled) { background: var(--accent-hover); }
.btn-primary:active { opacity: 0.82; }
.btn-primary:disabled { background: rgba(255,255,255,0.08); color: var(--text-dim); cursor: not-allowed; }

.btn-danger {
  width: 100%; padding: 12px; background: #DC2626; border: none;
  border-radius: 9px; color: #fff; font-weight: 700; font-size: 13.5px;
  cursor: pointer; margin-top: 8px; display: none; transition: background 0.15s;
}
.btn-danger:hover { background: #EF4444; }

/* Workspace */
.workspace { flex: 1; display: flex; flex-direction: column; background: #0A0A0A; position: relative; overflow: hidden; }
.viewer { flex: 2; display: flex; align-items: center; justify-content: center; padding: 16px; background: #0A0A0A; position: relative; min-height: 40vh; }
.monitor {
  width: 100%; height: 100%; max-width: 1650px; border-radius: 10px; border: 1px solid var(--border);
  overflow: hidden; position: relative; background: #070707;
  display: flex; align-items: center; justify-content: center;
  background-image: radial-gradient(rgba(255,255,255,0.02) 1px, transparent 1px);
  background-size: 18px 18px;
}
.monitor img, .monitor video {
  width: auto; height: auto; max-width: 100%; max-height: 100%;
  object-fit: contain; display: none; z-index: 2; border-radius: 3px;
}

.progress-container { position: absolute; bottom: 0; left: 0; width: 100%; height: 2px; background: var(--border-2); z-index: 10; }
#progress-fill { width: 0%; height: 100%; background: var(--accent); transition: width 0.5s; }
#loading-txt { font-size: 12px; color: var(--text-sub); font-weight: 600; z-index: 5; position: absolute; display: none; }

.spinner {
  width: 12px; height: 12px;
  border: 2px solid rgba(255,255,255,0.2);
  border-top-color: currentColor;
  border-radius: 50%;
  animation: spin 1s linear infinite;
}
@keyframes spin { to { transform: rotate(360deg); } }

.loading-card {
  display: flex; align-items: center; justify-content: center;
  flex-direction: column; gap: 6px; color: var(--text-dim); font-size: 10px;
  background: rgba(37,99,235,0.07) !important;
  border-color: rgba(37,99,235,0.3) !important;
}
.loading-card .spinner { width: 28px; height: 28px; border-width: 3px; color: var(--accent); }
.loading-card:hover { background: rgba(37,99,235,0.14) !important; border-color: var(--accent) !important; }

.library { flex: 1.5; border-top: 1px solid var(--border); padding: 14px 20px; display: flex; flex-direction: column; background: #0F0F11; overflow-y: hidden; }
#log-container { flex: 1; overflow-y: auto; padding-right: 4px; }
#log { font-family: ui-monospace, "SF Mono", monospace; font-size: 10.5px; color: var(--text-dim); line-height: 1.7; }

/* History wrapper: scrollable area for thumbnails only */
#history-wrapper {
  flex: 1;
  overflow-y: auto;
  min-height: 110px; /* always show at least one row */
  padding-right: 4px;
}
#history-container {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
  justify-content: start;
  gap: 10px; align-content: flex-start;
  padding-bottom: 4px;
}
/* Pagination row: hidden, using infinite scroll instead */
#pagination-bar {
  display: none;
}

.history-card {
  width: 100%; max-width: 200px; aspect-ratio: 16 / 9;
  background: #1A1A1E; border-radius: 7px;
  overflow: hidden; border: 1px solid var(--border);
  cursor: pointer; position: relative; transition: border-color 0.15s, transform 0.15s;
}
.history-card:hover { border-color: var(--accent); transform: translateY(-1px); }
.history-card img, .history-card video {
  width: 100%; height: 100%; object-fit: cover;
  background: #1A1A1E;
}
/* Match the card background until decode/load completes so videos don't flash as black blocks; fade in once ready */
.history-card .history-thumb-media {
  opacity: 0;
  transition: opacity 0.28s ease;
}
.history-card .history-thumb-media.history-thumb-ready {
  opacity: 1;
}
.history-type-badge {
  position: absolute; top: 5px; left: 5px; font-size: 8px; padding: 1px 5px; border-radius: 3px;
  background: rgba(0,0,0,0.8); color: var(--text-sub); border: 1px solid rgba(255,255,255,0.06);
  z-index: 2; font-weight: 700; letter-spacing: 0.4px;
}
.history-delete-btn {
  position: absolute; top: 5px; right: 5px; width: 20px; height: 20px;
  border-radius: 50%; border: none; background: rgba(255,50,50,0.8); color: #fff;
  font-size: 10px; cursor: pointer; z-index: 3; display: flex; align-items: center; justify-content: center;
  opacity: 0; transition: opacity 0.2s;
}
.history-card:hover .history-delete-btn { opacity: 1; }
.history-delete-btn:hover { background: rgba(255,0,0,0.9); }

.vram-bar { width: 160px; height: 5px; background: rgba(255,255,255,0.08); border-radius: 999px; overflow: hidden; display: inline-block; vertical-align: middle; }
.vram-used { height: 100%; background: var(--accent); width: 0%; transition: width 0.5s; }

/* Smart multi-frame: card-style single-select for the workflow mode */
.smart-param-mode-label {
  font-size: 10px;
  color: var(--text-dim);
  font-weight: 700;
  margin-bottom: 8px;
  letter-spacing: 0.04em;
  text-transform: uppercase;
}
.smart-param-modes {
  display: flex;
  flex-direction: row;
  align-items: stretch;
  gap: 0;
  padding: 3px;
  margin-bottom: 12px;
  background: var(--panel-2);
  border-radius: 8px;
  border: 1px solid var(--border);
}
.smart-param-mode-opt {
  display: flex;
  align-items: center;
  justify-content: center;
  flex: 1;
  min-width: 0;
  gap: 0;
  margin: 0;
  padding: 6px 8px;
  border-radius: 6px;
  border: none;
  background: transparent;
  cursor: pointer;
  transition: background 0.15s, color 0.15s;
  position: relative;
}
.smart-param-mode-opt:hover:not(:has(input:checked)) {
  background: rgba(255, 255, 255, 0.05);
}
.smart-param-mode-opt input[type="radio"] {
  position: absolute;
  opacity: 0;
  width: 0;
  height: 0;
  margin: 0;
}
.smart-param-mode-opt:has(input:checked) {
  background: var(--accent);
  box-shadow: none;
}
.smart-param-mode-opt:has(input:checked) .smart-param-mode-title {
  color: #fff;
}
.smart-param-mode-title {
  font-size: 11px;
  font-weight: 600;
  color: var(--text-sub);
  text-align: center;
  line-height: 1.25;
  flex: none;
  min-width: 0;
}
/* Single-pass multi-keyframe: timeline panel */
.batch-kf-panel {
  background: var(--item);
  border-radius: 10px;
  padding: 12px 14px;
  margin-bottom: 10px;
  border: 1px solid var(--border);
}
.batch-kf-panel-hd {
  display: flex;
  flex-wrap: wrap;
  align-items: center;
  justify-content: space-between;
  gap: 10px;
  margin-bottom: 8px;
}
.batch-kf-panel-title {
  font-size: 12px;
  font-weight: 700;
  color: var(--text);
}
.batch-kf-total-pill {
  font-size: 11px;
  color: var(--text-sub);
  background: var(--panel-2);
  border: 1px solid var(--border);
  border-radius: 999px;
  padding: 6px 12px;
  white-space: nowrap;
}
.batch-kf-total-pill strong {
  color: var(--accent);
  font-weight: 800;
  font-variant-numeric: tabular-nums;
  margin: 0 2px;
}
.batch-kf-total-unit {
  font-size: 10px;
  color: var(--text-dim);
}
.batch-kf-panel-hint {
  font-size: 10px;
  color: var(--text-dim);
  line-height: 1.5;
  margin: 0 0 12px;
}
.batch-kf-timeline-col {
  display: flex;
  flex-direction: column;
  gap: 0;
}
.batch-kf-kcard {
  border-radius: 10px;
  border: 1px solid var(--border);
  background: rgba(255, 255, 255, 0.03);
  padding: 10px 12px;
}
.batch-kf-kcard-head {
  display: flex;
  align-items: center;
  gap: 12px;
  margin-bottom: 10px;
}
.batch-kf-kthumb {
  width: 48px;
  height: 48px;
  border-radius: 8px;
  object-fit: cover;
  flex-shrink: 0;
  border: 1px solid var(--border);
}
.batch-kf-kcard-titles {
  display: flex;
  flex-direction: column;
  gap: 4px;
  min-width: 0;
}
.batch-kf-ktitle {
  font-size: 12px;
  font-weight: 700;
  color: var(--text);
}
.batch-kf-anchor {
  font-size: 11px;
  color: var(--accent);
  font-variant-numeric: tabular-nums;
  font-weight: 600;
}
.batch-kf-kcard-ctrl {
  display: flex;
  flex-wrap: wrap;
  align-items: center;
  gap: 12px;
}
.batch-kf-klabel {
  font-size: 10px;
  color: var(--text-dim);
  display: flex;
  align-items: center;
  gap: 8px;
}
.batch-kf-klabel input[type="number"] {
  width: 72px;
  padding: 6px 8px;
  font-size: 12px;
  border-radius: 6px;
  border: 1px solid var(--border);
  background: var(--panel);
  color: var(--text);
}
/* Between keyframes: thin timeline rail + compact single-line gap input */
.batch-kf-gap {
  display: flex;
  align-items: stretch;
  gap: 8px;
  padding: 0 0 6px;
  margin: 0 0 0 10px;
}
.batch-kf-gap-rail {
  width: 2px;
  flex-shrink: 0;
  border-radius: 2px;
  background: linear-gradient(
    180deg,
    rgba(255, 255, 255, 0.06),
    var(--accent-dim),
    rgba(255, 255, 255, 0.04)
  );
  min-height: 22px;
  align-self: stretch;
}
.batch-kf-gap-inner {
  display: flex;
  align-items: center;
  gap: 8px;
  flex: 1;
  min-width: 0;
  padding: 2px 0 4px;
}
.batch-kf-gap-ix {
  font-size: 10px;
  font-weight: 600;
  color: var(--text-dim);
  font-variant-numeric: tabular-nums;
  letter-spacing: -0.02em;
  flex-shrink: 0;
}
.batch-kf-seg-field {
  display: inline-flex;
  align-items: center;
  gap: 3px;
  margin: 0;
  cursor: text;
}
.batch-kf-seg-input {
  width: 46px;
  min-width: 0;
  padding: 2px 5px;
  font-size: 11px;
  font-weight: 600;
  line-height: 1.3;
  border-radius: 4px;
  border: 1px solid var(--border);
  background: rgba(0, 0, 0, 0.2);
  color: var(--text);
  font-variant-numeric: tabular-nums;
}
.batch-kf-seg-input:hover {
  border-color: rgba(255, 255, 255, 0.12);
}
.batch-kf-seg-input:focus {
  outline: none;
  border-color: var(--accent);
  box-shadow: 0 0 0 1px var(--accent-ring);
}
.batch-kf-gap-unit {
  font-size: 10px;
  color: var(--text-dim);
  font-weight: 500;
  flex-shrink: 0;
}

.sub-mode-toggle { display: flex; background: var(--panel-2); border-radius: 7px; padding: 3px; border: 1px solid var(--border); }
.sub-mode-btn { flex: 1; padding: 6px 0; border-radius: 5px; border: none; background: transparent; font-size: 11.5px; color: var(--text-dim); font-weight: 600; cursor: pointer; transition: background 0.15s, color 0.15s; }
.sub-mode-btn.active { background: var(--accent); color: #fff; }
.sub-mode-btn:hover:not(.active) { background: rgba(255,255,255,0.05); color: var(--text-sub); }

.vid-section { display: none; margin-top: 12px; }
.vid-section.active-section { display: block; animation: fadeIn 0.25s ease; }
@keyframes fadeIn { from { opacity: 0; transform: translateY(4px); } to { opacity: 1; transform: translateY(0); } }

/* Status indicator */
@keyframes breathe-orange {
  0%,100% { box-shadow: 0 0 4px #FF9F0A; opacity: 0.7; }
  50% { box-shadow: 0 0 10px #FF9F0A; opacity: 1; }
}
.indicator-busy { background: #FF9F0A !important; animation: breathe-orange 1.6s infinite ease-in-out !important; box-shadow: none !important; transition: all 0.3s; }
.indicator-ready { background: #30D158 !important; box-shadow: 0 0 8px rgba(48,209,88,0.6) !important; animation: none !important; transition: all 0.3s; }
.indicator-offline { background: #636366 !important; box-shadow: none !important; animation: none !important; transition: all 0.3s; }

.res-preview-tag { font-size: 11px; color: var(--accent); margin-bottom: 10px; font-family: ui-monospace, monospace; }
.top-status { display: flex; justify-content: space-between; font-size: 12px; color: var(--text-dim); margin-bottom: 8px; align-items: center; }
.checkbox-container { display: flex; align-items: center; gap: 8px; cursor: pointer; background: rgba(255,255,255,0.02); padding: 10px; border-radius: 8px; border: 1px solid var(--border-2); }
.checkbox-container input { width: 15px; height: 15px; accent-color: var(--accent); cursor: pointer; margin: 0; }
.checkbox-container label { margin-bottom: 0; cursor: pointer; text-transform: none; color: var(--text); }
.flex-row { display: flex; gap: 10px; }
.flex-1 { flex: 1; min-width: 0; }

@media (max-width: 1024px) {
  body { flex-direction: column; overflow-y: auto; }
  .sidebar { width: 100%; min-width: 100%; border-right: none; border-bottom: 1px solid var(--border); height: auto; overflow: visible; }
  .workspace { height: auto; min-height: 100vh; overflow: visible; }
}
:root {
  --plyr-color-main: #3F51B5;
  --plyr-video-control-background-hover: rgba(255,255,255,0.1);
  --plyr-control-radius: 6px;
  --plyr-player-width: 100%;
}
.plyr {
  border-radius: 8px;
  overflow: hidden;
  width: 100%;
  height: 100%;
}
.plyr--video .plyr__controls {
  background: linear-gradient(rgba(0,0,0,0), rgba(0,0,0,0.8));
  padding: 20px 15px 15px 15px;
}

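The `.batch-kf-total-pill` above displays a running total of the gap durations typed into the `.batch-kf-seg-input` fields of the keyframe timeline. A minimal sketch of how that total might be computed; the helper name and the clamping of invalid entries are assumptions for illustration, not code from this repository:

```javascript
// Hypothetical helper (not from this repo): sum the per-gap second values
// read from the .batch-kf-seg-input fields. Non-numeric or non-positive
// entries contribute 0, so a half-typed value never corrupts the total.
function sumGapSeconds(gaps) {
  return gaps.reduce((total, raw) => {
    const n = Number(raw);
    return total + (Number.isFinite(n) && n > 0 ? n : 0);
  }, 0);
}
```

In the UI this would typically run on each `input` event and write the result into the `<strong>` inside `.batch-kf-total-pill`.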
LTX2.3-1.0.4/UI/index.html ADDED
<!DOCTYPE html>
<html lang="zh-CN">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>LTX-2 | Multi-GPU Cinematic Studio</title>
  <link rel="stylesheet" href="index.css">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/plyr/3.7.8/plyr.css" />
</head>
<body>

<aside class="sidebar">
  <div class="sidebar-header">
    <div style="display: flex; align-items: center; justify-content: space-between; margin-bottom: 12px;">
      <div style="display: flex; align-items: center; gap: 10px;">
        <div id="sys-indicator" class="indicator-ready" style="width: 12px; height: 12px; border-radius: 50%;"></div>
        <span style="font-weight: 800; font-size: 18px;">LTX-2 STUDIO</span>
      </div>
      <div style="display: flex; gap: 8px; align-items: center;">
        <button id="clearGpuBtn" onclick="clearGpu()" class="btn-outline" data-i18n="clearVram">释放显存</button>
        <button type="button" id="lang-toggle-btn" class="lang-toggle">EN</button>
      </div>
    </div>

    <div class="top-status" style="margin-bottom: 5px;">
      <div style="display: flex; align-items: center; gap: 8px;">
        <span id="sys-status" style="font-weight:bold; color: var(--text-dim); font-size: 12px;" data-i18n="sysScanning">正在扫描 GPU...</span>
      </div>

      <button type="button" onclick="const el = document.getElementById('sys-settings'); el.style.display = el.style.display === 'none' ? 'block' : 'none';" class="btn-icon" data-i18n-title="settingsTitle" title="系统高级设置">
        <svg width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><circle cx="12" cy="12" r="3"></circle><path d="M19.4 15a1.65 1.65 0 0 0 .33 1.82l.06.06a2 2 0 0 1 0 2.83 2 2 0 0 1-2.83 0l-.06-.06a1.65 1.65 0 0 0-1.82-.33 1.65 1.65 0 0 0-1 1.51V21a2 2 0 0 1-2 2 2 2 0 0 1-2-2v-.09A1.65 1.65 0 0 0 9 19.4a1.65 1.65 0 0 0-1.82.33l-.06.06a2 2 0 0 1-2.83 0 2 2 0 0 1 0-2.83l.06-.06a1.65 1.65 0 0 0 .33-1.82 1.65 1.65 0 0 0-1.51-1H3a2 2 0 0 1-2-2 2 2 0 0 1 2-2h.09A1.65 1.65 0 0 0 4.6 9a1.65 1.65 0 0 0-.33-1.82l-.06-.06a2 2 0 0 1 0-2.83 2 2 0 0 1 2.83 0l.06.06a1.65 1.65 0 0 0 1.82.33H9a1.65 1.65 0 0 0 1-1.51V3a2 2 0 0 1 2-2 2 2 0 0 1 2 2v.09a1.65 1.65 0 0 0 1 1.51 1.65 1.65 0 0 0 1.82-.33l.06-.06a2 2 0 0 1 2.83 0 2 2 0 0 1 0 2.83l-.06.06a1.65 1.65 0 0 0-.33 1.82V9a1.65 1.65 0 0 0 1.51 1H21a2 2 0 0 1 2 2 2 2 0 0 1-2 2h-.09a1.65 1.65 0 0 0-1.51 1z"></path></svg>
      </button>

    </div>

    <div style="font-size: 11px; color: var(--text-dim); margin-bottom: 20px; display: flex; align-items: center; width: 100%;">
      <div class="vram-bar" style="width: 120px; min-width: 120px; margin-top: 0; margin-right: 12px;"><div class="vram-used" id="vram-fill"></div></div>
      <span id="vram-text" style="font-variant-numeric: tabular-nums; flex-shrink: 0; text-align: right;">0/32 GB</span>
      <span id="gpu-name" style="display: none;"></span> <!-- Hidden globally to avoid duplicate -->
    </div>

    <div id="sys-settings" style="display: none; padding: 14px; background: rgba(0,0,0,0.4) !important; border-radius: 12px; border: 1px solid rgba(255,255,255,0.1); margin-bottom: 15px; box-shadow: 0 4px 15px rgba(0,0,0,0.5); backdrop-filter: blur(10px);">
      <div style="font-size: 13px; font-weight: bold; margin-bottom: 12px; color: #fff;" data-i18n="advancedSettings">高级设置</div>

      <label style="font-size: 11px; margin-bottom: 6px;" data-i18n="deviceSelect">工作设备选择</label>
      <select id="gpu-selector" onchange="switchGpu(this.value)" style="margin-bottom: 12px; font-size: 11px; padding: 6px;">
        <option value="" data-i18n="gpuDetecting">正在检测 GPU...</option>
      </select>

      <label style="font-size: 11px; margin-bottom: 6px; margin-top: 12px;" data-i18n="vramLimitLabel">可用最高显存上限 (GB, 0为全开优先显存)</label>
      <div style="display: flex; gap: 6px; margin-bottom: 9px; align-items: stretch;">
        <input type="number" id="vram-limit-input" data-i18n-placeholder="vramLimitPh" placeholder="例如: 12 (0表示无限制)" style="flex: 1; height: 28px; box-sizing: border-box; font-size: 12px; padding: 0 10px;">
        <button onclick="saveVramLimit()" style="font-size: 12px; padding: 0 10px; height: 28px; box-sizing: border-box; white-space: nowrap; background: #333; border: 1px solid #555; color: #fff; border-radius: 7px; cursor: pointer;" data-i18n="saveLabel">保存</button>
      </div>
      <div id="vram-limit-status" style="font-size: 10px; color: var(--text-dim);"></div>

      <label style="font-size: 11px; margin-bottom: 6px; margin-top: 12px;" data-i18n="loraFolderPath">LoRA 文件夹路径</label>
      <div style="display: flex; gap: 6px; margin-bottom: 9px; align-items: stretch;">
        <input type="text" id="lora-dir-input" placeholder="留空使用默认路径" data-i18n-placeholder="loraFolderPathPlaceholder" style="flex: 1; height: 28px; box-sizing: border-box; font-size: 12px; padding: 0 10px;">
        <button onclick="saveLoraDir()" style="font-size: 12px; padding: 0 10px; height: 28px; box-sizing: border-box; white-space: nowrap; background: #333; border: 1px solid #555; color: #fff; border-radius: 7px; cursor: pointer;" data-i18n="saveLabel">保存</button>
      </div>
      <div id="lora-dir-status" style="font-size: 10px; color: var(--text-dim);"></div>
    </div>
  </div>

  <div class="sidebar-section">
    <div class="tabs">
      <div id="tab-video" class="tab" onclick="switchMode('video')">
        <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" style="margin-right: 6px;"><rect x="2" y="2" width="20" height="20" rx="2.18" ry="2.18"></rect><line x1="7" y1="2" x2="7" y2="22"></line><line x1="17" y1="2" x2="17" y2="22"></line><line x1="2" y1="12" x2="22" y2="12"></line><line x1="2" y1="7" x2="7" y2="7"></line><line x1="2" y1="17" x2="7" y2="17"></line><line x1="17" y1="17" x2="22" y2="17"></line><line x1="17" y1="7" x2="22" y2="7"></line></svg>
        <span data-i18n="tabVideo">视频生成</span>
      </div>
      <div id="tab-batch" class="tab" onclick="switchMode('batch')">
        <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" style="margin-right: 6px;"><rect x="3" y="3" width="7" height="7"></rect><rect x="14" y="3" width="7" height="7"></rect><rect x="14" y="14" width="7" height="7"></rect><rect x="3" y="14" width="7" height="7"></rect></svg>
        <span data-i18n="tabBatch">智能多帧</span>
      </div>
      <div id="tab-upscale" class="tab" onclick="switchMode('upscale')">
        <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" style="margin-right: 6px;"><polygon points="13 2 3 14 12 14 11 22 21 10 12 10 13 2"></polygon></svg>
        <span data-i18n="tabUpscale">视频增强</span>
      </div>
      <div id="tab-image" class="tab" onclick="switchMode('image')">
        <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" style="margin-right: 6px;"><rect x="3" y="3" width="18" height="18" rx="2" ry="2"></rect><circle cx="8.5" cy="8.5" r="1.5"></circle><polyline points="21 15 16 10 5 21"></polyline></svg>
        <span data-i18n="tabImage">图像生成</span>
      </div>
    </div>

    <label data-i18n="promptLabel">视觉描述词 (Prompt)</label>
    <textarea id="prompt" data-i18n-placeholder="promptPlaceholder" placeholder="在此输入视觉描述词 (Prompt)..." style="height: 90px; margin-bottom: 0;"></textarea>
  </div>

+ <!-- 视频模式选项 -->
92
+ <div class="sidebar-section" id="video-opts" style="display:none">
93
+ <div class="setting-group">
94
+ <div class="group-title" data-i18n="basicEngine">基础画面 / Basic EngineSpecs</div>
95
+ <div class="flex-row">
96
+ <div class="flex-1">
97
+ <label data-i18n="qualityLevel">清晰度级别</label>
98
+ <select id="vid-quality" onchange="updateResPreview()">
99
+ <option value="1080">1080P Full HD</option>
100
+ <option value="720" selected>720P Standard</option>
101
+ <option value="540">540P Preview</option>
102
+ </select>
103
+ </div>
104
+ <div class="flex-1">
105
+ <label data-i18n="aspectRatio">画幅比例</label>
106
+ <select id="vid-ratio" onchange="updateResPreview()">
107
+ <option value="16:9" data-i18n="ratio169">16:9 电影宽幅</option>
108
+ <option value="9:16" data-i18n="ratio916">9:16 移动竖屏</option>
109
+ </select>
110
+ </div>
111
+ </div>
112
+ <div id="res-preview" class="res-preview-tag" style="margin-top: -5px; margin-bottom: 12px;">最终发送: 1280x704</div>
113
+
114
+ <div class="flex-row">
115
+ <div class="flex-1">
116
+ <label data-i18n="fpsLabel">帧率 (FPS)</label>
117
+ <select id="vid-fps">
118
+ <option value="24" selected>24 FPS</option>
119
+ <option value="25">25 FPS</option>
120
+ <option value="30">30 FPS</option>
121
+ <option value="48">48 FPS</option>
122
+ <option value="60">60 FPS</option>
123
+ </select>
124
+ </div>
125
+ <div class="flex-1">
126
+ <label data-i18n="durationLabel">时长 (秒)</label>
127
+ <input type="number" id="vid-duration" value="5" min="1" max="30" step="1">
128
+ </div>
129
+ </div>
130
+
131
+ <label style="margin-top: 12px;" data-i18n="cameraMotion">镜头运动方式</label>
132
+ <select id="vid-motion">
133
+ <option value="static" selected data-i18n="motionStatic">Static (静止机位)</option>
134
+ <option value="dolly_in" data-i18n="motionDollyIn">Dolly In (推近)</option>
135
+ <option value="dolly_out" data-i18n="motionDollyOut">Dolly Out (拉远)</option>
136
+ <option value="dolly_left" data-i18n="motionDollyLeft">Dolly Left (向左)</option>
137
+ <option value="dolly_right" data-i18n="motionDollyRight">Dolly Right (向右)</option>
138
+ <option value="jib_up" data-i18n="motionJibUp">Jib Up (升臂)</option>
139
+ <option value="jib_down" data-i18n="motionJibDown">Jib Down (降臂)</option>
140
+ <option value="focus_shift" data-i18n="motionFocus">Focus Shift (焦点)</option>
141
+ </select>
142
+ <div class="checkbox-container" style="margin-top: 8px;">
143
+ <input type="checkbox" id="vid-audio" checked>
144
+ <label for="vid-audio" data-i18n="audioGen">生成 AI 环境音 (Audio Gen)</label>
145
+ </div>
146
+
147
+ <label style="margin-top: 12px;" data-i18n="selectModel">选择模型</label>
148
+ <select id="vid-model" style="margin-bottom: 8px;">
149
+ <option value="" data-i18n="defaultModel">使用默认模型</option>
150
+ </select>
151
+ <label data-i18n="selectLora">选择 LoRA</label>
152
+ <select id="vid-lora" onchange="updateLoraStrength()" style="margin-bottom: 8px;">
153
+ <option value="" data-i18n="noLora">不使用 LoRA</option>
154
+ </select>
155
+ <div id="lora-strength-container" style="display: none; align-items: center; gap: 8px; margin-top: 8px;">
156
+ <label data-i18n="loraStrength" style="margin: 0; white-space: nowrap; min-width: 65px;">LoRA 强度</label>
157
+ <input type="range" id="lora-strength" min="0.1" max="2.0" step="0.1" value="1.0" style="flex: 1; margin: 0;" oninput="document.getElementById('lora-strength-val').textContent = this.value">
158
+ <span id="lora-strength-val" style="font-size: 12px; color: var(--accent); width: 22px; text-align: right; flex-shrink: 0;">1.0</span>
159
+ </div>
160
+ </div>
161
+
162
+ <!-- 生成媒介组 -->
163
+ <div class="setting-group" id="video-source-group">
164
+ <div class="group-title" data-i18n="genSource">生成媒介 / Generation Source</div>
165
+
166
+ <div class="flex-row" style="margin-bottom: 10px;">
167
+ <div class="flex-1">
168
+ <label data-i18n="startFrame">起始帧 (首帧)</label>
169
+ <div class="upload-zone" id="start-frame-drop-zone" onclick="document.getElementById('start-frame-input').click()">
170
+ <div class="clear-img-overlay" id="clear-start-frame-overlay" onclick="event.stopPropagation(); clearFrame('start')">×</div>
171
+ <div id="start-frame-placeholder">
172
+ <div class="upload-icon">🖼️</div>
+ <div class="upload-text" data-i18n="uploadStart">上传首帧</div>
+ </div>
+ <img id="start-frame-preview" class="preview-thumb">
+ <input type="file" id="start-frame-input" accept="image/*" style="display:none" onchange="handleFrameUpload(this.files[0], 'start')">
+ </div>
+ <input type="hidden" id="start-frame-path">
+ </div>
+ <div class="flex-1">
+ <label data-i18n="endFrame">结束帧 (尾帧)</label>
+ <div class="upload-zone" id="end-frame-drop-zone" onclick="document.getElementById('end-frame-input').click()">
+ <div class="clear-img-overlay" id="clear-end-frame-overlay" onclick="event.stopPropagation(); clearFrame('end')">×</div>
+ <div id="end-frame-placeholder">
+ <div class="upload-icon">🏁</div>
+ <div class="upload-text" data-i18n="uploadEnd">上传尾帧 (可选)</div>
+ </div>
+ <img id="end-frame-preview" class="preview-thumb">
+ <input type="file" id="end-frame-input" accept="image/*" style="display:none" onchange="handleFrameUpload(this.files[0], 'end')">
+ </div>
+ <input type="hidden" id="end-frame-path">
+ </div>
+ </div>
+
+ <div class="flex-row">
+ <div class="flex-1">
+ <label data-i18n="refAudio">参考音频 (A2V)</label>
+ <div class="upload-zone" id="audio-drop-zone" onclick="document.getElementById('vid-audio-input').click()">
+ <div class="clear-img-overlay" id="clear-audio-overlay" onclick="event.stopPropagation(); clearUploadedAudio()">×</div>
+ <div id="audio-upload-placeholder">
+ <div class="upload-icon">🎵</div>
+ <div class="upload-text" data-i18n="uploadAudio">点击上传音频</div>
+ </div>
+ <div id="audio-upload-status" style="display:none;">
+ <div class="upload-icon" style="color:var(--accent); opacity:1;">✔️</div>
+ <div id="audio-filename-status" class="upload-text"></div>
+ </div>
+ <input type="file" id="vid-audio-input" accept="audio/*" style="display:none" onchange="handleAudioUpload(this.files[0])">
+ </div>
+ <input type="hidden" id="uploaded-audio-path">
+ </div>
+ </div>
+ <div style="font-size: 10px; color: var(--text-dim); text-align: center; margin-top: 5px;" data-i18n="sourceHint">
+ 💡 若仅上传首帧 = 图生视频/音视频;若同时上传首尾帧 = 首尾插帧。
+ </div>
+ </div>
+ </div>
+
+ <!-- Image mode options -->
+ <div id="image-opts" class="sidebar-section" style="display:none">
+ <label data-i18n="imgPreset">预设分辨率 (Presets)</label>
+ <select id="img-res-preset" onchange="applyImgPreset(this.value)">
+ <option value="1024x1024" data-i18n="imgOptSquare">1:1 Square (1024x1024)</option>
+ <option value="1280x720" data-i18n="imgOptLand">16:9 Landscape (1280x720)</option>
+ <option value="720x1280" data-i18n="imgOptPort">9:16 Portrait (720x1280)</option>
+ <option value="custom" data-i18n="imgOptCustom">Custom 自定义...</option>
+ </select>
+
+ <div id="img-custom-res" class="flex-row" style="margin-top: 10px;">
+ <div class="flex-1"><label data-i18n="width">宽度</label><input type="number" id="img-w" value="1024" onchange="updateImgResPreview()"></div>
+ <div class="flex-1"><label data-i18n="height">高度</label><input type="number" id="img-h" value="1024" onchange="updateImgResPreview()"></div>
+ </div>
+ <div id="img-res-preview" class="res-preview-tag">最终发送: 1024x1024</div>
+
+ <div class="label-group" style="margin-top: 15px;">
+ <label data-i18n="samplingSteps">采样步数 (Steps)</label>
+ <span class="val-badge" id="stepsVal">28</span>
+ </div>
+ <div class="slider-container">
+ <input type="range" id="img-steps" min="1" max="50" value="28" oninput="document.getElementById('stepsVal').innerText=this.value">
+ </div>
+ </div>
+
+ <!-- Upscale mode options -->
+ <div id="upscale-opts" class="sidebar-section" style="display:none">
+ <div class="setting-group">
+ <label data-i18n="upscaleSource">待超分视频 (Source)</label>
+ <div class="upload-zone" id="upscale-drop-zone" onclick="document.getElementById('upscale-video-input').click()" style="margin-bottom: 0;">
+ <div class="clear-img-overlay" id="clear-upscale-overlay" onclick="event.stopPropagation(); clearUpscaleVideo()">×</div>
+ <div id="upscale-placeholder">
+ <div class="upload-icon">📹</div>
+ <div class="upload-text" data-i18n="upscaleUpload">拖入低分辨率视频片段</div>
+ </div>
+ <div id="upscale-status" style="display:none;">
+ <div class="upload-icon" style="color:var(--accent); opacity:1;">✔️</div>
+ <div id="upscale-filename" class="upload-text"></div>
+ </div>
+ <input type="file" id="upscale-video-input" accept="video/*" style="display:none" onchange="handleUpscaleVideoUpload(this.files[0])">
+ </div>
+ <input type="hidden" id="upscale-video-path">
+ </div>
+
+ <div class="setting-group">
+ <label data-i18n="targetRes">目标分辨率</label>
+ <select id="upscale-res" style="margin-bottom: 0;">
+ <option value="1080p" data-i18n="upscale1080">1080P Full HD (2x)</option>
+ <option value="720p" data-i18n="upscale720">720P HD</option>
+ </select>
+ </div>
+ </div>
+
+ <!-- Smart multi-frame mode -->
+ <div class="sidebar-section" id="batch-opts" style="display:none">
+ <div class="setting-group">
+ <div class="group-title" data-i18n="smartMultiFrameGroup">智能多帧</div>
+ <div class="smart-param-mode-label" data-i18n="workflowModeLabel">工作流模式(点击切换)</div>
+ <div class="smart-param-modes" role="radiogroup" aria-label="工作流模式">
+ <label class="smart-param-mode-opt">
+ <input type="radio" name="batch-workflow" value="single" checked onchange="onBatchWorkflowChange()">
+ <span class="smart-param-mode-title" data-i18n="wfSingle">单次多关键帧</span>
+ </label>
+ <label class="smart-param-mode-opt">
+ <input type="radio" name="batch-workflow" value="segments" onchange="onBatchWorkflowChange()">
+ <span class="smart-param-mode-title" data-i18n="wfSegments">分段拼接</span>
+ </label>
+ </div>
+
+ <label data-i18n="uploadImages">上传图片</label>
+ <div class="upload-zone" id="batch-images-drop-zone" onclick="document.getElementById('batch-images-input').click()" style="min-height: 72px; margin-bottom: 0;">
+ <div id="batch-images-placeholder">
+ <div class="upload-icon">📁</div>
+ <div class="upload-text" data-i18n="uploadMulti1">点击或拖入多张图片</div>
+ <div class="upload-hint" data-i18n="uploadMulti2">支持一次选多张,可多次添加</div>
+ </div>
+ <input type="file" id="batch-images-input" accept="image/*" multiple style="display:none" onchange="handleBatchImagesUpload(this.files, true)">
+ </div>
+ <input type="hidden" id="batch-images-path">
+
+ <div class="batch-thumb-strip-wrap" id="batch-thumb-strip-wrap" style="display: none;">
+ <div class="batch-thumb-strip-head">
+ <span class="batch-thumb-strip-title" data-i18n="batchStripTitle">已选图片 · 顺序 = 播放先后</span>
+ <span class="batch-thumb-strip-hint" data-i18n="batchStripHint">在缩略图上按住拖动排序;松手落入虚线框位置</span>
+ </div>
+ <div class="batch-images-container" id="batch-images-container"></div>
+ </div>
+
+ <div style="font-size: 10px; color: var(--text-dim); margin-bottom: 12px; margin-top: 10px; line-height: 1.45;" data-i18n-html="batchFfmpegHint">
+ 💡 <strong>分段模式</strong>:2 张 = 1 段;3 张 = 2 段再拼接。<strong>单次模式</strong>:几张图就几个 latent 锚点,一条视频出片。<br>
+ 多段需 <code style="font-size:9px;">ffmpeg</code>:装好后加 PATH,或设环境变量 <code style="font-size:9px;">LTX_FFMPEG_PATH</code>,或在 <code style="font-size:9px;">%LOCALAPPDATA%\LTXDesktop\ffmpeg_path.txt</code> 第一行写 ffmpeg.exe 完整路径。
+ </div>
+
+ <label style="margin-top: 4px;" data-i18n="globalPromptLabel">本页全局补充词(可选)</label>
+ <textarea id="batch-common-prompt" data-i18n-placeholder="globalPromptPh" placeholder="与顶部主 Prompt 叠加;单次模式与分段模式均可用" style="width: 100%; height: 56px; margin-bottom: 10px; padding: 8px; font-size: 11px; box-sizing: border-box; resize: vertical; border-radius: 8px; border: 1px solid var(--border); background: var(--item); color: var(--text);"></textarea>
+
+ <label style="margin-top: 8px;" data-i18n="bgmLabel">成片配乐(可选,统一音轨)</label>
+ <div class="upload-zone" id="batch-audio-drop-zone" onclick="document.getElementById('batch-audio-input').click()" style="min-height: 44px; margin-bottom: 8px; position: relative;">
+ <div class="clear-img-overlay" id="clear-batch-audio-overlay" onclick="event.stopPropagation(); clearBatchBackgroundAudio()" style="display: none;">×</div>
+ <div id="batch-audio-placeholder">
+ <div class="upload-text" style="font-size: 11px;" data-i18n="bgmUploadHint">上传一条完整 BGM(生成完成后会替换整段成片的音轨)</div>
+ </div>
+ <div id="batch-audio-status" style="display: none; font-size: 11px; color: var(--accent);"></div>
+ <input type="file" id="batch-audio-input" accept="audio/*" style="display:none" onchange="handleBatchBackgroundAudioUpload(this.files[0])">
+ </div>
+ <input type="hidden" id="batch-background-audio-path">
+
+ <div id="batch-segments-container" style="margin-top: 15px;"></div>
+ </div>
+
+ <div class="setting-group">
+ <div class="group-title" data-i18n="basicEngine">基础画面 / Basic Engine Specs</div>
+ <div class="flex-row">
+ <div class="flex-1">
+ <label data-i18n="qualityLevel">清晰度级别</label>
+ <select id="batch-quality" onchange="updateBatchResPreview()">
+ <option value="1080">1080P Full HD</option>
+ <option value="720" selected>720P Standard</option>
+ <option value="540">540P Preview</option>
+ </select>
+ </div>
+ <div class="flex-1">
+ <label data-i18n="aspectRatio">画幅比例</label>
+ <select id="batch-ratio" onchange="updateBatchResPreview()">
+ <option value="16:9" data-i18n="ratio169">16:9 电影宽幅</option>
+ <option value="9:16" data-i18n="ratio916">9:16 移动竖屏</option>
+ </select>
+ </div>
+ </div>
+ <div id="batch-res-preview" class="res-preview-tag" style="margin-top: -5px; margin-bottom: 12px;">最终发送: 1280x704</div>
+
+ <label data-i18n="selectModel">选择模型</label>
+ <select id="batch-model" style="margin-bottom: 8px;">
+ <option value="" data-i18n="defaultModel">使用默认模型</option>
+ </select>
+ <label data-i18n="selectLora">选择 LoRA</label>
+ <select id="batch-lora" onchange="updateBatchLoraStrength()" style="margin-bottom: 8px;">
+ <option value="" data-i18n="noLora">不使用 LoRA</option>
+ </select>
+ <div id="batch-lora-strength-container" style="display: none;">
+ <label data-i18n="loraStrength">LoRA 强度</label>
+ <input type="range" id="batch-lora-strength" min="0.1" max="2.0" step="0.1" value="1.2" style="width: 100%;" oninput="document.getElementById('batch-lora-strength-val').textContent = this.value">
+ <span id="batch-lora-strength-val" style="font-size: 12px; color: var(--accent);">1.2</span>
+ </div>
+ </div>
+ </div>
+
+ <div style="padding: 0 30px 30px 30px;">
+ <button class="btn-primary" id="mainBtn" onclick="run()" data-i18n="mainRender">开始渲染</button>
+ </div>
+ </aside>
+
+ <main class="workspace">
+ <section class="viewer" id="viewer-section">
+ <div class="monitor" id="viewer">
+ <div id="loading-txt" data-i18n="waitingTask">等待分配渲染任务...</div>
+ <img id="res-img" src="">
+ <div id="video-wrapper" style="width:100%; height:100%; display:none; max-height:100%; align-items:center; justify-content:center;">
+ <video id="res-video" autoplay loop playsinline></video>
+ </div>
+ <div class="progress-container"><div id="progress-fill"></div></div>
+ </div>
+ </section>
+
+ <!-- Drag Handle -->
+ <div id="resize-handle" style="
+ height: 5px; background: transparent; cursor: row-resize;
+ flex-shrink: 0; position: relative; z-index: 50;
+ display: flex; align-items: center; justify-content: center;
+ " data-i18n-title="resizeHandleTitle" title="拖动调整面板高度">
+ <div style="width: 40px; height: 3px; background: var(--border); border-radius: 999px; pointer-events: none;"></div>
+ </div>
+
+ <section class="library" id="library-section">
+ <div style="display: flex; justify-content: space-between; margin-bottom: 15px; align-items: center; border-bottom: 1px solid var(--border); padding-bottom: 10px;">
+ <div style="display: flex; gap: 20px;">
+ <span id="tab-history" style="font-size: 11px; font-weight: 800; color: var(--accent); cursor: pointer; border-bottom: 2px solid var(--accent); padding-bottom: 11px; margin-bottom: -11px;" onclick="switchLibTab('history')" data-i18n="libHistory">历史资产 / ASSETS</span>
+ <span id="tab-log" style="font-size: 11px; font-weight: 800; color: var(--text-dim); cursor: pointer; border-bottom: 2px solid transparent; padding-bottom: 11px; margin-bottom: -11px;" onclick="switchLibTab('log')" data-i18n="libLog">系统日志 / LOGS</span>
+ </div>
+ <button type="button" onclick="fetchHistory(currentHistoryPage)" style="background: var(--item); border: 1px solid var(--border); border-radius: 6px; color: var(--text-dim); font-size: 11px; padding: 4px 10px; cursor: pointer;" data-i18n="refresh">刷新</button>
+ </div>
+
+ <div id="log-container" style="display: none; flex: 1; flex-direction: column;">
+ <div id="log" data-i18n="logReady">> LTX-2 Studio Ready. Expecting commands...</div>
+ </div>
+
+ <div id="history-wrapper">
+ <div id="history-container"></div>
+ </div>
+ <div id="pagination-bar" style="display:none;"></div>
+ </section>
+ </main>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/plyr/3.7.8/plyr.min.js"></script>
+ <script src="i18n.js"></script>
+ <script src="index.js"></script>
+
+ </body>
+ </html>
LTX2.3-1.0.4/UI/index.js ADDED
@@ -0,0 +1,2114 @@
+ // ─── Resizable panel drag logic ───────────────────────────────────────────────
+ (function() {
+ const handle = document.getElementById('resize-handle');
+ const viewer = document.getElementById('viewer-section');
+ const library = document.getElementById('library-section');
+ const workspace = document.querySelector('.workspace');
+ let dragging = false, startY = 0, startVH = 0;
+
+ handle.addEventListener('mousedown', (e) => {
+ dragging = true;
+ startY = e.clientY;
+ startVH = viewer.getBoundingClientRect().height;
+ document.body.style.cursor = 'row-resize';
+ document.body.style.userSelect = 'none';
+ handle.querySelector('div').style.background = 'var(--accent)';
+ e.preventDefault();
+ });
+ document.addEventListener('mousemove', (e) => {
+ if (!dragging) return;
+ const wsH = workspace.getBoundingClientRect().height;
+ const delta = e.clientY - startY;
+ let newVH = startVH + delta;
+ // Clamp: viewer min 150px, library min 100px
+ newVH = Math.max(150, Math.min(wsH - 100 - 5, newVH));
+ viewer.style.flex = 'none';
+ viewer.style.height = newVH + 'px';
+ library.style.flex = '1';
+ });
+ document.addEventListener('mouseup', () => {
+ if (dragging) {
+ dragging = false;
+ document.body.style.cursor = '';
+ document.body.style.userSelect = '';
+ handle.querySelector('div').style.background = 'var(--border)';
+ }
+ });
+ // Hover highlight
+ handle.addEventListener('mouseenter', () => { handle.querySelector('div').style.background = 'var(--text-dim)'; });
+ handle.addEventListener('mouseleave', () => { if (!dragging) handle.querySelector('div').style.background = 'var(--border)'; });
+ })();
+ // ──────────────────────────────────────────────────────────────────────────────
+
+
+
+
+
+
+ // Dynamically derive the backend origin from the current hostname, pinned to port 3000
+ const BASE = `http://${window.location.hostname}:3000`;
+
+ function _t(k) {
+ return typeof window.t === 'function' ? window.t(k) : k;
+ }
+
+ let currentMode = 'image';
+ let pollInterval = null;
+ let availableModels = [];
+ let availableLoras = [];
+
+ // Simple debug log so the console confirms which backend address is in use
+ console.log("Connecting to Backend API at:", BASE);
+
+ // Model scanning
+ async function scanModels() {
+ try {
+ const url = `${BASE}/api/models`;
+ console.log("Scanning models from:", url);
+ const res = await fetch(url);
+ const data = await res.json().catch(() => ({}));
+ console.log("Models response:", res.status, data);
+ if (!res.ok) {
+ const msg = data.message || data.error || res.statusText;
+ addLog(`❌ 模型扫描失败 (${res.status}): ${msg}`);
+ availableModels = [];
+ updateModelDropdown();
+ updateBatchModelDropdown();
+ return;
+ }
+ availableModels = data.models || [];
+ updateModelDropdown();
+ updateBatchModelDropdown();
+ if (availableModels.length > 0) {
+ addLog(`📂 已扫描到 ${availableModels.length} 个模型: ${availableModels.map(m => m.name).join(', ')}`);
+ }
+ } catch (e) {
+ console.log("Model scan error:", e);
+ addLog(`❌ 模型扫描异常: ${e.message || e}`);
+ }
+ }
+
+ function updateModelDropdown() {
+ const select = document.getElementById('vid-model');
+ if (!select) return;
+ select.innerHTML = '<option value="">' + _t('defaultModel') + '</option>';
+ availableModels.forEach(model => {
+ const opt = document.createElement('option');
+ opt.value = model.path;
+ opt.textContent = model.name;
+ select.appendChild(opt);
+ });
+ }
+
+ // LoRA scanning
+ async function scanLoras() {
+ try {
+ const url = `${BASE}/api/loras`;
+ console.log("Scanning LoRA from:", url);
+ const res = await fetch(url);
+ const data = await res.json().catch(() => ({}));
+ console.log("LoRA response:", res.status, data);
+ if (!res.ok) {
+ const msg = data.message || data.error || res.statusText;
+ addLog(`❌ LoRA 扫描失败 (${res.status}): ${msg}`);
+ availableLoras = [];
+ updateLoraDropdown();
+ updateBatchLoraDropdown();
+ return;
+ }
+ availableLoras = data.loras || [];
+ updateLoraDropdown();
+ updateBatchLoraDropdown();
+ if (data.loras_dir) {
+ const hintEl = document.getElementById('lora-placement-hint');
+ if (hintEl) {
+ const tpl = _t('loraPlacementHintWithDir');
+ hintEl.innerHTML = tpl.replace(
+ '{dir}',
+ escapeHtmlAttr(data.models_dir || data.loras_dir)
+ );
+ }
+ }
+ if (availableLoras.length > 0) {
+ addLog(`📂 已扫描到 ${availableLoras.length} 个 LoRA: ${availableLoras.map(l => l.name).join(', ')}`);
+ }
+ } catch (e) {
+ console.log("LoRA scan error:", e);
+ addLog(`❌ LoRA 扫描异常: ${e.message || e}`);
+ }
+ }
+
+ function updateLoraDropdown() {
+ const select = document.getElementById('vid-lora');
+ if (!select) return;
+ select.innerHTML = '<option value="">' + _t('noLora') + '</option>';
+ availableLoras.forEach(lora => {
+ const opt = document.createElement('option');
+ opt.value = lora.path;
+ opt.textContent = lora.name;
+ select.appendChild(opt);
+ });
+ }
+
+ function updateLoraStrength() {
+ const select = document.getElementById('vid-lora');
+ const container = document.getElementById('lora-strength-container');
+ if (select && container) {
+ container.style.display = select.value ? 'flex' : 'none';
+ }
+ }
+
+ // Refresh the batch-mode model and LoRA dropdowns
+ function updateBatchModelDropdown() {
+ const select = document.getElementById('batch-model');
+ if (!select) return;
+ select.innerHTML = '<option value="">' + _t('defaultModel') + '</option>';
+ availableModels.forEach(model => {
+ const opt = document.createElement('option');
+ opt.value = model.path;
+ opt.textContent = model.name;
+ select.appendChild(opt);
+ });
+ }
+
+ function updateBatchLoraDropdown() {
+ const select = document.getElementById('batch-lora');
+ if (!select) return;
+ select.innerHTML = '<option value="">' + _t('noLora') + '</option>';
+ availableLoras.forEach(lora => {
+ const opt = document.createElement('option');
+ opt.value = lora.path;
+ opt.textContent = lora.name;
+ select.appendChild(opt);
+ });
+ }
+
+ // Populate the batch-mode dropdowns on page load
+ function initBatchDropdowns() {
+ updateBatchModelDropdown();
+ updateBatchLoraDropdown();
+ }
+
+ // Removed: custom model/LoRA directory selection and browsing (the backend's default scan paths are kept)
+
+ // Scan models and LoRAs on page load (using the backend's default directory rules)
+ (function() {
+ ['vid-quality', 'batch-quality'].forEach((id) => {
+ const sel = document.getElementById(id);
+ if (sel && sel.value === '544') sel.value = '540';
+ });
+
+ setTimeout(() => {
+ scanModels();
+ scanLoras();
+ initBatchDropdowns();
+ }, 1500);
+ })();
+
+ // Automatic resolution calculation
+ function updateResPreview() {
+ const q = document.getElementById('vid-quality').value; // "1080", "720", "540"
+ const r = document.getElementById('vid-ratio').value;
+
+ // Key fix: the backend parser expects label-style values ("1080p", "720p", "540p")
+ let resLabel = q === "1080" ? "1080p" : q === "720" ? "720p" : "540p";
+
+ /* Match the backend: width and height are both multiples of 64 (an LTX core requirement) */
+ let resDisplay;
+ if (r === "16:9") {
+ resDisplay = q === "1080" ? "1920x1088" : q === "720" ? "1280x704" : "1024x576";
+ } else {
+ resDisplay = q === "1080" ? "1088x1920" : q === "720" ? "704x1280" : "576x1024";
+ }
+
+ document.getElementById('res-preview').innerText = `${_t('resPreviewPrefix')}: ${resLabel} (${resDisplay})`;
+ return resLabel;
+ }
+
+ // Image resolution preview
+ function updateImgResPreview() {
+ const w = document.getElementById('img-w').value;
+ const h = document.getElementById('img-h').value;
+ document.getElementById('img-res-preview').innerText = `${_t('resPreviewPrefix')}: ${w}x${h}`;
+ }
+
+ // Batch-mode resolution preview
+ function updateBatchResPreview() {
+ const q = document.getElementById('batch-quality').value;
+ const r = document.getElementById('batch-ratio').value;
+ let resLabel = q === "1080" ? "1080p" : q === "720" ? "720p" : "540p";
+ let resDisplay;
+ if (r === "16:9") {
+ resDisplay = q === "1080" ? "1920x1088" : q === "720" ? "1280x704" : "1024x576";
+ } else {
+ resDisplay = q === "1080" ? "1088x1920" : q === "720" ? "704x1280" : "576x1024";
+ }
+ document.getElementById('batch-res-preview').innerText = `${_t('resPreviewPrefix')}: ${resLabel} (${resDisplay})`;
+ return resLabel;
+ }
+
+ // Toggle the batch-mode LoRA strength slider
+ function updateBatchLoraStrength() {
+ const select = document.getElementById('batch-lora');
+ const container = document.getElementById('batch-lora-strength-container');
+ if (select && container) {
+ container.style.display = select.value ? 'flex' : 'none';
+ }
+ }
+
+ // Apply an image resolution preset
+ function applyImgPreset(val) {
+ if (val === "custom") {
+ document.getElementById('img-custom-res').style.display = 'flex';
+ } else {
+ const [w, h] = val.split('x');
+ document.getElementById('img-w').value = w;
+ document.getElementById('img-h').value = h;
+ updateImgResPreview();
+ // Hide the custom area, or keep it visible for fine-tuning
+ // document.getElementById('img-custom-res').style.display = 'none';
+ }
+ }
+
+
+
+ // Handle start/end frame image upload
+ async function handleFrameUpload(file, frameType) {
+ if (!file) return;
+
+ const preview = document.getElementById(`${frameType}-frame-preview`);
+ const placeholder = document.getElementById(`${frameType}-frame-placeholder`);
+ const clearOverlay = document.getElementById(`clear-${frameType}-frame-overlay`);
+
+ const previewReader = new FileReader();
+ previewReader.onload = (e) => {
+ preview.src = e.target.result;
+ preview.style.display = 'block';
+ placeholder.style.display = 'none';
+ clearOverlay.style.display = 'flex';
+ };
+ previewReader.readAsDataURL(file);
+
+ const reader = new FileReader();
+ reader.onload = async (e) => {
+ const b64Data = e.target.result;
+ addLog(`正在上传 ${frameType === 'start' ? '起始帧' : '结束帧'}: ${file.name}...`);
+ try {
+ const res = await fetch(`${BASE}/api/system/upload-image`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ image: b64Data, filename: file.name })
+ });
+ const data = await res.json();
+ if (res.ok && data.path) {
+ document.getElementById(`${frameType}-frame-path`).value = data.path;
+ addLog(`✅ ${frameType === 'start' ? '起始帧' : '结束帧'}上传成功`);
+ } else {
+ throw new Error(data.error || data.detail || "上传失败");
+ }
+ } catch (e) {
+ addLog(`❌ 帧图片上传失败: ${e.message}`);
+ }
+ };
+ reader.readAsDataURL(file);
+ }
+
+ function clearFrame(frameType) {
+ document.getElementById(`${frameType}-frame-input`).value = "";
+ document.getElementById(`${frameType}-frame-path`).value = "";
+ document.getElementById(`${frameType}-frame-preview`).style.display = 'none';
+ document.getElementById(`${frameType}-frame-preview`).src = "";
+ document.getElementById(`${frameType}-frame-placeholder`).style.display = 'block';
+ document.getElementById(`clear-${frameType}-frame-overlay`).style.display = 'none';
+ addLog(`🧹 已清除${frameType === 'start' ? '起始帧' : '结束帧'}`);
+ }
+
+ // Handle reference image upload
+ async function handleImageUpload(file) {
+ if (!file) return;
+
+ // Preview the image
+ const preview = document.getElementById('upload-preview');
+ const placeholder = document.getElementById('upload-placeholder');
+ const clearOverlay = document.getElementById('clear-img-overlay');
+
+ const previewReader = new FileReader();
+ previewReader.onload = (e) => {
+ preview.src = e.target.result;
+ preview.style.display = 'block';
+ placeholder.style.display = 'none';
+ clearOverlay.style.display = 'flex';
+ };
+ previewReader.readAsDataURL(file);
+
+ // Convert to Base64 via FileReader, working around the backend's missing python-multipart dependency
+ const reader = new FileReader();
+ reader.onload = async (e) => {
+ const b64Data = e.target.result;
+ addLog(`正在上传参考图: ${file.name}...`);
+ try {
+ const res = await fetch(`${BASE}/api/system/upload-image`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({
+ image: b64Data,
+ filename: file.name
+ })
+ });
+ const data = await res.json();
+ if (res.ok && data.path) {
+ document.getElementById('uploaded-img-path').value = data.path;
+ addLog(`✅ 参考图上传成功: ${file.name}`);
+ } else {
+ const errMsg = data.error || data.detail || "上传失败";
+ throw new Error(typeof errMsg === 'string' ? errMsg : JSON.stringify(errMsg));
+ }
+ } catch (e) {
+ addLog(`❌ 图片上传失败: ${e.message}`);
+ }
+ };
+ reader.onerror = () => addLog("❌ 读取本地文件失败");
+ reader.readAsDataURL(file);
+ }
+
+ function clearUploadedImage() {
+ document.getElementById('vid-image-input').value = "";
+ document.getElementById('uploaded-img-path').value = "";
+ document.getElementById('upload-preview').style.display = 'none';
+ document.getElementById('upload-preview').src = "";
+ document.getElementById('upload-placeholder').style.display = 'block';
+ document.getElementById('clear-img-overlay').style.display = 'none';
+ addLog("🧹 已清除参考图");
+ }
+
+ // Handle audio upload
+ async function handleAudioUpload(file) {
+ if (!file) return;
+
+ const placeholder = document.getElementById('audio-upload-placeholder');
+ const statusDiv = document.getElementById('audio-upload-status');
+ const filenameStatus = document.getElementById('audio-filename-status');
+ const clearOverlay = document.getElementById('clear-audio-overlay');
+
+ placeholder.style.display = 'none';
+ filenameStatus.innerText = file.name;
+ statusDiv.style.display = 'block';
+ clearOverlay.style.display = 'flex';
+
+ const reader = new FileReader();
+ reader.onload = async (e) => {
+ const b64Data = e.target.result;
+ addLog(`正在上传音频: ${file.name}...`);
+ try {
+ // Reuse the image upload endpoint; the backend already accepts arbitrary file types
+ const res = await fetch(`${BASE}/api/system/upload-image`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({
+ image: b64Data,
+ filename: file.name
+ })
+ });
+ const data = await res.json();
+ if (res.ok && data.path) {
+ document.getElementById('uploaded-audio-path').value = data.path;
+ addLog(`✅ 音频上传成功: ${file.name}`);
+ } else {
+ const errMsg = data.error || data.detail || "上传失败";
+ throw new Error(typeof errMsg === 'string' ? errMsg : JSON.stringify(errMsg));
+ }
+ } catch (e) {
+ addLog(`❌ 音频上传失败: ${e.message}`);
+ }
+ };
+ reader.onerror = () => addLog("❌ 读取本地音频文件失败");
+ reader.readAsDataURL(file);
+ }
+
+ function clearUploadedAudio() {
+ document.getElementById('vid-audio-input').value = "";
+ document.getElementById('uploaded-audio-path').value = "";
+ document.getElementById('audio-upload-placeholder').style.display = 'block';
+ document.getElementById('audio-upload-status').style.display = 'none';
+ document.getElementById('clear-audio-overlay').style.display = 'none';
+ addLog("🧹 已清除音频文件");
+ }
+
+ // Handle upload of a video to be upscaled
+ async function handleUpscaleVideoUpload(file) {
+ if (!file) return;
+ const placeholder = document.getElementById('upscale-placeholder');
+ const statusDiv = document.getElementById('upscale-status');
+ const filenameStatus = document.getElementById('upscale-filename');
+ const clearOverlay = document.getElementById('clear-upscale-overlay');
+
+ filenameStatus.innerText = file.name;
+ placeholder.style.display = 'none';
+ statusDiv.style.display = 'block';
+ clearOverlay.style.display = 'flex';
+
+ const reader = new FileReader();
+ reader.onload = async (e) => {
+ const b64Data = e.target.result;
+ addLog(`正在上传待超分视频: ${file.name}...`);
+ try {
+ const res = await fetch(`${BASE}/api/system/upload-image`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ image: b64Data, filename: file.name })
+ });
+ const data = await res.json();
+ if (res.ok && data.path) {
+ document.getElementById('upscale-video-path').value = data.path;
+ addLog(`✅ 视频上传成功`);
+ } else {
+ throw new Error(data.error || "上传失败");
+ }
+ } catch (e) {
+ addLog(`❌ 视频上传失败: ${e.message}`);
+ }
+ };
+ reader.readAsDataURL(file);
+ }
+
+ function clearUpscaleVideo() {
+ document.getElementById('upscale-video-input').value = "";
+ document.getElementById('upscale-video-path').value = "";
+ document.getElementById('upscale-placeholder').style.display = 'block';
+ document.getElementById('upscale-status').style.display = 'none';
+ document.getElementById('clear-upscale-overlay').style.display = 'none';
+ addLog("🧹 已清除待超分视频");
+ }
+
+ // Initialize drag-and-drop upload logic
+ function initDragAndDrop() {
+ const audioDropZone = document.getElementById('audio-drop-zone');
+ const startFrameDropZone = document.getElementById('start-frame-drop-zone');
+ const endFrameDropZone = document.getElementById('end-frame-drop-zone');
+ const upscaleDropZone = document.getElementById('upscale-drop-zone');
+ const batchImagesDropZone = document.getElementById('batch-images-drop-zone');
+
+ const zones = [audioDropZone, startFrameDropZone, endFrameDropZone, upscaleDropZone, batchImagesDropZone];
+
+ ['dragenter', 'dragover', 'dragleave', 'drop'].forEach(eventName => {
+ zones.forEach(zone => {
+ if (!zone) return;
+ zone.addEventListener(eventName, (e) => {
+ e.preventDefault();
+ e.stopPropagation();
+ }, false);
+ });
+ });
+
+ ['dragenter', 'dragover'].forEach(eventName => {
+ zones.forEach(zone => {
+ if (!zone) return;
+ zone.addEventListener(eventName, () => zone.classList.add('dragover'), false);
+ });
+ });
+
+ ['dragleave', 'drop'].forEach(eventName => {
+ zones.forEach(zone => {
+ if (!zone) return;
+ zone.addEventListener(eventName, () => zone.classList.remove('dragover'), false);
+ });
+ });
+
+ audioDropZone.addEventListener('drop', (e) => {
+ const file = e.dataTransfer.files[0];
+ if (file && file.type.startsWith('audio/')) handleAudioUpload(file);
+ }, false);
+
+ startFrameDropZone.addEventListener('drop', (e) => {
+ const file = e.dataTransfer.files[0];
+ if (file && file.type.startsWith('image/')) handleFrameUpload(file, 'start');
+ }, false);
+
+ endFrameDropZone.addEventListener('drop', (e) => {
+ const file = e.dataTransfer.files[0];
+ if (file && file.type.startsWith('image/')) handleFrameUpload(file, 'end');
+ }, false);
+
+ upscaleDropZone.addEventListener('drop', (e) => {
+ const file = e.dataTransfer.files[0];
+ if (file && file.type.startsWith('video/')) handleUpscaleVideoUpload(file);
+ }, false);
+
+ // Drag-and-drop upload for batch images
+ if (batchImagesDropZone) {
+ batchImagesDropZone.addEventListener('drop', (e) => {
+ e.preventDefault();
+ e.stopPropagation();
+ batchImagesDropZone.classList.remove('dragover');
+ const files = Array.from(e.dataTransfer.files).filter(f => f.type.startsWith('image/'));
+ if (files.length > 0) handleBatchImagesUpload(files);
+ }, false);
+ }
+ }
+
+ // Batch image upload handling
+ let batchImages = [];
+ /** Single-pass multi-keyframe mode: guidance strength is keyed by image path; the gap in seconds between the previous and current keyframe is keyed by segment index 0..n-2 */
+ const batchKfStrengthByPath = {};
+ const batchKfSegDurByIndex = {};
+
+ function escapeHtmlAttr(s) {
+ return String(s)
+ .replace(/&/g, '&amp;')
+ .replace(/"/g, '&quot;')
+ .replace(/</g, '&lt;');
+ }
+
+ function defaultKeyframeStrengthForIndex(i, n) {
+ if (n <= 2) return '1';
+ if (i === 0) return '0.62';
+ if (i === n - 1) return '1';
+ return '0.42';
+ }
+
+ function captureBatchKfTimelineFromDom() {
+ batchImages.forEach((img, i) => {
+ if (!img.path) return;
+ const sEl = document.getElementById(`batch-kf-strength-${i}`);
+ if (sEl) batchKfStrengthByPath[img.path] = sEl.value.trim();
+ });
+ const n = batchImages.length;
+ for (let j = 0; j < n - 1; j++) {
+ const el = document.getElementById(`batch-kf-seg-dur-${j}`);
+ if (el) batchKfSegDurByIndex[j] = el.value.trim();
+ }
+ }
+
+ /** Read gap durations in seconds; fall back to minSeg for invalid values */
+ function readBatchKfSegmentSeconds(n, minSeg) {
+ const seg = [];
+ for (let j = 0; j < n - 1; j++) {
+ let v = parseFloat(document.getElementById(`batch-kf-seg-dur-${j}`)?.value);
+ if (!Number.isFinite(v) || v < minSeg) v = minSeg;
+ seg.push(v);
+ }
+ return seg;
+ }
+
+ function updateBatchKfTimelineDerivedUI() {
+ if (!batchWorkflowIsSingle() || batchImages.length < 2) return;
+ const n = batchImages.length;
+ const minSeg = 0.1;
+ const seg = readBatchKfSegmentSeconds(n, minSeg);
+ let t = 0;
+ for (let i = 0; i < n; i++) {
+ const label = document.getElementById(`batch-kf-anchor-label-${i}`);
+ if (!label) continue;
+ if (i === 0) {
+ label.textContent = `0.0 s · ${_t('batchAnchorStart')}`;
+ } else {
+ t += seg[i - 1];
+ label.textContent =
+ i === n - 1
+ ? `${t.toFixed(1)} s · ${_t('batchAnchorEnd')}`
+ : `${t.toFixed(1)} s`;
+ }
+ }
+ const totalEl = document.getElementById('batch-kf-total-seconds');
+ if (totalEl) {
+ const sum = seg.reduce((a, b) => a + b, 0);
+ totalEl.textContent = sum.toFixed(1);
+ }
+ }
+ async function handleBatchImagesUpload(files, append = true) {
+ if (!files || files.length === 0) return;
+ addLog(`正在上传 ${files.length} 张图片...`);
+
+ for (let i = 0; i < files.length; i++) {
+ const file = files[i];
+ const reader = new FileReader();
+
+ const imgData = await new Promise((resolve) => {
+ reader.onload = async (e) => {
+ const b64Data = e.target.result;
+ try {
+ const res = await fetch(`${BASE}/api/system/upload-image`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ image: b64Data, filename: file.name })
+ });
+ const data = await res.json();
+ if (res.ok && data.path) {
+ resolve({ name: file.name, path: data.path, preview: e.target.result });
+ } else {
+ resolve(null);
+ }
+ } catch (e) {
+ resolve(null);
+ }
+ };
+ reader.readAsDataURL(file);
+ });
+
+ if (imgData) {
+ batchImages.push(imgData);
+ addLog(`✅ 图片 ${i + 1}/${files.length} 上传成功: ${file.name}`);
+ }
+ }
+
+ renderBatchImages();
+ updateBatchSegments();
+ }
+
+ async function handleBatchBackgroundAudioUpload(file) {
+ if (!file) return;
+ const ph = document.getElementById('batch-audio-placeholder');
+ const st = document.getElementById('batch-audio-status');
+ const overlay = document.getElementById('clear-batch-audio-overlay');
+ const reader = new FileReader();
+ reader.onload = async (e) => {
+ const b64Data = e.target.result;
+ addLog(`正在上传成片配乐: ${file.name}...`);
+ try {
+ const res = await fetch(`${BASE}/api/system/upload-image`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ image: b64Data, filename: file.name })
+ });
+ const data = await res.json();
+ if (res.ok && data.path) {
+ const hid = document.getElementById('batch-background-audio-path');
+ if (hid) hid.value = data.path;
+ if (ph) ph.style.display = 'none';
+ if (st) {
+ st.style.display = 'block';
+ st.textContent = '✓ ' + file.name;
+ }
+ if (overlay) overlay.style.display = 'flex';
+ addLog('✅ 成片配乐已上传(将覆盖各片段自带音轨)');
+ } else {
+ addLog(`❌ 配乐上传失败: ${data.error || '未知错误'}`);
+ }
+ } catch (err) {
+ addLog(`❌ 配乐上传失败: ${err.message}`);
+ }
+ };
+ reader.onerror = () => addLog('❌ 读取音频文件失败');
+ reader.readAsDataURL(file);
+ }
+
+ function clearBatchBackgroundAudio() {
+ const hid = document.getElementById('batch-background-audio-path');
+ const inp = document.getElementById('batch-audio-input');
+ if (hid) hid.value = '';
+ if (inp) inp.value = '';
+ const ph = document.getElementById('batch-audio-placeholder');
+ const st = document.getElementById('batch-audio-status');
+ const overlay = document.getElementById('clear-batch-audio-overlay');
+ if (ph) ph.style.display = 'block';
+ if (st) {
+ st.style.display = 'none';
+ st.textContent = '';
+ }
+ if (overlay) overlay.style.display = 'none';
+ addLog('🧹 已清除成片配乐');
+ }
+
+ function syncBatchDropZoneChrome() {
+ const dropZone = document.getElementById('batch-images-drop-zone');
+ const placeholder = document.getElementById('batch-images-placeholder');
+ const stripWrap = document.getElementById('batch-thumb-strip-wrap');
+ if (batchImages.length === 0) {
+ if (dropZone) {
+ dropZone.classList.remove('has-images');
+ const mini = dropZone.querySelector('.upload-placeholder-mini');
+ if (mini) mini.remove();
+ }
+ if (placeholder) placeholder.style.display = 'block';
+ if (stripWrap) stripWrap.style.display = 'none';
+ return;
+ }
+ if (placeholder) placeholder.style.display = 'none';
+ if (dropZone) dropZone.classList.add('has-images');
+ if (stripWrap) stripWrap.style.display = 'block';
+ if (dropZone && !dropZone.querySelector('.upload-placeholder-mini')) {
+ const mini = document.createElement('div');
+ mini.className = 'upload-placeholder-mini';
+ mini.innerHTML = '<span>' + _t('batchAddMore') + '</span>';
+ dropZone.appendChild(mini);
+ }
+ }
+
+ let batchDragPlaceholderEl = null;
+ let batchPointerState = null;
+ let batchPendingPhX = null;
+ let batchPhMoveRaf = null;
+
+ function batchRemoveFloatingGhost() {
+ document.querySelectorAll('.batch-thumb-floating-ghost').forEach((n) => n.remove());
+ }
+
+ function batchCancelPhMoveRaf() {
+ if (batchPhMoveRaf != null) {
+ cancelAnimationFrame(batchPhMoveRaf);
+ batchPhMoveRaf = null;
+ }
+ batchPendingPhX = null;
+ }
+
+ function batchEnsurePlaceholder() {
+ if (batchDragPlaceholderEl && batchDragPlaceholderEl.isConnected) return batchDragPlaceholderEl;
+ const el = document.createElement('div');
+ el.className = 'batch-thumb-drop-slot';
+ el.setAttribute('aria-hidden', 'true');
+ batchDragPlaceholderEl = el;
+ return el;
+ }
+
+ function batchRemovePlaceholder() {
+ if (batchDragPlaceholderEl && batchDragPlaceholderEl.parentNode) {
+ batchDragPlaceholderEl.parentNode.removeChild(batchDragPlaceholderEl);
+ }
+ }
+
+ function batchComputeInsertIndex(container, placeholder) {
+ let t = 0;
+ for (const child of container.children) {
+ if (child === placeholder) return t;
+ if (child.classList && child.classList.contains('batch-image-wrapper')) {
+ if (!child.classList.contains('batch-thumb--source')) t++;
+ }
+ }
+ return t;
+ }
+
+ function batchMovePlaceholderFromPoint(container, clientX) {
+ const ph = batchEnsurePlaceholder();
+ const wrappers = [...container.querySelectorAll('.batch-image-wrapper')];
+ let insertBefore = null;
+ for (const w of wrappers) {
+ if (w.classList.contains('batch-thumb--source')) continue;
+ const r = w.getBoundingClientRect();
+ if (clientX < r.left + r.width / 2) {
+ insertBefore = w;
+ break;
+ }
+ }
+ if (insertBefore === null) {
+ const vis = wrappers.filter((w) => !w.classList.contains('batch-thumb--source'));
+ const last = vis[vis.length - 1];
+ if (last) {
+ if (last.nextSibling) {
+ container.insertBefore(ph, last.nextSibling);
+ } else {
+ container.appendChild(ph);
+ }
+ } else {
+ container.appendChild(ph);
+ }
+ } else {
+ container.insertBefore(ph, insertBefore);
+ }
+ }
+
+ function batchFlushPlaceholderMove() {
+ batchPhMoveRaf = null;
+ if (!batchPointerState || batchPendingPhX == null) return;
+ batchMovePlaceholderFromPoint(batchPointerState.container, batchPendingPhX);
+ }
+
+ function handleBatchPointerMove(e) {
+ if (!batchPointerState) return;
+ e.preventDefault();
+ const st = batchPointerState;
+ st.ghostTX = e.clientX - st.offsetX;
+ st.ghostTY = e.clientY - st.offsetY;
+ batchPendingPhX = e.clientX;
+ if (batchPhMoveRaf == null) {
+ batchPhMoveRaf = requestAnimationFrame(batchFlushPlaceholderMove);
+ }
+ }
+
+ function batchGhostFrame() {
+ const st = batchPointerState;
+ if (!st || !st.ghostEl || !st.ghostEl.isConnected) {
+ return;
+ }
+ const t = 0.42;
+ st.ghostCX += (st.ghostTX - st.ghostCX) * t;
+ st.ghostCY += (st.ghostTY - st.ghostCY) * t;
+ st.ghostEl.style.transform =
+ `translate3d(${st.ghostCX}px,${st.ghostCY}px,0) scale(1.06) rotate(-1deg)`;
+ st.ghostRaf = requestAnimationFrame(batchGhostFrame);
+ }
+
+ function batchStartGhostLoop() {
+ const st = batchPointerState;
+ if (!st || !st.ghostEl) return;
+ if (st.ghostRaf != null) cancelAnimationFrame(st.ghostRaf);
+ st.ghostRaf = requestAnimationFrame(batchGhostFrame);
+ }
+
+ function batchEndPointerDrag(e) {
+ if (!batchPointerState) return;
+ if (e.pointerId !== batchPointerState.pointerId) return;
+ const st = batchPointerState;
+
+ batchCancelPhMoveRaf();
+ if (st.ghostRaf != null) {
+ cancelAnimationFrame(st.ghostRaf);
+ st.ghostRaf = null;
+ }
+ if (st.ghostEl && st.ghostEl.parentNode) {
+ st.ghostEl.remove();
+ }
+ batchPointerState = null;
+
+ document.removeEventListener('pointermove', handleBatchPointerMove);
+ document.removeEventListener('pointerup', batchEndPointerDrag);
+ document.removeEventListener('pointercancel', batchEndPointerDrag);
+
+ try {
+ if (st.wrapperEl) st.wrapperEl.releasePointerCapture(st.pointerId);
+ } catch (_) {}
+
+ const { fromIndex, container, wrapperEl } = st;
+ container.classList.remove('is-batch-settling');
+ if (!batchDragPlaceholderEl || !batchDragPlaceholderEl.parentNode) {
+ if (wrapperEl) wrapperEl.classList.remove('batch-thumb--source');
+ renderBatchImages();
+ updateBatchSegments();
+ return;
+ }
+ const to = batchComputeInsertIndex(container, batchDragPlaceholderEl);
+ batchRemovePlaceholder();
+ if (wrapperEl) wrapperEl.classList.remove('batch-thumb--source');
+
+ if (fromIndex !== to && fromIndex >= 0 && to >= 0) {
+ const [item] = batchImages.splice(fromIndex, 1);
+ batchImages.splice(to, 0, item);
+ updateBatchSegments();
+ }
+ renderBatchImages();
+ }
+
+ function handleBatchPointerDown(e) {
+ if (batchPointerState) return;
+ if (e.button !== 0) return;
+ if (e.target.closest && e.target.closest('.batch-thumb-remove')) return;
+
+ const wrapper = e.currentTarget;
+ const container = document.getElementById('batch-images-container');
+ if (!container) return;
+
+ e.preventDefault();
+ e.stopPropagation();
+
+ const fromIndex = parseInt(wrapper.dataset.index, 10);
+ if (Number.isNaN(fromIndex)) return;
+
+ const rect = wrapper.getBoundingClientRect();
+ const offsetX = e.clientX - rect.left;
+ const offsetY = e.clientY - rect.top;
+ const startLeft = rect.left;
+ const startTop = rect.top;
+
+ const ghost = document.createElement('div');
+ ghost.className = 'batch-thumb-floating-ghost';
+ const gImg = document.createElement('img');
+ const srcImg = wrapper.querySelector('img');
+ gImg.src = srcImg ? srcImg.src : '';
+ gImg.alt = '';
+ ghost.appendChild(gImg);
+ document.body.appendChild(ghost);
+
+ batchPointerState = {
+ fromIndex,
+ pointerId: e.pointerId,
+ wrapperEl: wrapper,
+ container,
+ ghostEl: ghost,
+ offsetX,
+ offsetY,
+ ghostTX: e.clientX - offsetX,
+ ghostTY: e.clientY - offsetY,
+ ghostCX: startLeft,
+ ghostCY: startTop,
+ ghostRaf: null
+ };
+
+ ghost.style.transform =
+ `translate3d(${startLeft}px,${startTop}px,0) scale(1.06) rotate(-1deg)`;
+
+ container.classList.add('is-batch-settling');
+ wrapper.classList.add('batch-thumb--source');
+ const ph = batchEnsurePlaceholder();
+ container.insertBefore(ph, wrapper.nextSibling);
+ /* Do not recompute slots right at pointerdown; restore neighbor transitions only after a double rAF, so this frame's layout settles before the animation starts */
+ requestAnimationFrame(() => {
+ requestAnimationFrame(() => {
+ container.classList.remove('is-batch-settling');
+ });
+ });
+
+ batchStartGhostLoop();
+
+ document.addEventListener('pointermove', handleBatchPointerMove, { passive: false });
+ document.addEventListener('pointerup', batchEndPointerDrag);
+ document.addEventListener('pointercancel', batchEndPointerDrag);
+
+ try {
+ wrapper.setPointerCapture(e.pointerId);
+ } catch (_) {}
+ }
+
+ function removeBatchImage(index) {
+ if (index < 0 || index >= batchImages.length) return;
+ batchImages.splice(index, 1);
+ renderBatchImages();
+ updateBatchSegments();
+ }
+
+ // Horizontal thumbnail strip: pointer-based drag sorting (HTML5 DnD is unreliable in WebView/some browsers)
+ function renderBatchImages() {
+ const container = document.getElementById('batch-images-container');
+ if (!container) return;
+
+ syncBatchDropZoneChrome();
+ batchRemovePlaceholder();
+ batchCancelPhMoveRaf();
+ batchRemoveFloatingGhost();
+ batchPointerState = null;
+ container.classList.remove('is-batch-settling');
+ container.innerHTML = '';
+
+ batchImages.forEach((img, index) => {
+ const wrapper = document.createElement('div');
+ wrapper.className = 'batch-image-wrapper';
+ wrapper.dataset.index = String(index);
+ wrapper.title = _t('batchThumbDrag');
+
+ const imgWrap = document.createElement('div');
+ imgWrap.className = 'batch-thumb-img-wrap';
+ const im = document.createElement('img');
+ im.className = 'batch-thumb-img';
+ im.src = img.preview;
+ im.alt = img.name || '';
+ im.draggable = false;
+ imgWrap.appendChild(im);
+
+ const del = document.createElement('button');
+ del.type = 'button';
+ del.className = 'batch-thumb-remove';
+ del.title = _t('batchThumbRemove');
+ del.setAttribute('aria-label', _t('batchThumbRemove'));
+ del.textContent = '×';
+ del.addEventListener('pointerdown', (ev) => ev.stopPropagation());
+ del.addEventListener('click', (ev) => {
+ ev.stopPropagation();
+ removeBatchImage(index);
+ });
+
+ wrapper.appendChild(imgWrap);
+ wrapper.appendChild(del);
+
+ wrapper.addEventListener('pointerdown', handleBatchPointerDown);
+
+ container.appendChild(wrapper);
+ });
+ }
+
+ function batchWorkflowIsSingle() {
+ const r = document.querySelector('input[name="batch-workflow"]:checked');
+ return !!(r && r.value === 'single');
+ }
+
+ function onBatchWorkflowChange() {
+ updateBatchSegments();
+ }
+
+ // Update the clip settings UI (segmented mode) or the single-pass multi-keyframe settings
+ function updateBatchSegments() {
+ const container = document.getElementById('batch-segments-container');
+ if (!container) return;
+
+ if (batchImages.length < 2) {
+ container.innerHTML =
+ '<div style="color: var(--text-dim); font-size: 11px;">' +
+ escapeHtmlAttr(_t('batchNeedTwo')) +
+ '</div>';
+ return;
+ }
+
+ if (batchWorkflowIsSingle()) {
+ if (batchImages.length >= 2) captureBatchKfTimelineFromDom();
+ const n = batchImages.length;
+ const defaultTotal = 8;
+ const defaultSeg =
+ n > 1 ? (defaultTotal / (n - 1)).toFixed(1) : '4';
+ let blocks = '';
+ batchImages.forEach((img, i) => {
+ const path = img.path || '';
+ const stDef = defaultKeyframeStrengthForIndex(i, n);
+ const stStored = batchKfStrengthByPath[path];
+ const stVal = stStored !== undefined && stStored !== ''
+ ? escapeHtmlAttr(stStored)
+ : stDef;
+ const prev = escapeHtmlAttr(img.preview || '');
+ if (i > 0) {
+ const j = i - 1;
+ const sdStored = batchKfSegDurByIndex[j];
+ const segVal =
+ sdStored !== undefined && sdStored !== ''
+ ? escapeHtmlAttr(sdStored)
+ : defaultSeg;
+ blocks += `
+ <div class="batch-kf-gap">
+ <div class="batch-kf-gap-rail" aria-hidden="true"></div>
+ <div class="batch-kf-gap-inner">
+ <span class="batch-kf-gap-ix">${i}→${i + 1}</span>
+ <label class="batch-kf-seg-field">
+ <input type="number" class="batch-kf-seg-input" id="batch-kf-seg-dur-${j}"
+ value="${segVal}" min="0.1" max="120" step="0.1"
+ title="${escapeHtmlAttr(_t('batchGapInputTitle'))}"
+ oninput="updateBatchKfTimelineDerivedUI()">
+ <span class="batch-kf-gap-unit">${escapeHtmlAttr(_t('batchSec'))}</span>
+ </label>
+ </div>
+ </div>`;
+ }
+ blocks += `
+ <div class="batch-kf-kcard">
+ <div class="batch-kf-kcard-head">
+ <img class="batch-kf-kthumb" src="${prev}" alt="">
+ <div class="batch-kf-kcard-titles">
+ <span class="batch-kf-ktitle">${escapeHtmlAttr(_t('batchKfTitle'))} ${i + 1} / ${n}</span>
+ <span class="batch-kf-anchor" id="batch-kf-anchor-label-${i}">—</span>
+ </div>
+ </div>
+ <div class="batch-kf-kcard-ctrl">
+ <label class="batch-kf-klabel">${escapeHtmlAttr(_t('batchStrength'))}
+ <input type="number" id="batch-kf-strength-${i}" value="${stVal}" min="0.1" max="1" step="0.01"
+ title="${escapeHtmlAttr(_t('batchStrengthTitle'))}">
+ </label>
+ </div>
+ </div>`;
+ });
+ container.innerHTML = `
+ <div class="batch-kf-panel" id="batch-kf-timeline-root">
+ <div class="batch-kf-panel-hd">
+ <div class="batch-kf-panel-title">${escapeHtmlAttr(_t('batchKfPanelTitle'))}</div>
+ <div class="batch-kf-total-pill" title="${escapeHtmlAttr(_t('batchTotalPillTitle'))}">
+ ${escapeHtmlAttr(_t('batchTotalDur'))} <strong id="batch-kf-total-seconds">—</strong> <span class="batch-kf-total-unit">${escapeHtmlAttr(_t('batchTotalSec'))}</span>
+ </div>
+ </div>
+ <p class="batch-kf-panel-hint">${escapeHtmlAttr(_t('batchPanelHint'))}</p>
+ <div class="batch-kf-timeline-col">
+ ${blocks}
+ </div>
+ </div>`;
+ updateBatchKfTimelineDerivedUI();
+ return;
+ }
+
+ let html =
+ '<div style="font-size: 12px; font-weight: bold; margin-bottom: 10px;">' +
+ escapeHtmlAttr(_t('batchSegTitle')) +
+ '</div>';
+
+ for (let i = 0; i < batchImages.length - 1; i++) {
+ const segPh = escapeHtmlAttr(_t('batchSegPromptPh'));
+ html += `
+ <div style="background: var(--item); border-radius: 8px; padding: 10px; margin-bottom: 10px; border: 1px solid var(--border);">
+ <div style="display: flex; align-items: center; justify-content: space-between; margin-bottom: 8px;">
+ <div style="display: flex; align-items: center; gap: 8px;">
+ <img src="${batchImages[i].preview}" style="width: 40px; height: 40px; border-radius: 4px; object-fit: cover;">
+ <span style="color: var(--accent);">→</span>
+ <img src="${batchImages[i + 1].preview}" style="width: 40px; height: 40px; border-radius: 4px; object-fit: cover;">
+ <span style="font-size: 11px; color: var(--text-dim);">${escapeHtmlAttr(_t('batchSegClip'))} ${i + 1}</span>
+ </div>
+ <div style="display: flex; align-items: center; gap: 6px;">
+ <label style="font-size: 10px; color: var(--text-dim);">${escapeHtmlAttr(_t('batchSegDuration'))}</label>
+ <input type="number" id="batch-segment-duration-${i}" value="5" min="1" max="30" step="1" style="width: 50px; padding: 4px; font-size: 11px;">
+ <span style="font-size: 10px; color: var(--text-dim);">${escapeHtmlAttr(_t('batchSegSec'))}</span>
+ </div>
+ </div>
+ <div>
+ <label style="font-size: 10px;">${escapeHtmlAttr(_t('batchSegPrompt'))}</label>
+ <textarea id="batch-segment-prompt-${i}" placeholder="${segPh}" style="width: 100%; height: 60px; padding: 6px; font-size: 11px; box-sizing: border-box; resize: vertical;"></textarea>
+ </div>
+ </div>
+ `;
+ }
+
+ container.innerHTML = html;
+ }
+
+ let _isGeneratingFlag = false;
+
+ // Poll system status
+ async function checkStatus() {
+ try {
+ const h = await fetch(`${BASE}/health`).then(r => r.json()).catch(() => ({status: "error"}));
+ const g = await fetch(`${BASE}/api/gpu-info`).then(r => r.json()).catch(() => ({gpu_info: {}}));
+ const p = await fetch(`${BASE}/api/generation/progress`).then(r => r.json()).catch(() => ({progress: 0}));
+ const sysGpus = await fetch(`${BASE}/api/system/list-gpus`).then(r => r.json()).catch(() => ({gpus: []}));
+
+ const activeGpu = (sysGpus.gpus || []).find(x => x.active) || (sysGpus.gpus || [])[0] || {};
+ const gpuName = activeGpu.name || g.gpu_info?.name || "GPU";
+
+ const s = document.getElementById('sys-status');
+ const indicator = document.getElementById('sys-indicator');
+
+ const isReady = h.status === "ok" || h.status === "ready" || h.models_loaded;
+ const backendActive = (p && p.progress > 0);
+
+ if (_isGeneratingFlag || backendActive) {
+ s.innerText = `${gpuName}: ${_t('sysBusy')}`;
+ if(indicator) indicator.className = 'indicator-busy';
+ } else {
+ s.innerText = isReady ? `${gpuName}: ${_t('sysOnline')}` : `${gpuName}: ${_t('sysStarting')}`;
+ if(indicator) indicator.className = isReady ? 'indicator-ready' : 'indicator-offline';
+ }
+ s.style.color = "var(--text-dim)";
+
+ const vUsedMB = g.gpu_info?.vramUsed || 0;
+ const vTotalMB = activeGpu.vram_mb || g.gpu_info?.vram || 32768;
+ const vUsedGB = vUsedMB / 1024;
+ const vTotalGB = vTotalMB / 1024;
+
+ document.getElementById('vram-fill').style.width = (vUsedMB / vTotalMB * 100) + "%";
+ document.getElementById('vram-text').innerText = `${vUsedGB.toFixed(1)} / ${vTotalGB.toFixed(0)} GB`;
+ } catch(e) { document.getElementById('sys-status').innerText = _t('sysOffline'); }
+ }
+ setInterval(checkStatus, 1000); // Increased to once per second for real-time monitoring
+ checkStatus();
+ initDragAndDrop();
+ listGpus(); // Initialize the GPU list
+ // Removed: custom output directory (keep the backend default path)
+
+ updateResPreview();
+ updateBatchResPreview();
+ updateImgResPreview();
+ refreshPromptPlaceholder();
+
+ window.onUiLanguageChanged = function () {
+ updateResPreview();
+ updateBatchResPreview();
+ updateImgResPreview();
+ refreshPromptPlaceholder();
+ if (typeof currentMode !== 'undefined' && currentMode === 'batch') {
+ updateBatchSegments();
+ }
+ updateModelDropdown();
+ updateLoraDropdown();
+ updateBatchModelDropdown();
+ updateBatchLoraDropdown();
+ };
+
+ async function setOutputDir() {
+ const dir = document.getElementById('global-out-dir').value.trim();
+ localStorage.setItem('output_dir', dir);
+ try {
+ const res = await fetch(`${BASE}/api/system/set-dir`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ directory: dir })
+ });
+ if (res.ok) {
+ addLog(`✅ 存储路径更新成功! 当前路径: ${dir || _t('defaultPath')}`);
+ if (typeof fetchHistory === 'function') fetchHistory(currentHistoryPage);
+ }
+ } catch (e) {
+ addLog(`❌ 设置路径时连接异常: ${e.message}`);
+ }
+ }
+
+ async function browseOutputDir() {
+ try {
+ const res = await fetch(`${BASE}/api/system/browse-dir`);
+ const data = await res.json();
+ if (data.status === "success" && data.directory) {
+ document.getElementById('global-out-dir').value = data.directory;
+ // auto apply immediately
+ setOutputDir();
+ addLog(`📂 检测到新路径,已自动套用!`);
+ } else if (data.error) {
+ addLog(`❌ 内部系统权限拦截了弹窗: ${data.error}`);
+ }
+ } catch (e) {
+ addLog(`❌ 无法调出文件夹浏览弹窗, 请直接复制粘贴绝对路径。`);
+ }
+ }
+
+ async function getOutputDir() {
+ try {
+ const res = await fetch(`${BASE}/api/system/get-dir`);
+ const data = await res.json();
+ if (data.directory && data.directory.indexOf('LTXDesktop') === -1 && document.getElementById('global-out-dir')) {
+ document.getElementById('global-out-dir').value = data.directory;
+ }
+ } catch (e) {}
+ }
+
+ async function saveLoraDir() {
+ const input = document.getElementById('lora-dir-input');
+ const status = document.getElementById('lora-dir-status');
+ if (!input || !status) return;
+
+ const loraDir = input.value.trim();
+ try {
+ const res = await fetch(`${BASE}/api/lora-dir`, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ loraDir: loraDir })
+ });
+ const data = await res.json();
+ if (data && data.status === 'ok') {
+ status.textContent = '✓ 已保存';
+ status.style.color = '#4caf50';
+ setTimeout(() => { status.textContent = ''; }, 3000);
+ } else {
+ status.textContent = '✗ 保存失败: ' + (data.message || JSON.stringify(data));
+ status.style.color = '#f44336';
+ }
+ } catch (e) {
+ status.textContent = '✗ 保存失败: ' + e.message;
+ status.style.color = '#f44336';
+ }
+ }
+
+ async function loadLoraDir() {
+ try {
+ const res = await fetch(`${BASE}/api/lora-dir`);
+ const data = await res.json();
+ if (data && document.getElementById('lora-dir-input')) {
+ document.getElementById('lora-dir-input').value = data.loraDir || '';
+ }
+ } catch (e) {}
+ }
+
+ function switchMode(m) {
1287
+ currentMode = m;
1288
+ document.getElementById('tab-image').classList.toggle('active', m === 'image');
1289
+ document.getElementById('tab-video').classList.toggle('active', m === 'video');
1290
+ document.getElementById('tab-batch').classList.toggle('active', m === 'batch');
1291
+ document.getElementById('tab-upscale').classList.toggle('active', m === 'upscale');
1292
+
1293
+ document.getElementById('image-opts').style.display = m === 'image' ? 'block' : 'none';
1294
+ document.getElementById('video-opts').style.display = m === 'video' ? 'block' : 'none';
1295
+ document.getElementById('batch-opts').style.display = m === 'batch' ? 'block' : 'none';
1296
+ document.getElementById('upscale-opts').style.display = m === 'upscale' ? 'block' : 'none';
1297
+ if (m === 'batch') updateBatchSegments();
1298
+
1299
+ // Refresh the prompt placeholder to match the newly selected mode
1300
+ refreshPromptPlaceholder();
1301
+ }
1302
+
1303
+ function refreshPromptPlaceholder() {
1304
+ const pe = document.getElementById('prompt');
1305
+ if (!pe) return;
1306
+ pe.placeholder =
1307
+ currentMode === 'upscale' ? _t('promptPlaceholderUpscale') : _t('promptPlaceholder');
1308
+ }
1309
+
1310
+ function showGeneratingView() {
1311
+ if (!_isGeneratingFlag) return;
1312
+ const resImg = document.getElementById('res-img');
1313
+ const videoWrapper = document.getElementById('video-wrapper');
1314
+ if (resImg) resImg.style.display = "none";
1315
+ if (videoWrapper) videoWrapper.style.display = "none";
1316
+ if (player) {
1317
+ try { player.stop(); } catch(_) {}
1318
+ } else {
1319
+ const vid = document.getElementById('res-video');
1320
+ if (vid) { vid.pause(); vid.removeAttribute('src'); vid.load(); }
1321
+ }
1322
+ const loadingTxt = document.getElementById('loading-txt');
1323
+ if (loadingTxt) loadingTxt.style.display = "flex";
1324
+ }
1325
+
1326
+ async function run() {
1327
+ // Guard against double clicks (_isGeneratingFlag is more reliable than btn.disabled)
1328
+ if (_isGeneratingFlag) {
1329
+ addLog(_t('warnGenerating'));
1330
+ return;
1331
+ }
1332
+
1333
+ const btn = document.getElementById('mainBtn');
1334
+ const promptEl = document.getElementById('prompt');
1335
+ const prompt = promptEl ? promptEl.value.trim() : '';
1336
+
1337
+ function batchHasUsablePrompt() {
1338
+ if (prompt) return true;
1339
+ const c = document.getElementById('batch-common-prompt')?.value?.trim();
1340
+ if (c) return true;
1341
+ if (typeof batchWorkflowIsSingle === 'function' && batchWorkflowIsSingle()) {
1342
+ return false;
1343
+ }
1344
+ if (batchImages.length < 2) return false;
1345
+ for (let i = 0; i < batchImages.length - 1; i++) {
1346
+ if (document.getElementById(`batch-segment-prompt-${i}`)?.value?.trim()) return true;
1347
+ }
1348
+ return false;
1349
+ }
1350
+
1351
+ if (currentMode !== 'upscale') {
1352
+ if (currentMode === 'batch') {
1353
+ if (!batchHasUsablePrompt()) {
1354
+ addLog(_t('warnBatchPrompt'));
1355
+ return;
1356
+ }
1357
+ } else if (!prompt) {
1358
+ addLog(_t('warnNeedPrompt'));
1359
+ return;
1360
+ }
1361
+ }
1362
+
1363
+ if (!btn) {
1364
+ console.error('mainBtn not found');
1365
+ return;
1366
+ }
1367
+
1368
+ // Set the flag and disable the button first; a top-level try/finally guarantees they are released
1369
+ _isGeneratingFlag = true;
1370
+ btn.disabled = true;
1371
+
1372
+ try {
1373
+ // Touch UI elements defensively (null-check each one, since getElementById can return null after Plyr takes over)
1374
+ const loader = document.getElementById('loading-txt');
1375
+ const resImg = document.getElementById('res-img');
1376
+ const resVideo = document.getElementById('res-video');
1377
+
1378
+ if (loader) {
1379
+ loader.style.display = "flex";
1380
+ loader.style.flexDirection = "column";
1381
+ loader.style.alignItems = "center";
1382
+ loader.style.gap = "12px";
1383
+ loader.innerHTML = `
1384
+ <div class="spinner" style="width:48px;height:48px;border-width:4px;color:var(--accent);"></div>
1385
+ <div id="loader-step-text" style="font-size:13px;font-weight:700;color:var(--text-sub);">${escapeHtmlAttr(_t('loaderGpuAlloc'))}</div>
1386
+ `;
1387
+ }
1388
+ if (resImg) resImg.style.display = "none";
1389
+ // Must hide the entire video-wrapper (Plyr's outer container), otherwise the video overlaps the spinner on the next run
1390
+ const videoWrapper = document.getElementById('video-wrapper');
1391
+ if (videoWrapper) videoWrapper.style.display = "none";
1392
+ if (player) { try { player.stop(); } catch(_) {} }
1393
+ else if (resVideo) { resVideo.pause?.(); resVideo.removeAttribute?.('src'); }
1394
+
1395
+ checkStatus();
1396
+
1397
+ // Reset the backend state lock (non-critical; failure does not affect the main flow)
1398
+ try { await fetch(`${BASE}/api/system/reset-state`, { method: 'POST' }); } catch(_) {}
1399
+
1400
+ startProgressPolling();
1401
+
1402
+ // ---- New: insert a "rendering in progress" thumbnail card into the history area ----
1403
+ const historyContainer = document.getElementById('history-container');
1404
+ if (historyContainer) {
1405
+ const old = document.getElementById('current-loading-card');
1406
+ if (old) old.remove();
1407
+ const loadingCard = document.createElement('div');
1408
+ loadingCard.className = 'history-card loading-card';
1409
+ loadingCard.id = 'current-loading-card';
1410
+ loadingCard.onclick = showGeneratingView;
1411
+ loadingCard.innerHTML = `
1412
+ <div class="spinner"></div>
1413
+ <div id="loading-card-step" style="font-size:10px;color:var(--text-dim);margin-top:4px;">等待中...</div>
1414
+ `;
1415
+ historyContainer.prepend(loadingCard);
1416
+ }
1417
+
1418
+ // ---- Build the request ----
1419
+ let endpoint, payload;
1420
+ if (currentMode === 'image') {
1421
+ const w = parseInt(document.getElementById('img-w').value);
1422
+ const h = parseInt(document.getElementById('img-h').value);
1423
+ endpoint = '/api/generate-image';
1424
+ payload = {
1425
+ prompt, width: w, height: h,
1426
+ numSteps: parseInt(document.getElementById('img-steps').value),
1427
+ numImages: 1
1428
+ };
1429
+ addLog(`正在发起图像渲染: ${w}x${h}, Steps: ${payload.numSteps}`);
1430
+
1431
+ } else if (currentMode === 'video') {
1432
+ const res = updateResPreview();
1433
+ const dur = parseFloat(document.getElementById('vid-duration').value);
1434
+ const fps = document.getElementById('vid-fps').value;
1435
+ if (dur > 20) addLog(_t('warnVideoLong').replace('{n}', String(dur)));
1436
+
1437
+ const audio = document.getElementById('vid-audio').checked ? "true" : "false";
1438
+ const audioPath = document.getElementById('uploaded-audio-path').value;
1439
+ const startFramePathValue = document.getElementById('start-frame-path').value;
1440
+ const endFramePathValue = document.getElementById('end-frame-path').value;
1441
+
1442
+ let finalImagePath = null, finalStartFramePath = null, finalEndFramePath = null;
1443
+ if (startFramePathValue && endFramePathValue) {
1444
+ finalStartFramePath = startFramePathValue;
1445
+ finalEndFramePath = endFramePathValue;
1446
+ } else if (startFramePathValue) {
1447
+ finalImagePath = startFramePathValue;
1448
+ }
1449
+
1450
+ endpoint = '/api/generate';
1451
+ const modelSelect = document.getElementById('vid-model');
1452
+ const loraSelect = document.getElementById('vid-lora');
1453
+ const loraStrengthInput = document.getElementById('lora-strength');
1454
+ const modelPath = modelSelect ? modelSelect.value : '';
1455
+ const loraPath = loraSelect ? loraSelect.value : '';
1456
+ const loraStrength = loraStrengthInput ? (parseFloat(loraStrengthInput.value) || 1.0) : 1.0;
1457
+ console.log("modelPath:", modelPath);
1458
+ console.log("loraPath:", loraPath);
1459
+ console.log("loraStrength:", loraStrength);
1460
+ payload = {
1461
+ prompt, resolution: res, model: "ltx-2",
1462
+ cameraMotion: document.getElementById('vid-motion').value,
1463
+ negativePrompt: "low quality, blurry, noisy, static noise, distorted",
1464
+ duration: String(dur), fps, audio,
1465
+ imagePath: finalImagePath,
1466
+ audioPath: audioPath || null,
1467
+ startFramePath: finalStartFramePath,
1468
+ endFramePath: finalEndFramePath,
1469
+ aspectRatio: document.getElementById('vid-ratio').value,
1470
+ modelPath: modelPath || null,
1471
+ loraPath: loraPath || null,
1472
+ loraStrength: loraStrength,
1473
+ };
1474
+ addLog(`正在发起视频渲染: ${res}, 时长: ${dur}s, FPS: ${fps}, 模型: ${modelPath ? modelPath.split(/[/\\]/).pop() : _t('modelDefaultLabel')}, LoRA: ${loraPath ? loraPath.split(/[/\\]/).pop() : _t('loraNoneLabel')}`);
1475
+
1476
+ } else if (currentMode === 'upscale') {
1477
+ const videoPath = document.getElementById('upscale-video-path').value;
1478
+ const targetRes = document.getElementById('upscale-res').value;
1479
+ if (!videoPath) throw new Error(_t('errUpscaleNoVideo'));
1480
+ endpoint = '/api/system/upscale-video';
1481
+ payload = { video_path: videoPath, resolution: targetRes, prompt: "high quality, detailed, 4k", strength: 0.7 };
1482
+ addLog(`正在发起视频超分: 目标 ${targetRes}`);
1483
+ } else if (currentMode === 'batch') {
1484
+ const res = updateBatchResPreview();
1485
+ const commonPromptEl = document.getElementById('batch-common-prompt');
1486
+ const commonPrompt = commonPromptEl ? commonPromptEl.value : '';
1487
+ const modelSelect = document.getElementById('batch-model');
1488
+ const loraSelect = document.getElementById('batch-lora');
1489
+ const loraStrengthInput = document.getElementById('batch-lora-strength');
1490
+ const modelPath = modelSelect ? modelSelect.value : '';
1491
+ const loraPath = loraSelect ? loraSelect.value : '';
1492
+ const loraStrength = loraStrengthInput ? (parseFloat(loraStrengthInput.value) || 1.2) : 1.2;
1493
+
1494
+ if (batchImages.length < 2) {
1495
+ throw new Error(_t('errBatchMinImages'));
1496
+ }
1497
+
1498
+ if (batchWorkflowIsSingle()) {
1499
+ captureBatchKfTimelineFromDom();
1500
+ const fps = document.getElementById('vid-fps').value;
1501
+ const parts = [prompt.trim(), commonPrompt.trim()].filter(Boolean);
1502
+ const combinedPrompt = parts.join(', ');
1503
+ if (!combinedPrompt) {
1504
+ throw new Error(_t('errSingleKfPrompt'));
1505
+ }
1506
+ const nKf = batchImages.length;
1507
+ const minSeg = 0.1;
1508
+ const segDurs = [];
1509
+ for (let j = 0; j < nKf - 1; j++) {
1510
+ let v = parseFloat(document.getElementById(`batch-kf-seg-dur-${j}`)?.value);
1511
+ if (!Number.isFinite(v) || v < minSeg) v = minSeg;
1512
+ segDurs.push(v);
1513
+ }
1514
+ const sumSec = segDurs.reduce((a, b) => a + b, 0);
1515
+ const dur = Math.max(2, Math.ceil(sumSec - 1e-9));
1516
+ const times = [0];
1517
+ let acc = 0;
1518
+ for (let j = 0; j < nKf - 1; j++) {
1519
+ acc += segDurs[j];
1520
+ times.push(acc);
1521
+ }
1522
+ const strengths = [];
1523
+ for (let i = 0; i < nKf; i++) {
1524
+ const sEl = document.getElementById(`batch-kf-strength-${i}`);
1525
+ let sv = parseFloat(sEl?.value);
1526
+ if (!Number.isFinite(sv)) {
1527
+ sv = parseFloat(defaultKeyframeStrengthForIndex(i, nKf));
1528
+ }
1529
+ if (!Number.isFinite(sv)) sv = 1;
1530
+ sv = Math.max(0.1, Math.min(1.0, sv));
1531
+ strengths.push(sv);
1532
+ }
1533
+ endpoint = '/api/generate';
1534
+ payload = {
1535
+ prompt: combinedPrompt,
1536
+ resolution: res,
1537
+ model: "ltx-2",
1538
+ cameraMotion: document.getElementById('vid-motion').value,
1539
+ negativePrompt: "low quality, blurry, noisy, static noise, distorted",
1540
+ duration: String(dur),
1541
+ fps,
1542
+ audio: "false",
1543
+ imagePath: null,
1544
+ audioPath: null,
1545
+ startFramePath: null,
1546
+ endFramePath: null,
1547
+ keyframePaths: batchImages.map((b) => b.path),
1548
+ keyframeStrengths: strengths,
1549
+ keyframeTimes: times,
1550
+ aspectRatio: document.getElementById('batch-ratio').value,
1551
+ modelPath: modelPath || null,
1552
+ loraPath: loraPath || null,
1553
+ loraStrength: loraStrength,
1554
+ };
1555
+ addLog(
1556
+ `单次多关键帧: ${nKf} 锚点, 轴长合计 ${sumSec.toFixed(1)}s → 请求时长 ${dur}s, ${res}, FPS ${fps}`
1557
+ );
1558
+ } else {
1559
+ const segments = [];
1560
+ for (let i = 0; i < batchImages.length - 1; i++) {
1561
+ const duration = parseFloat(document.getElementById(`batch-segment-duration-${i}`)?.value || 5);
1562
+ const segmentPrompt = document.getElementById(`batch-segment-prompt-${i}`)?.value || '';
1563
+ const segParts = [prompt.trim(), commonPrompt.trim(), segmentPrompt.trim()].filter(Boolean);
1564
+ const combinedSegPrompt = segParts.join(', ');
1565
+ segments.push({
1566
+ startImage: batchImages[i].path,
1567
+ endImage: batchImages[i + 1].path,
1568
+ duration: duration,
1569
+ prompt: combinedSegPrompt
1570
+ });
1571
+ }
1572
+
1573
+ endpoint = '/api/generate-batch';
1574
+ const bgAudioEl = document.getElementById('batch-background-audio-path');
1575
+ const backgroundAudioPath = (bgAudioEl && bgAudioEl.value) ? bgAudioEl.value.trim() : null;
1576
+ payload = {
1577
+ segments: segments,
1578
+ resolution: res,
1579
+ model: "ltx-2",
1580
+ aspectRatio: document.getElementById('batch-ratio').value,
1581
+ modelPath: modelPath || null,
1582
+ loraPath: loraPath || null,
1583
+ loraStrength: loraStrength,
1584
+ negativePrompt: "low quality, blurry, noisy, static noise, distorted",
1585
+ backgroundAudioPath: backgroundAudioPath || null
1586
+ };
1587
+ addLog(`分段拼接: ${segments.length} 段, ${res}${backgroundAudioPath ? ',含统一配乐' : ''}`);
1588
+ }
1589
+ }
1590
+
1591
+ // ---- Send the request ----
1592
+ const res = await fetch(BASE + endpoint, {
1593
+ method: 'POST',
1594
+ headers: { 'Content-Type': 'application/json' },
1595
+ body: JSON.stringify(payload)
1596
+ });
1597
+ const data = await res.json();
1598
+ if (!res.ok) {
1599
+ const errMsg = data.error || data.detail || "API 拒绝了请求";
1600
+ throw new Error(typeof errMsg === 'string' ? errMsg : JSON.stringify(errMsg));
1601
+ }
1602
+
1603
+ // ---- Show the result ----
1604
+ const rawPath = data.image_paths ? data.image_paths[0] : data.video_path;
1605
+ if (rawPath) {
1606
+ try { displayOutput(rawPath); } catch (dispErr) { addLog(`⚠️ 播放器显示异常: ${dispErr.message}`); }
1607
+ }
1608
+
1609
+ // Force-refresh history (ignore the isLoadingHistory flag so the newly generated video shows up immediately)
1610
+ setTimeout(() => {
1611
+ isLoadingHistory = false; // 强制重置状态
1612
+ if (typeof fetchHistory === 'function') fetchHistory(1);
1613
+ }, 500);
1614
+
1615
+ } catch (e) {
1616
+ const errText = e && e.message ? e.message : String(e);
1617
+ addLog(`❌ 渲染中断: ${errText}`);
1618
+ const loader = document.getElementById('loading-txt');
1619
+ if (loader) {
1620
+ loader.style.display = 'flex';
1621
+ loader.textContent = '';
1622
+ const span = document.createElement('span');
1623
+ span.style.cssText = 'color:var(--text-sub);font-size:13px;padding:12px;text-align:center;';
1624
+ span.textContent = `渲染失败:${errText}`;
1625
+ loader.appendChild(span);
1626
+ }
1627
+
1628
+ } finally {
1629
+ // ✅ Always executed no matter what happened, so the button can always be clicked again
1630
+ _isGeneratingFlag = false;
1631
+ btn.disabled = false;
1632
+ stopProgressPolling();
1633
+ checkStatus();
1634
+ // Automatically free VRAM after generation (not awaited, to avoid blocking the UI unlock)
1635
+ setTimeout(() => { clearGpu(); }, 500);
1636
+ }
1637
+ }
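The single-pass multi-keyframe branch of run() above converts per-segment durations into cumulative keyframe times (flooring each segment at 0.1 s) and clamps per-keyframe strengths into [0.1, 1.0]. A minimal standalone sketch of that arithmetic; the helper names are illustrative, not part of the app:

```javascript
// Build cumulative keyframe times from per-segment durations,
// flooring each segment at a minimum length (mirrors the logic in run()).
function buildKeyframeTimes(segDurs, minSeg = 0.1) {
  const times = [0];
  let acc = 0;
  for (const d of segDurs) {
    acc += (Number.isFinite(d) && d >= minSeg) ? d : minSeg;
    times.push(acc);
  }
  return times;
}

// Clamp a keyframe strength into the accepted [0.1, 1.0] range,
// falling back to a default when the input is not a number.
function clampStrength(v, fallback = 1) {
  const s = Number.isFinite(v) ? v : fallback;
  return Math.max(0.1, Math.min(1.0, s));
}

console.log(buildKeyframeTimes([2, 3, 0.05])); // third segment floored to minSeg
```

The requested duration in run() is then `Math.max(2, Math.ceil(sum - 1e-9))` over the same segment sum.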
1638
+
1639
+ async function clearGpu() {
1640
+ const btn = document.getElementById('clearGpuBtn');
+ if (!btn) return; // button may be absent; bail out instead of throwing
+ btn.disabled = true;
1642
+ btn.innerText = _t('clearingVram');
1643
+ try {
1644
+ const res = await fetch(`${BASE}/api/system/clear-gpu`, {
1645
+ method: 'POST',
1646
+ headers: { 'Content-Type': 'application/json' }
1647
+ });
1648
+ const data = await res.json();
1649
+ if (res.ok) {
1650
+ addLog(`🧹 显存清理成功: ${data.message}`);
1651
+ // Trigger an immediate status refresh
1652
+ checkStatus();
1653
+ setTimeout(checkStatus, 1000);
1654
+ } else {
1655
+ const errMsg = data.error || data.detail || "后端未实现此接口 (404)";
1656
+ throw new Error(errMsg);
1657
+ }
1658
+ } catch(e) {
1659
+ addLog(`❌ 清理显存失败: ${e.message}`);
1660
+ } finally {
1661
+ btn.disabled = false;
1662
+ btn.innerText = _t('clearVram');
1663
+ }
1664
+ }
1665
+
1666
+ async function listGpus() {
1667
+ try {
1668
+ const res = await fetch(`${BASE}/api/system/list-gpus`);
1669
+ const data = await res.json();
1670
+ if (res.ok && data.gpus) {
1671
+ const selector = document.getElementById('gpu-selector');
+ if (!selector) return; // selector may be absent in stripped-down layouts
+ selector.innerHTML = data.gpus.map(g =>
1673
+ `<option value="${g.id}" ${g.active ? 'selected' : ''}>GPU ${g.id}: ${g.name} (${g.vram})</option>`
1674
+ ).join('');
1675
+
1676
+ // Update the currently displayed GPU name
1677
+ const activeGpu = data.gpus.find(g => g.active);
1678
+ if (activeGpu) document.getElementById('gpu-name').innerText = activeGpu.name;
1679
+ }
1680
+ } catch (e) {
1681
+ console.error("Failed to list GPUs", e);
1682
+ }
1683
+ }
1684
+
1685
+ async function switchGpu(id) {
1686
+ if (!id) return;
1687
+ addLog(`🔄 正在切换到 GPU ${id}...`);
1688
+ try {
1689
+ const res = await fetch(`${BASE}/api/system/switch-gpu`, {
1690
+ method: 'POST',
1691
+ headers: { 'Content-Type': 'application/json' },
1692
+ body: JSON.stringify({ gpu_id: parseInt(id) })
1693
+ });
1694
+ const data = await res.json();
1695
+ if (res.ok) {
1696
+ addLog(`✅ 已成功切换到 GPU ${id},模型将重新加载。`);
1697
+ listGpus(); // Re-fetch the list to sync state
1698
+ setTimeout(checkStatus, 1000);
1699
+ } else {
1700
+ throw new Error(data.error || "切换失败");
1701
+ }
1702
+ } catch (e) {
1703
+ addLog(`❌ GPU 切换失败: ${e.message}`);
1704
+ }
1705
+ }
1706
+
1707
+ function startProgressPolling() {
1708
+ if (pollInterval) clearInterval(pollInterval);
1709
+ pollInterval = setInterval(async () => {
1710
+ try {
1711
+ const res = await fetch(`${BASE}/api/generation/progress`);
1712
+ const d = await res.json();
1713
+ if (d.progress > 0) {
1714
+ const ph = String(d.phase || 'inference');
1715
+ const phaseKey = 'phase_' + ph;
1716
+ let phaseStr = _t(phaseKey);
1717
+ if (phaseStr === phaseKey) phaseStr = ph;
1718
+
1719
+ let stepLabel;
1720
+ if (d.current_step !== undefined && d.current_step !== null && d.total_steps) {
1721
+ stepLabel = `${d.current_step}/${d.total_steps} ${_t('progressStepUnit')}`;
1722
+ } else {
1723
+ stepLabel = `${d.progress}%`;
1724
+ }
1725
+
1726
+ document.getElementById('progress-fill').style.width = d.progress + "%";
1727
+ const loaderStep = document.getElementById('loader-step-text');
1728
+ const busyLine = `${_t('gpuBusyPrefix')}: ${stepLabel} [${phaseStr}]`;
1729
+ if (loaderStep) loaderStep.innerText = busyLine;
1730
+ else {
1731
+ const loadingTxt = document.getElementById('loading-txt');
1732
+ if (loadingTxt) loadingTxt.innerText = busyLine;
1733
+ }
1734
+
1735
+ // Also update the progress text on the history thumbnail card
1736
+ const cardStep = document.getElementById('loading-card-step');
1737
+ if (cardStep) cardStep.innerText = stepLabel;
1738
+ }
1739
+ } catch(e) {}
1740
+ }, 1000);
1741
+ }
1742
+
1743
+ function stopProgressPolling() {
1744
+ clearInterval(pollInterval);
1745
+ pollInterval = null;
1746
+ document.getElementById('progress-fill').style.width = "0%";
1747
+ // Remove the in-progress card (generation has finished)
1748
+ const lc = document.getElementById('current-loading-card');
1749
+ if (lc) lc.remove();
1750
+ }
1751
+
1752
+ function displayOutput(fileOrPath) {
1753
+ const img = document.getElementById('res-img');
1754
+ const vid = document.getElementById('res-video');
1755
+ const loader = document.getElementById('loading-txt');
1756
+
1757
+ // Critical bug fix: stop and clear the current video/audio before switching, so playback does not keep running in the background
1758
+ if(player) {
1759
+ player.stop();
1760
+ } else {
1761
+ vid.pause();
1762
+ vid.removeAttribute('src');
1763
+ vid.load();
1764
+ }
1765
+
1766
+ let url = "";
1767
+ let fileName = fileOrPath;
1768
+ if (fileOrPath.indexOf('\\') !== -1 || fileOrPath.indexOf('/') !== -1) {
1769
+ url = `${BASE}/api/system/file?path=${encodeURIComponent(fileOrPath)}&t=${Date.now()}`;
1770
+ fileName = fileOrPath.split(/[\\/]/).pop();
1771
+ } else {
1772
+ const outInput = document.getElementById('global-out-dir');
1773
+ const globalDir = outInput ? outInput.value.replace(/\\/g, '/').replace(/\/$/, '') : "";
1774
+ if (globalDir && globalDir !== "") {
1775
+ url = `${BASE}/api/system/file?path=${encodeURIComponent(globalDir + '/' + fileOrPath)}&t=${Date.now()}`;
1776
+ } else {
1777
+ url = `${BASE}/outputs/${fileOrPath}?t=${Date.now()}`;
1778
+ }
1779
+ }
1780
+
1781
+ loader.style.display = "none";
1782
+ if (currentMode === 'image') {
1783
+ img.src = url;
1784
+ img.style.display = "block";
1785
+ addLog(`✅ 图像渲染成功: ${fileName}`);
1786
+ } else {
1787
+ document.getElementById('video-wrapper').style.display = "flex";
1788
+
1789
+ if(player) {
1790
+ player.source = {
1791
+ type: 'video',
1792
+ sources: [{ src: url, type: 'video/mp4' }]
1793
+ };
1794
+ player.play();
1795
+ } else {
1796
+ vid.src = url;
1797
+ }
1798
+ addLog(`✅ 视频渲染成功: ${fileName}`);
1799
+ }
1800
+ }
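displayOutput() above chooses between the raw-path file endpoint and the outputs route when building the media URL. The branching can be factored into a pure helper for clarity; this is a sketch using the same `BASE` routes as the code above, with the cache-busting timestamp omitted and the function name ours:

```javascript
// Resolve a media URL the way displayOutput() does: absolute paths go
// through the file endpoint; bare filenames go through the configured
// output directory, or the /outputs route when no directory is set.
function resolveMediaUrl(base, fileOrPath, globalDir = '') {
  const isPath = fileOrPath.includes('\\') || fileOrPath.includes('/');
  if (isPath) {
    return `${base}/api/system/file?path=${encodeURIComponent(fileOrPath)}`;
  }
  const dir = globalDir.replace(/\\/g, '/').replace(/\/$/, '');
  return dir
    ? `${base}/api/system/file?path=${encodeURIComponent(dir + '/' + fileOrPath)}`
    : `${base}/outputs/${fileOrPath}`;
}

console.log(resolveMediaUrl('http://x', 'a.mp4'));          // outputs route
console.log(resolveMediaUrl('http://x', 'C:\\out\\a.mp4')); // file endpoint
```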
1801
+
1802
+
1803
+
1804
+ function addLog(msg) {
1805
+ const log = document.getElementById('log');
1806
+ if (!log) {
1807
+ console.log('[LTX]', msg);
1808
+ return;
1809
+ }
1810
+ const time = new Date().toLocaleTimeString();
1811
+ log.innerHTML += `<div style="margin-bottom:5px"> <span style="color:var(--text-dim)">[${time}]</span> ${msg}</div>`;
1812
+ log.scrollTop = log.scrollHeight;
1813
+ }
1814
+
1815
+
1816
+ // Force switch to video mode on load
1817
+ window.addEventListener('DOMContentLoaded', () => switchMode('video'));
1818
+
1819
+
1820
+
1821
+
1822
+
1823
+
1824
+
1825
+
1826
+
1827
+
1828
+
1829
+
1830
+ let currentHistoryPage = 1;
1831
+ let isLoadingHistory = false;
1832
+ /** When identical to the last successful render, silent polling skips the full innerHTML rebuild, avoiding periodic thumbnail reloads */
1833
+ let _historyListFingerprint = '';
1834
+
1835
+ function switchLibTab(tab) {
1836
+ document.getElementById('log-container').style.display = tab === 'log' ? 'flex' : 'none';
1837
+ const hw = document.getElementById('history-wrapper');
1838
+ if (hw) hw.style.display = tab === 'history' ? 'block' : 'none';
1839
+
1840
+ document.getElementById('tab-log').style.color = tab === 'log' ? 'var(--accent)' : 'var(--text-dim)';
1841
+ document.getElementById('tab-log').style.borderColor = tab === 'log' ? 'var(--accent)' : 'transparent';
1842
+
1843
+ document.getElementById('tab-history').style.color = tab === 'history' ? 'var(--accent)' : 'var(--text-dim)';
1844
+ document.getElementById('tab-history').style.borderColor = tab === 'history' ? 'var(--accent)' : 'transparent';
1845
+
1846
+ if (tab === 'history') {
1847
+ fetchHistory();
1848
+ }
1849
+ }
1850
+
1851
+ async function fetchHistory(isFirstLoad = false, silent = false) {
1852
+ if (isLoadingHistory) return;
1853
+ isLoadingHistory = true;
1854
+
1855
+ try {
1856
+ // Load all history, without pagination
1857
+ const res = await fetch(`${BASE}/api/system/history?page=1&limit=10000`);
1858
+ if (!res.ok) {
1859
+ isLoadingHistory = false;
1860
+ return;
1861
+ }
1862
+ const data = await res.json();
1863
+
1864
+ const validHistory = (data.history || []).filter(item => item && item.filename);
1865
+ const fingerprint = validHistory.length === 0
1866
+ ? '__empty__'
1867
+ : validHistory.map(h => `${h.type}|${h.filename}`).join('\0');
1868
+
1869
+ if (silent && fingerprint === _historyListFingerprint) {
1870
+ return;
1871
+ }
1872
+
1873
+ const container = document.getElementById('history-container');
1874
+ if (!container) {
1875
+ return;
1876
+ }
1877
+
1878
+ let loadingCardHtml = "";
1879
+ const lc = document.getElementById('current-loading-card');
1880
+ if (lc && _isGeneratingFlag) {
1881
+ loadingCardHtml = lc.outerHTML;
1882
+ }
1883
+
1884
+ if (validHistory.length === 0) {
1885
+ container.innerHTML = loadingCardHtml;
1886
+ const newLcEmpty = document.getElementById('current-loading-card');
1887
+ if (newLcEmpty) newLcEmpty.onclick = showGeneratingView;
1888
+ _historyListFingerprint = fingerprint;
1889
+ return;
1890
+ }
1891
+
1892
+ container.innerHTML = loadingCardHtml;
1893
+
1894
+ const outInput = document.getElementById('global-out-dir');
1895
+ const globalDir = outInput ? outInput.value.replace(/\\/g, '/').replace(/\/$/, '') : "";
1896
+
1897
+ const cardsHtml = validHistory.map((item, index) => {
1898
+ const url = (globalDir && globalDir !== "")
1899
+ ? `${BASE}/api/system/file?path=${encodeURIComponent(globalDir + '/' + item.filename)}`
1900
+ : `${BASE}/outputs/${item.filename}`;
1901
+
1902
+ const safeFilename = item.filename.replace(/'/g, "\\'").replace(/"/g, '\\"');
1903
+ const media = item.type === 'video'
1904
+ ? `<video data-src="${url}#t=0.001" class="lazy-load history-thumb-media" muted loop preload="none" playsinline onmouseover="if(this.readyState >= 2) this.play()" onmouseout="this.pause()" style="pointer-events: none; object-fit: cover; width: 100%; height: 100%;"></video>`
1905
+ : `<img data-src="${url}" class="lazy-load history-thumb-media" alt="" style="object-fit: cover; width: 100%; height: 100%;">`;
1906
+ return `<div class="history-card" onclick="displayHistoryOutput('${safeFilename}', '${item.type}')">
1907
+ <div class="history-type-badge">${item.type === 'video' ? '🎬 VID' : '🎨 IMG'}</div>
1908
+ <button class="history-delete-btn" onclick="event.stopPropagation(); deleteHistoryItem('${safeFilename}', '${item.type}', this)">✕</button>
1909
+ ${media}
1910
+ </div>`;
1911
+ }).join('');
1912
+
1913
+ container.insertAdjacentHTML('beforeend', cardsHtml);
1914
+
1915
+ // Re-bind the loading card's click handler
1916
+ const newLc = document.getElementById('current-loading-card');
1917
+ if (newLc) newLc.onclick = showGeneratingView;
1918
+
1919
+ // Load the visible images
1920
+ loadVisibleImages();
1921
+ _historyListFingerprint = fingerprint;
1922
+ } catch(e) {
1923
+ console.error("Failed to load history", e);
1924
+ } finally {
1925
+ isLoadingHistory = false;
1926
+ }
1927
+ }
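fetchHistory() above skips the expensive full innerHTML rebuild whenever a silent poll returns the same list, comparing a fingerprint built from each item's type and filename. The fingerprint computation as a standalone sketch (the function name is ours):

```javascript
// Compute a cheap identity string for a history list so silent polls
// can skip re-rendering when nothing changed (mirrors fetchHistory()).
function historyFingerprint(items) {
  const valid = (items || []).filter((it) => it && it.filename);
  return valid.length === 0
    ? '__empty__'
    : valid.map((h) => `${h.type}|${h.filename}`).join('\0');
}

console.log(historyFingerprint([{ type: 'video', filename: 'v1.mp4' }]));
```

Joining with `'\0'` keeps the fingerprint unambiguous even if a filename contains `|`.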
1928
+
1929
+ async function deleteHistoryItem(filename, type, btn) {
1930
+ if (!confirm(`确定要删除 "${filename}" 吗?`)) return;
1931
+
1932
+ try {
1933
+ const res = await fetch(`${BASE}/api/system/delete-file`, {
1934
+ method: 'POST',
1935
+ headers: {'Content-Type': 'application/json'},
1936
+ body: JSON.stringify({filename: filename, type: type})
1937
+ });
1938
+
1939
+ if (res.ok) {
1940
+ // Remove the element after successful deletion
1941
+ const card = btn.closest('.history-card');
1942
+ if (card) {
1943
+ card.remove();
1944
+ }
1945
+ } else {
1946
+ alert('删除失败');
1947
+ }
1948
+ } catch(e) {
1949
+ console.error('Delete failed', e);
1950
+ alert('删除失败');
1951
+ }
1952
+ }
1953
+
1954
+ function loadVisibleImages() {
1955
+ const hw = document.getElementById('history-wrapper');
1956
+ if (!hw) return;
1957
+
1958
+ const lazyMedias = document.querySelectorAll('#history-container .lazy-load');
1959
+
1960
+ // Load at most 3 media elements (images or videos) per pass
1961
+ let loadedCount = 0;
1962
+ lazyMedias.forEach(media => {
1963
+ if (loadedCount >= 3) return;
1964
+
1965
+ const src = media.dataset.src;
1966
+ if (!src) return;
1967
+
1968
+ // Check whether it is near the visible area
1969
+ const rect = media.getBoundingClientRect();
1970
+ const containerRect = hw.getBoundingClientRect();
1971
+
1972
+ if (rect.top < containerRect.bottom + 300 && rect.bottom > containerRect.top - 100) {
1973
+ let revealed = false;
1974
+ let thumbRevealTimer;
1975
+ const revealThumb = () => {
1976
+ if (revealed) return;
1977
+ revealed = true;
1978
+ if (thumbRevealTimer) clearTimeout(thumbRevealTimer);
1979
+ media.classList.add('history-thumb-ready');
1980
+ };
1981
+ thumbRevealTimer = setTimeout(revealThumb, 4000);
1982
+
1983
+ if (media.tagName === 'VIDEO') {
1984
+ media.addEventListener('loadeddata', revealThumb, { once: true });
1985
+ media.addEventListener('error', revealThumb, { once: true });
1986
+ } else {
1987
+ media.addEventListener('load', revealThumb, { once: true });
1988
+ media.addEventListener('error', revealThumb, { once: true });
1989
+ }
1990
+
1991
+ media.src = src;
1992
+ media.classList.remove('lazy-load');
1993
+
1994
+ if (media.tagName === 'VIDEO') {
1995
+ media.preload = 'metadata';
1996
+ if (media.readyState >= 2) revealThumb();
1997
+ } else if (media.complete && media.naturalWidth > 0) {
1998
+ revealThumb();
1999
+ }
2000
+
2001
+ loadedCount++;
2002
+ }
2003
+ });
2004
+
2005
+ // Keep checking until no more media needs to load
2006
+ if (loadedCount > 0) {
2007
+ setTimeout(loadVisibleImages, 100);
2008
+ }
2009
+ }
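loadVisibleImages() above loads only thumbnails whose rects fall within the scroll container's prefetch margins (300px ahead, 100px behind). The proximity test can be isolated as a pure function of the two rects; a sketch, with the function name ours and the margins taken from the code above:

```javascript
// True when a thumbnail rect lies within the prefetch margins of the
// scroll container (300px ahead, 100px behind), as in loadVisibleImages().
function isNearViewport(rect, containerRect, ahead = 300, behind = 100) {
  return rect.top < containerRect.bottom + ahead &&
         rect.bottom > containerRect.top - behind;
}

const demoContainer = { top: 0, bottom: 600 };
console.log(isNearViewport({ top: 850, bottom: 1050 }, demoContainer)); // true: within the 300px lookahead
```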
2010
+
2011
+ // Listen to history-wrapper scroll events for lazy loading
2012
+ function initHistoryScrollListener() {
2013
+ const hw = document.getElementById('history-wrapper');
2014
+ if (!hw) return;
2015
+
2016
+ let scrollTimeout;
2017
+ hw.addEventListener('scroll', () => {
2018
+ if (scrollTimeout) clearTimeout(scrollTimeout);
2019
+ scrollTimeout = setTimeout(() => {
2020
+ loadVisibleImages();
2021
+ }, 100);
2022
+ });
2023
+ }
2024
+
2025
+ // Initialize the scroll listener on page load
2026
+ window.addEventListener('DOMContentLoaded', () => {
2027
+ setTimeout(initHistoryScrollListener, 500);
2028
+ });
2029
+
2030
+ function displayHistoryOutput(file, type) {
2031
+ document.getElementById('res-img').style.display = 'none';
2032
+ document.getElementById('video-wrapper').style.display = 'none';
2033
+
2034
+ const mode = type === 'video' ? 'video' : 'image';
2035
+ switchMode(mode);
2036
+ displayOutput(file);
2037
+ }
2038
+
2039
+ window.addEventListener('DOMContentLoaded', () => {
2040
+ // Initialize Plyr Custom Video Component
2041
+ if(window.Plyr) {
2042
+ player = new Plyr('#res-video', {
2043
+ controls: [
2044
+ 'play-large', 'play', 'progress', 'current-time',
2045
+ 'mute', 'volume', 'fullscreen'
2046
+ ],
2047
+ settings: [],
2048
+ loop: { active: true },
2049
+ autoplay: true
2050
+ });
2051
+ }
2052
+
2053
+ // Fetch current directory context to show in UI
2054
+ fetch(`${BASE}/api/system/get-dir`)
2055
+ .then((res) => res.json())
2056
+ .then((data) => {
2057
+ if (data && data.directory) {
2058
+ const outInput = document.getElementById('global-out-dir');
2059
+ if (outInput) outInput.value = data.directory;
2060
+ }
2061
+ })
2062
+ .catch((e) => console.error(e))
2063
+ .finally(() => {
2064
+ /* Sync the output directory before fetching history, so two quick back-to-back fetchHistory rebuilds don't make the thumbnails flash twice */
2065
+ switchLibTab('history');
2066
+ });
2067
+
2068
+ // Load LoRA dir from settings
2069
+ loadLoraDir();
2070
+
2071
+ let historyRefreshInterval = null;
2072
+ function startHistoryAutoRefresh() {
2073
+ if (historyRefreshInterval) return;
2074
+ historyRefreshInterval = setInterval(() => {
2075
+ const hc = document.getElementById('history-container');
2076
+ if (hc && hc.offsetParent !== null && !_isGeneratingFlag) {
2077
+ fetchHistory(1, true);
2078
+ }
2079
+ }, 5000);
2080
+ }
2081
+ startHistoryAutoRefresh();
2082
+ });
2083
+
2084
+
2085
+ async function saveVramLimit() {
2086
+ const lim = document.getElementById("vram-limit-input")?.value;
+ const status = document.getElementById("vram-limit-status");
+ if (!status) return; // settings panel may not be mounted
2088
+ status.textContent = "保存中...";
2089
+ try {
2090
+ const res = await fetch(`${BASE}/api/vram-limit`, {
2091
+ method: "POST", headers: { "Content-Type": "application/json" },
2092
+ body: JSON.stringify({ vramLimit: lim })
2093
+ });
2094
+ const d = await res.json();
2095
+ if (d.status === 'ok') {
2096
+ status.textContent = "保存成功";
2097
+ status.style.color = '#4caf50';
2098
+ } else throw new Error(d.message || "Unknown error");
2099
+ } catch (e) {
2100
+ status.textContent = e.message;
2101
+ status.style.color = '#f44336';
2102
+ }
2103
+ }
2104
+ async function fetchVramLimit() {
2105
+ try {
2106
+ const res = await fetch(`${BASE}/api/vram-limit`);
2107
+ const d = await res.json();
2108
+ if (d.vramLimit !== undefined && d.vramLimit !== null) {
2109
+ document.getElementById("vram-limit-input").value = d.vramLimit;
2110
+ }
2111
+ } catch (e) {}
2112
+ }
2113
+ try { fetchVramLimit(); } catch(e) {}
2114
+
LTX2.3-1.0.4/main.py ADDED
@@ -0,0 +1,264 @@
1
+ import os
2
+ import sys
3
+ import subprocess
4
+ import threading
5
+ import time
6
+ import socket
7
+ import logging
8
+ from fastapi import FastAPI
9
+ from fastapi.responses import FileResponse
10
+ from fastapi.staticfiles import StaticFiles
11
+ import uvicorn
12
+
13
+ # ============================================================
14
+ # Configuration (dynamic path resolution and patch mounting)
15
+ # ============================================================
16
+ def resolve_ltx_path():
17
+ import glob, tempfile, subprocess
18
+ sc_dir = os.path.join(os.getcwd(), "LTX_Shortcut")
19
+ os.makedirs(sc_dir, exist_ok=True)
20
+ lnk_files = glob.glob(os.path.join(sc_dir, "*.lnk"))
21
+ if not lnk_files:
22
+ print("\033[91m[ERROR] 未在 LTX_Shortcut 文件夹中找到快捷方式!\n请打开程序目录下的 LTX_Shortcut 文件夹,并将官方 LTX Desktop 的快捷方式复制进去后重试。\033[0m")
23
+ sys.exit(1)
24
+
25
+ lnk_path = lnk_files[0]
26
+ # Resolve the shortcut via VBScript, compatible with all Windows versions
27
+ vbs_code = f'''Set sh = CreateObject("WScript.Shell")\nSet obj = sh.CreateShortcut("{os.path.abspath(lnk_path)}")\nWScript.Echo obj.TargetPath'''
28
+ fd, vbs_path = tempfile.mkstemp(suffix='.vbs')
29
+ with os.fdopen(fd, 'w') as f:
30
+ f.write(vbs_code)
31
+ try:
32
+ out = subprocess.check_output(['cscript', '//nologo', vbs_path], stderr=subprocess.STDOUT)
33
+ target_exe = out.decode('ansi').strip()
34
+ finally:
35
+ os.remove(vbs_path)
36
+
37
+ if not target_exe or not os.path.exists(target_exe):
38
+ # 如果快捷方式解析失败,或者解析出来的是朋友电脑的路径(当前电脑不存在),自动全盘搜索默认路径
39
+ default_paths = [
40
+ os.path.join(os.environ.get("LOCALAPPDATA", ""), r"Programs\LTX Desktop\LTX Desktop.exe"),
41
+ r"C:\Program Files\LTX Desktop\LTX Desktop.exe",
42
+ r"D:\Program Files\LTX Desktop\LTX Desktop.exe",
43
+ r"E:\Program Files\LTX Desktop\LTX Desktop.exe"
44
+ ]
45
+ found = False
46
+ for p in default_paths:
47
+ if os.path.exists(p):
48
+ target_exe = p
49
+ print(f"\033[96m[INFO] 自动检测到 LTX 原版安装路径: {p}\033[0m")
50
+ found = True
51
+ break
52
+
53
+ if not found:
54
+ print(f"\033[91m[ERROR] 未能找到原版 LTX Desktop 的安装路径!\033[0m")
55
+ print("请清理 LTX_Shortcut 文件夹,并将您当前电脑上真正的原版快捷方式重贴复制进去。")
56
+ sys.exit(1)
57
+
58
+ return os.path.dirname(target_exe)
59
+
60
+ USER_PROFILE = os.path.expanduser("~")
61
+ PYTHON_EXE = os.path.join(USER_PROFILE, r"AppData\Local\LTXDesktop\python\python.exe")
62
+ DATA_DIR = os.path.join(USER_PROFILE, r"AppData\Local\LTXDesktop")
63
+
64
+ # 1. 动态获取主安装路径
65
+ LTX_INSTALL_DIR = resolve_ltx_path()
66
+ BACKEND_DIR = os.path.join(LTX_INSTALL_DIR, r"resources\backend")
67
+ UI_FILE_NAME = "UI/index.html"
68
+
69
+ # 环境致命检测:如果官方 Python 还没解压释放,立刻强制中断整个程序
70
+ if not os.path.exists(PYTHON_EXE):
71
+ print(f"\n\033[1;41m [致命错误] 您的电脑上尚未配置好 LTX 的官方渲染核心框架! \033[0m")
72
+ print(f"\033[93m此应用仅是 UI 图形控制台,必需依赖原版软件环境才能生成。在 ({PYTHON_EXE}) 未找到运行引擎。\n")
73
+ print(">> 解决方案:\n1. 请先在您的电脑上正常安装【LTX Desktop 官方原版软件】。")
74
+ print("2. 必需:双击打开运行一次原版软件!(运行后原版软件会在后台自动释放环境)")
75
+ print("3. 把原版软件的快捷方式复制到本文档的 LTX_Shortcut 文件夹里面。")
76
+ print("4. 全部完成后,再重新启动本 run.bat 脚本即可!\033[0m\n")
77
+ os._exit(1)
78
+
79
+ # 2. 从目录读取改动过的 Python 文件 (热修复拦截器)
80
+ PATCHES_DIR = os.path.join(os.getcwd(), "patches")
81
+ os.makedirs(PATCHES_DIR, exist_ok=True)
82
+
83
+ # 3. 默认输出定向至程序根目录
84
+ LOCAL_OUTPUTS = os.path.join(os.getcwd(), "outputs")
85
+ os.makedirs(LOCAL_OUTPUTS, exist_ok=True)
86
+
87
+ # 强制注入自定义输出录至 LTX 缓存数据中
88
+ os.makedirs(DATA_DIR, exist_ok=True)
89
+ with open(os.path.join(DATA_DIR, "custom_dir.txt"), 'w', encoding='utf-8') as f:
90
+ f.write(LOCAL_OUTPUTS)
91
+
92
+ os.environ["LTX_APP_DATA_DIR"] = DATA_DIR
93
+
94
+ # 将 patches 目录优先级提升,做到 Python 无损替换
95
+ os.environ["PYTHONPATH"] = f"{PATCHES_DIR};{BACKEND_DIR}"
96
+
97
+ def get_lan_ip():
98
+ try:
99
+ host_name = socket.gethostname()
100
+ _, _, ip_list = socket.gethostbyname_ex(host_name)
101
+
102
+ candidates = []
103
+ for ip in ip_list:
104
+ if ip.startswith("192.168."):
105
+ return ip
106
+ elif ip.startswith("10.") or (ip.startswith("172.") and 16 <= int(ip.split('.')[1]) <= 31):
107
+ candidates.append(ip)
108
+
109
+ if candidates:
110
+ return candidates[0]
111
+
112
+ # Fallback to the default socket routing approach if no obvious LAN IP found
113
+ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
114
+ s.connect(("8.8.8.8", 80))
115
+ ip = s.getsockname()[0]
116
+ s.close()
117
+ return ip
118
+ except:
119
+ return "127.0.0.1"
120
+
121
+ LAN_IP = get_lan_ip()
122
+
123
+ # ============================================================
124
+ # 服务启动逻辑
125
+ # ============================================================
126
+ def check_port_in_use(port):
127
+ with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
128
+ return s.connect_ex(('127.0.0.1', port)) == 0
129
+
130
+ def launch_backend():
131
+ """启动核心引擎 - 监听 0.0.0.0 确保局域网可调"""
132
+ if check_port_in_use(3000):
133
+ print(f"\n\033[1;41m [致命错误] 3000 端口已被占用,无法启动核心引擎! \033[0m")
134
+ print("\033[93m>> 绝大多数情况下,这是因为【官方原版 LTX Desktop】正在您的电脑后台运行。\033[0m")
135
+ print(">> 冲突会导致显存爆炸。请检查右下角系统托盘图标,右键完全退出官方软件。")
136
+ print(">> 退出后重新双击 run.bat 启动本程序!\n")
137
+ os._exit(1)
138
+
139
+ print(f"\033[96m[CORE] 核心引擎正在启动...\033[0m")
140
+ # 只开启重要级别的 Python 应用层日志,去除无用的 HTTP 刷屏
141
+ import logging as _logging
142
+ _logging.basicConfig(
143
+ level=_logging.INFO,
144
+ format="[%(asctime)s] %(levelname)s %(name)s: %(message)s",
145
+ datefmt="%H:%M:%S",
146
+ force=True
147
+ )
148
+
149
+ # 构建绝对无损的环境拦截器:防止其他电脑被 cwd 劫持加载原版文件
150
+ launcher_code = f"""
151
+ import sys
152
+ import os
153
+
154
+ patch_dir = r"{PATCHES_DIR}"
155
+ backend_dir = r"{BACKEND_DIR}"
156
+
157
+ # 防御性清除:强行剥离所有的默认 backend_dir 引用
158
+ sys.path = [p for p in sys.path if p and os.path.normpath(p) != os.path.normpath(backend_dir)]
159
+ sys.path = [p for p in sys.path if p and p != "." and p != ""]
160
+
161
+ # 绝对插队注入:优先搜索 PATCHES_DIR
162
+ sys.path.insert(0, patch_dir)
163
+ sys.path.insert(1, backend_dir)
164
+
165
+ import uvicorn
166
+ from ltx2_server import app
167
+
168
+ if __name__ == '__main__':
169
+ uvicorn.run(app, host="0.0.0.0", port=3000, log_level="info", access_log=False)
170
+ """
171
+ launcher_path = os.path.join(PATCHES_DIR, "launcher.py")
172
+ with open(launcher_path, "w", encoding="utf-8") as f:
173
+ f.write(launcher_code)
174
+
175
+ cmd = [PYTHON_EXE, launcher_path]
176
+ env = os.environ.copy()
177
+ result = subprocess.run(cmd, cwd=BACKEND_DIR, env=env)
178
+ if result.returncode != 0:
179
+ print(f"\n\033[1;41m [致命错误] 核心引擎异常崩溃退出! (Exit Code: {result.returncode})\033[0m")
180
+ print(">> 请检查上述终端报错信息。确认显卡驱动是否正常。")
181
+ os._exit(1)
182
+
183
+ ui_app = FastAPI()
184
+ # 已移除存在安全隐患的静态资源挂载目录
185
+
186
+ @ui_app.get("/")
187
+ async def serve_index():
188
+ return FileResponse(os.path.join(os.getcwd(), UI_FILE_NAME))
189
+
190
+ @ui_app.get("/index.css")
191
+ async def serve_css():
192
+ return FileResponse(os.path.join(os.getcwd(), "UI/index.css"))
193
+
194
+ @ui_app.get("/index.js")
195
+ async def serve_js():
196
+ return FileResponse(os.path.join(os.getcwd(), "UI/index.js"))
197
+
198
+
199
+ @ui_app.get("/i18n.js")
200
+ async def serve_i18n():
201
+ return FileResponse(os.path.join(os.getcwd(), "UI/i18n.js"))
202
+
203
+
204
+ def launch_ui_server():
205
+ print(f"\033[92m[UI] 工作站已就绪!\033[0m")
206
+ print(f"\033[92m[LOCAL] 本机访问: http://127.0.0.1:4000\033[0m")
207
+ print(f"\033[93m[WIFI] 局域网访问: http://{LAN_IP}:4000\033[0m")
208
+
209
+ # 彻底压制 WinError 10054 (客户端强制断开) 的底层警告报错
210
+ if sys.platform == 'win32':
211
+ # Uvicorn 内部会拉起循环,所以只能通过底层 Logging Filter 拦截控制台噪音
212
+ class UvicornAsyncioNoiseFilter(logging.Filter):
213
+ """压掉客户端断开、Win Proactor 管道收尾等无害 asyncio 控制台刷屏。"""
214
+
215
+ def filter(self, record):
216
+ if record.name != "asyncio":
217
+ return True
218
+ msg = record.getMessage()
219
+ if "_call_connection_lost" in msg or "_ProactorBasePipeTransport" in msg:
220
+ return False
221
+ if hasattr(record, "exc_info") and record.exc_info:
222
+ exc_type, exc_value, _ = record.exc_info
223
+ if isinstance(exc_value, ConnectionResetError) and getattr(
224
+ exc_value, "winerror", None
225
+ ) == 10054:
226
+ return False
227
+ if "10054" in msg and "ConnectionResetError" in msg:
228
+ return False
229
+ return True
230
+
231
+ logging.getLogger("asyncio").addFilter(UvicornAsyncioNoiseFilter())
232
+
233
+ uvicorn.run(ui_app, host="0.0.0.0", port=4000, log_level="warning", access_log=False)
234
+
235
+ if __name__ == "__main__":
236
+ os.system('cls' if os.name == 'nt' else 'clear')
237
+ print("\033[1;97;44m LTX-2 CINEMATIC WORKSTATION | NETWORK ENABLED \033[0m\n")
238
+
239
+ threading.Thread(target=launch_backend, daemon=True).start()
240
+
241
+ # 强制校验 3000 端口是否存活
242
+ print("\033[93m[SYS] 正在等待内部核心 3000 端口启动...\033[0m")
243
+ backend_ready = False
244
+ for _ in range(30):
245
+ try:
246
+ with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
247
+ if s.connect_ex(('127.0.0.1', 3000)) == 0:
248
+ backend_ready = True
249
+ break
250
+ except Exception:
251
+ pass
252
+ time.sleep(1)
253
+
254
+ if backend_ready:
255
+ print("\033[92m[SYS] 3000 端口已通过连通性握手验证!后端装载成功。\033[0m")
256
+ else:
257
+ print("\033[1;41m [崩坏警告] 等待 30 秒后,3000 端口依然无法连通! \033[0m")
258
+ print(">> Uvicorn 可能在后台陷入了死锁,或者被防火墙拦截,前端大概率将无法连接到后端!")
259
+ print(">> 请检查上方是否有 Python 报错。\n")
260
+
261
+ try:
262
+ launch_ui_server()
263
+ except KeyboardInterrupt:
264
+ sys.exit(0)
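The sys.path priority trick that `main.py` and its generated `launcher.py` rely on can be illustrated with a small stand-alone sketch; the temp directories and the `demo_policy` module below stand in for the real `patches` and `resources\backend` folders and are not part of the upload:

```python
# Toy illustration of the patch-priority mechanism: a module in the patches
# directory shadows the same-named backend module because its directory is
# searched earlier on sys.path.
import sys
import tempfile
from pathlib import Path

patches = Path(tempfile.mkdtemp())
backend = Path(tempfile.mkdtemp())
(backend / "demo_policy.py").write_text("THRESHOLD = 31\n")
(patches / "demo_policy.py").write_text("THRESHOLD = 6\n")

sys.path.insert(0, str(backend))
sys.path.insert(0, str(patches))  # inserted last, so searched first

import demo_policy

print(demo_policy.THRESHOLD)  # 6 -- the patched copy wins
```

This is why the launcher strips the default `backend_dir` entries before re-inserting them: only the ordering of `sys.path` decides which copy of a module gets imported.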
LTX2.3-1.0.4/patches/API模式问题修复说明.md ADDED
@@ -0,0 +1,41 @@
+ # LTX Local GPU Mode Fix
+
+ ## Problem
+ The system forces image generation through the FAL API even when a local GPU is available.
+
+ ## Cause
+ LTX only uses the local GPU when it has at least 31 GB of VRAM; below that it forces API mode.
+
+ ## Fix
+
+ ### Option 1: Automatic replacement (recommended)
+ When the program runs, the files in the patches directory automatically replace the original files.
+
+ ### Option 2: Manual replacement
+
+ #### 1. Lower the VRAM threshold
+ - **File**: `C:\Program Files\LTX Desktop\resources\backend\runtime_config\runtime_policy.py`
+ - **Find** (line 16):
+ ```python
+ return vram_gb < 31
+ ```
+ - **Replace with**:
+ ```python
+ return vram_gb < 6
+ ```
+
+ #### 2. Clear the invalid API key
+ - **File**: `C:\Users\Administrator\AppData\Local\LTXDesktop\settings.json`
+ - **Find**:
+ ```json
+ "fal_api_key": "12123",
+ ```
+ - **Replace with**:
+ ```json
+ "fal_api_key": "",
+ ```
+
+ ## Notes
+ - Setting the VRAM threshold to 6 GB means any GPU with 6 GB or more of VRAM will be used locally
+ - Clearing fal_api_key keeps the system from assuming an API is configured
+ - Restart LTX Desktop after making the changes
LTX2.3-1.0.4/patches/api_types.py ADDED
@@ -0,0 +1,395 @@
+ """Pydantic request/response models and TypedDicts for ltx2_server."""
+
+ from __future__ import annotations
+
+ from typing import Literal, NamedTuple, TypeAlias, TypedDict
+ from typing import Annotated
+
+ from pydantic import BaseModel, Field, StringConstraints
+
+ NonEmptyPrompt = Annotated[str, StringConstraints(strip_whitespace=True, min_length=1)]
+ ModelFileType = Literal[
+     "checkpoint",
+     "upsampler",
+     "distilled_lora",
+     "ic_lora",
+     "depth_processor",
+     "person_detector",
+     "pose_processor",
+     "text_encoder",
+     "zit",
+ ]
+
+
+ class ImageConditioningInput(NamedTuple):
+     """Image conditioning triplet used by all video pipelines."""
+
+     path: str
+     frame_idx: int
+     strength: float
+
+
+ # ============================================================
+ # TypedDicts for module-level state globals
+ # ============================================================
+
+
+ class GenerationState(TypedDict):
+     id: str | None
+     cancelled: bool
+     result: str | list[str] | None
+     error: str | None
+     status: str  # "idle" | "running" | "complete" | "cancelled" | "error"
+     phase: str
+     progress: int
+     current_step: int
+     total_steps: int
+
+
+ JsonObject: TypeAlias = dict[str, object]
+ VideoCameraMotion = Literal[
+     "none",
+     "dolly_in",
+     "dolly_out",
+     "dolly_left",
+     "dolly_right",
+     "jib_up",
+     "jib_down",
+     "static",
+     "focus_shift",
+ ]
+
+ RetakeMode: TypeAlias = Literal[
+     "replace_audio_and_video", "replace_video", "replace_audio"
+ ]
+
+
+ # ============================================================
+ # Response Models
+ # ============================================================
+
+
+ class ModelStatusItem(BaseModel):
+     id: str
+     name: str
+     loaded: bool
+     downloaded: bool
+
+
+ class GpuTelemetry(BaseModel):
+     name: str
+     vram: int
+     vramUsed: int
+
+
+ class HealthResponse(BaseModel):
+     status: str
+     models_loaded: bool
+     active_model: str | None
+     gpu_info: GpuTelemetry
+     sage_attention: bool
+     models_status: list[ModelStatusItem]
+
+
+ class GpuInfoResponse(BaseModel):
+     cuda_available: bool
+     mps_available: bool = False
+     gpu_available: bool = False
+     gpu_name: str | None
+     vram_gb: int | None
+     gpu_info: GpuTelemetry
+
+
+ class RuntimePolicyResponse(BaseModel):
+     force_api_generations: bool
+
+
+ class GenerationProgressResponse(BaseModel):
+     status: str
+     phase: str
+     progress: int
+     currentStep: int | None
+     totalSteps: int | None
+
+
+ class ModelInfo(BaseModel):
+     id: str
+     name: str
+     description: str
+
+
+ class ModelFileStatus(BaseModel):
+     id: ModelFileType
+     name: str
+     description: str
+     downloaded: bool
+     size: int
+     expected_size: int
+     required: bool = True
+     is_folder: bool = False
+     optional_reason: str | None = None
+
+
+ class TextEncoderStatus(BaseModel):
+     downloaded: bool
+     size_bytes: int
+     size_gb: float
+     expected_size_gb: float
+
+
+ class ModelsStatusResponse(BaseModel):
+     models: list[ModelFileStatus]
+     all_downloaded: bool
+     total_size: int
+     downloaded_size: int
+     total_size_gb: float
+     downloaded_size_gb: float
+     models_path: str
+     has_api_key: bool
+     text_encoder_status: TextEncoderStatus
+     use_local_text_encoder: bool
+
+
+ class DownloadProgressRunningResponse(BaseModel):
+     status: Literal["downloading"]
+     current_downloading_file: ModelFileType | None
+     current_file_progress: float
+     total_progress: float
+     total_downloaded_bytes: int
+     expected_total_bytes: int
+     completed_files: set[ModelFileType]
+     all_files: set[ModelFileType]
+     error: None = None
+     speed_bytes_per_sec: float
+
+
+ class DownloadProgressCompleteResponse(BaseModel):
+     status: Literal["complete"]
+
+
+ class DownloadProgressErrorResponse(BaseModel):
+     status: Literal["error"]
+     error: str
+
+
+ DownloadProgressResponse: TypeAlias = (
+     DownloadProgressRunningResponse
+     | DownloadProgressCompleteResponse
+     | DownloadProgressErrorResponse
+ )
+
+
+ class SuggestGapPromptResponse(BaseModel):
+     status: str = "success"
+     suggested_prompt: str
+
+
+ class GenerateVideoCompleteResponse(BaseModel):
+     status: Literal["complete"]
+     video_path: str
+
+
+ class GenerateVideoCancelledResponse(BaseModel):
+     status: Literal["cancelled"]
+
+
+ GenerateVideoResponse: TypeAlias = (
+     GenerateVideoCompleteResponse | GenerateVideoCancelledResponse
+ )
+
+
+ class GenerateImageCompleteResponse(BaseModel):
+     status: Literal["complete"]
+     image_paths: list[str]
+
+
+ class GenerateImageCancelledResponse(BaseModel):
+     status: Literal["cancelled"]
+
+
+ GenerateImageResponse: TypeAlias = (
+     GenerateImageCompleteResponse | GenerateImageCancelledResponse
+ )
+
+
+ class CancelCancellingResponse(BaseModel):
+     status: Literal["cancelling"]
+     id: str
+
+
+ class CancelNoActiveGenerationResponse(BaseModel):
+     status: Literal["no_active_generation"]
+
+
+ CancelResponse: TypeAlias = CancelCancellingResponse | CancelNoActiveGenerationResponse
+
+
+ class RetakeVideoResponse(BaseModel):
+     status: Literal["complete"]
+     video_path: str
+
+
+ class RetakePayloadResponse(BaseModel):
+     status: Literal["complete"]
+     result: JsonObject
+
+
+ class RetakeCancelledResponse(BaseModel):
+     status: Literal["cancelled"]
+
+
+ RetakeResponse: TypeAlias = (
+     RetakeVideoResponse | RetakePayloadResponse | RetakeCancelledResponse
+ )
+
+
+ class IcLoraExtractResponse(BaseModel):
+     conditioning: str
+     original: str
+     conditioning_type: Literal["canny", "depth"]
+     frame_time: float
+
+
+ class IcLoraGenerateCompleteResponse(BaseModel):
+     status: Literal["complete"]
+     video_path: str
+
+
+ class IcLoraGenerateCancelledResponse(BaseModel):
+     status: Literal["cancelled"]
+
+
+ IcLoraGenerateResponse: TypeAlias = (
+     IcLoraGenerateCompleteResponse | IcLoraGenerateCancelledResponse
+ )
+
+
+ class ModelDownloadStartResponse(BaseModel):
+     status: Literal["started"]
+     message: str
+     sessionId: str
+
+
+ class TextEncoderDownloadStartedResponse(BaseModel):
+     status: Literal["started"]
+     message: str
+     sessionId: str
+
+
+ class TextEncoderAlreadyDownloadedResponse(BaseModel):
+     status: Literal["already_downloaded"]
+     message: str
+
+
+ TextEncoderDownloadResponse: TypeAlias = (
+     TextEncoderDownloadStartedResponse | TextEncoderAlreadyDownloadedResponse
+ )
+
+
+ class StatusResponse(BaseModel):
+     status: str
+
+
+ class ErrorResponse(BaseModel):
+     error: str
+     message: str | None = None
+
+
+ # ============================================================
+ # Request Models
+ # ============================================================
+
+
+ class GenerateVideoRequest(BaseModel):
+     prompt: NonEmptyPrompt
+     resolution: str = "512p"
+     model: str = "fast"
+     cameraMotion: VideoCameraMotion = "none"
+     negativePrompt: str = ""
+     duration: str = "2"
+     fps: str = "24"
+     audio: str = "false"
+     imagePath: str | None = None
+     audioPath: str | None = None
+     startFramePath: str | None = None
+     endFramePath: str | None = None
+     # Multiple images in one inference pass: several anchors on the latent timeline
+     # (following Comfy's LTXVAddGuideMulti approach); with >=2 paths this takes
+     # precedence over the start/end frames
+     keyframePaths: list[str] | None = None
+     # Same length as keyframePaths, each 0.1-1.0; if omitted, mid-frame strengths
+     # are lowered automatically (as in Comfy-style workflows) to reduce flicker
+     keyframeStrengths: list[float] | None = None
+     # Same length as keyframePaths, in seconds, within [0, total duration]; when all
+     # are provided, latents are mapped by time, otherwise spacing stays uniform
+     keyframeTimes: list[float] | None = None
+     aspectRatio: Literal["16:9", "9:16"] = "16:9"
+     modelPath: str | None = None
+     loraPath: str | None = None
+     loraStrength: float = 1.0
+     # Read as req.inferenceSteps by the patched video handler; the field was missing
+     # here, so the default of 8 is an assumption
+     inferenceSteps: int = 8
+
+
+ class GenerateImageRequest(BaseModel):
+     prompt: NonEmptyPrompt
+     width: int = 1024
+     height: int = 1024
+     numSteps: int = 4
+     numImages: int = 1
+
+
+ def _default_model_types() -> set[ModelFileType]:
+     return set()
+
+
+ class ModelDownloadRequest(BaseModel):
+     modelTypes: set[ModelFileType] = Field(default_factory=_default_model_types)
+
+
+ class RequiredModelsResponse(BaseModel):
+     modelTypes: list[ModelFileType]
+
+
+ class SuggestGapPromptRequest(BaseModel):
+     beforePrompt: str = ""
+     afterPrompt: str = ""
+     beforeFrame: str | None = None
+     afterFrame: str | None = None
+     gapDuration: float = 5
+     mode: str = "t2v"
+     inputImage: str | None = None
+
+
+ class RetakeRequest(BaseModel):
+     video_path: str
+     start_time: float = 0
+     duration: float = 0
+     prompt: str = ""
+     mode: str = "replace_video_only"
+     width: int | None = None
+     height: int | None = None
+
+
+ class IcLoraExtractRequest(BaseModel):
+     video_path: str
+     conditioning_type: Literal["canny", "depth"] = "canny"
+     frame_time: float = 0
+
+
+ class IcLoraImageInput(BaseModel):
+     path: str
+     frame: int = 0
+     strength: float = 1.0
+
+
+ def _default_ic_lora_images() -> list[IcLoraImageInput]:
+     return []
+
+
+ class IcLoraGenerateRequest(BaseModel):
+     video_path: str
+     conditioning_type: Literal["canny", "depth"]
+     prompt: NonEmptyPrompt
+     conditioning_strength: float = 1.0
+     num_inference_steps: int = 30
+     cfg_guidance_scale: float = 1.0
+     negative_prompt: str = ""
+     images: list[IcLoraImageInput] = Field(default_factory=_default_ic_lora_images)
+
+
+ ConditioningType: TypeAlias = Literal["canny", "depth"]
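The invariants documented on the keyframe fields of `GenerateVideoRequest` (lists of equal length, strengths in 0.1–1.0, times within the clip) can be sketched as a plain validator. `validate_keyframes` is a hypothetical helper for illustration only, not part of the patch:

```python
def validate_keyframes(paths, strengths=None, times=None, duration=None):
    """Check the multi-keyframe invariants described on GenerateVideoRequest."""
    n = len(paths)
    if n < 2:
        # multi-keyframe mode only engages with >= 2 paths
        raise ValueError("multi-keyframe mode needs at least 2 paths")
    for name, vals in (("keyframeStrengths", strengths), ("keyframeTimes", times)):
        if vals is not None and len(vals) != n:
            raise ValueError(f"{name} must match keyframePaths length ({n})")
    if strengths is not None and not all(0.1 <= s <= 1.0 for s in strengths):
        raise ValueError("keyframeStrengths must lie in [0.1, 1.0]")
    if times is not None and duration is not None:
        if not all(0 <= t <= duration for t in times):
            raise ValueError("keyframeTimes must fall within [0, duration]")
    return True
```

In the server itself these checks would live in a Pydantic validator on the request model; the plain function just makes the documented rules explicit.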
LTX2.3-1.0.4/patches/app_factory.py ADDED
The diff for this file is too large to render. See raw diff
 
LTX2.3-1.0.4/patches/app_settings_patch.py ADDED
@@ -0,0 +1,22 @@
+ """Runtime patch: add a lora_dir field to AppSettings if it does not exist."""
+
+
+ def patch_app_settings():
+     try:
+         from state.app_settings import AppSettings
+         from pydantic import Field
+
+         if "lora_dir" not in AppSettings.model_fields:
+             AppSettings.model_fields["lora_dir"] = Field(
+                 default="", validation_alias="loraDir", serialization_alias="loraDir"
+             )
+             # Pydantic v2's keyword is `force`, not `_force`
+             AppSettings.model_rebuild(force=True)
+         print("[PATCH] AppSettings patched: added lora_dir field")
+     except Exception as e:
+         print(f"[PATCH] AppSettings patch failed: {e}")
+
+
+ patch_app_settings()
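The same runtime-field trick can be tried on a toy model. This assumes Pydantic v2; `Settings` is a stand-in for the real AppSettings, and mutating `model_fields` like this is an unofficial workaround rather than a supported API:

```python
from pydantic import BaseModel, Field

class Settings(BaseModel):
    theme: str = "dark"

if "lora_dir" not in Settings.model_fields:
    field = Field(default="")           # Field() returns a FieldInfo
    field.annotation = str              # Field() alone carries no annotation
    Settings.model_fields["lora_dir"] = field
    Settings.model_rebuild(force=True)  # regenerate the core schema from model_fields

print(Settings().lora_dir)
```

A forward-compatible alternative would be to subclass the model and declare the field normally; the patch mutates in place so the rest of the backend keeps importing the original class.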
LTX2.3-1.0.4/patches/handlers/__pycache__/video_generation_handler.cpython-313.pyc ADDED
Binary file (36.5 kB). View file
 
LTX2.3-1.0.4/patches/handlers/video_generation_handler.py ADDED
@@ -0,0 +1,868 @@
+ """Video generation orchestration handler."""
2
+
3
+ from __future__ import annotations
4
+
5
+ import logging
6
+ import os
7
+ import tempfile
8
+ import time
9
+ import uuid
10
+ from datetime import datetime
11
+ from pathlib import Path
12
+ from threading import RLock
13
+ from typing import TYPE_CHECKING
14
+
15
+ from PIL import Image
16
+
17
+ from api_types import (
18
+ GenerateVideoRequest,
19
+ GenerateVideoResponse,
20
+ ImageConditioningInput,
21
+ VideoCameraMotion,
22
+ )
23
+ from _routes._errors import HTTPError
24
+ from handlers.base import StateHandlerBase
25
+ from handlers.generation_handler import GenerationHandler
26
+ from handlers.pipelines_handler import PipelinesHandler
27
+ from handlers.text_handler import TextHandler
28
+ from runtime_config.model_download_specs import resolve_model_path
29
+ from server_utils.media_validation import (
30
+ normalize_optional_path,
31
+ validate_audio_file,
32
+ validate_image_file,
33
+ )
34
+ from services.interfaces import LTXAPIClient
35
+ from state.app_state_types import AppState
36
+ from state.app_settings import should_video_generate_with_ltx_api
37
+
38
+ if TYPE_CHECKING:
39
+ from runtime_config.runtime_config import RuntimeConfig
40
+
41
+ logger = logging.getLogger(__name__)
42
+
43
+ FORCED_API_MODEL_MAP: dict[str, str] = {
44
+ "fast": "ltx-2-3-fast",
45
+ "pro": "ltx-2-3-pro",
46
+ }
47
+ FORCED_API_RESOLUTION_MAP: dict[str, dict[str, str]] = {
48
+ "1080p": {"16:9": "1920x1080", "9:16": "1080x1920"},
49
+ "1440p": {"16:9": "2560x1440", "9:16": "1440x2560"},
50
+ "2160p": {"16:9": "3840x2160", "9:16": "2160x3840"},
51
+ }
52
+ A2V_FORCED_API_RESOLUTION = "1920x1080"
53
+ FORCED_API_ALLOWED_ASPECT_RATIOS = {"16:9", "9:16"}
54
+ FORCED_API_ALLOWED_FPS = {24, 25, 48, 50}
55
+
56
+
57
+ def _get_allowed_durations(model_id: str, resolution_label: str, fps: int) -> set[int]:
58
+ if model_id == "ltx-2-3-fast" and resolution_label == "1080p" and fps in {24, 25}:
59
+ return {6, 8, 10, 12, 14, 16, 18, 20}
60
+ return {6, 8, 10}
61
+
62
+
63
+ class VideoGenerationHandler(StateHandlerBase):
64
+ def __init__(
65
+ self,
66
+ state: AppState,
67
+ lock: RLock,
68
+ generation_handler: GenerationHandler,
69
+ pipelines_handler: PipelinesHandler,
70
+ text_handler: TextHandler,
71
+ ltx_api_client: LTXAPIClient,
72
+ config: RuntimeConfig,
73
+ ) -> None:
74
+ super().__init__(state, lock, config)
75
+ self._generation = generation_handler
76
+ self._pipelines = pipelines_handler
77
+ self._text = text_handler
78
+ self._ltx_api_client = ltx_api_client
79
+
80
+ def generate(self, req: GenerateVideoRequest) -> GenerateVideoResponse:
81
+ if should_video_generate_with_ltx_api(
82
+ force_api_generations=self.config.force_api_generations,
83
+ settings=self.state.app_settings,
84
+ ):
85
+ return self._generate_forced_api(req)
86
+
87
+ if self._generation.is_generation_running():
88
+ raise HTTPError(409, "Generation already in progress")
89
+
90
+ resolution = req.resolution
91
+
92
+ duration = int(float(req.duration))
93
+ fps = int(float(req.fps))
94
+
95
+ audio_path = normalize_optional_path(req.audioPath)
96
+ if audio_path:
97
+ return self._generate_a2v(req, duration, fps, audio_path=audio_path)
98
+
99
+ logger.info("Resolution %s - using fast pipeline", resolution)
100
+
101
+ RESOLUTION_MAP_16_9: dict[str, tuple[int, int]] = {
102
+ "540p": (1024, 576),
103
+ "720p": (1280, 704),
104
+ "1080p": (1920, 1088),
105
+ }
106
+
107
+ def get_16_9_size(res: str) -> tuple[int, int]:
108
+ return RESOLUTION_MAP_16_9.get(res, (1280, 704))
109
+
110
+ def get_9_16_size(res: str) -> tuple[int, int]:
111
+ w, h = get_16_9_size(res)
112
+ return h, w
113
+
114
+ match req.aspectRatio:
115
+ case "9:16":
116
+ width, height = get_9_16_size(resolution)
117
+ case "16:9":
118
+ width, height = get_16_9_size(resolution)
119
+
120
+ num_frames = self._compute_num_frames(duration, fps)
121
+
122
+ image = None
123
+ image_path = normalize_optional_path(req.imagePath)
124
+ if image_path:
125
+ image = self._prepare_image(image_path, width, height)
126
+ logger.info("Image: %s -> %sx%s", image_path, width, height)
127
+
128
+ generation_id = self._make_generation_id()
129
+ seed = self._resolve_seed()
130
+
131
+ logger.info(
132
+ f"Request loraPath: '{req.loraPath}', loraStrength: {req.loraStrength}, inferenceSteps: {req.inferenceSteps}"
133
+ )
134
+
135
+ # 尝试支持自定义步数(实验性)
136
+ inference_steps = req.inferenceSteps
137
+ logger.info(f"Using inference steps: {inference_steps}")
138
+
139
+ loras = None
140
+ if req.loraPath and req.loraPath.strip():
141
+ try:
142
+ import os
143
+ from pathlib import Path
144
+ from ltx_core.loader import LoraPathStrengthAndSDOps
145
+ from ltx_core.loader.sd_ops import LTXV_LORA_COMFY_RENAMING_MAP
146
+
147
+ lora_path = req.loraPath.strip()
148
+ logger.info(
149
+ f"LoRA path: {lora_path}, exists: {os.path.exists(lora_path)}"
150
+ )
151
+
152
+ if os.path.exists(lora_path):
153
+ loras = [
154
+ LoraPathStrengthAndSDOps(
155
+ path=lora_path,
156
+ strength=req.loraStrength,
157
+ sd_ops=LTXV_LORA_COMFY_RENAMING_MAP,
158
+ )
159
+ ]
160
+ logger.info(
161
+ f"LoRA prepared: {lora_path} with strength {req.loraStrength}"
162
+ )
163
+ else:
164
+ logger.warning(f"LoRA file not found: {lora_path}")
165
+ except Exception as e:
166
+ logger.warning(f"Failed to load LoRA: {e}")
167
+ import traceback
168
+
169
+ logger.warning(f"LoRA traceback: {traceback.format_exc()}")
170
+ loras = None
171
+
172
+ lora_path_req = (req.loraPath or "").strip()
173
+ desired_sig = (
174
+ "fast",
175
+ lora_path_req if loras is not None else "",
176
+ round(float(req.loraStrength), 4) if loras is not None else 0.0,
177
+ )
178
+ try:
179
+ if loras is not None:
180
+ # 强制卸载并重新加载带LoRA的pipeline
181
+ logger.info("Unloading pipeline for LoRA...")
182
+ from keep_models_runtime import force_unload_gpu_pipeline
183
+
184
+ force_unload_gpu_pipeline(self._pipelines)
185
+
186
+ # 强制垃圾回收
187
+ import gc
188
+
189
+ gc.collect()
190
+ # 释放 CUDA 缓存,降低 LoRA 首次构建的显存峰值/碎片风险
191
+ try:
192
+ import torch
193
+ if torch.cuda.is_available():
194
+ torch.cuda.empty_cache()
195
+ torch.cuda.ipc_collect()
196
+ except Exception:
197
+ pass
198
+
199
+ gemma_root = self._pipelines._text_handler.resolve_gemma_root()
200
+ from runtime_config.model_download_specs import resolve_model_path
201
+ from services.fast_video_pipeline.ltx_fast_video_pipeline import (
202
+ LTXFastVideoPipeline,
203
+ )
204
+
205
+ checkpoint_path = str(
206
+ resolve_model_path(
207
+ self._pipelines.models_dir,
208
+ self._pipelines.config.model_download_specs,
209
+ "checkpoint",
210
+ )
211
+ )
212
+ upsampler_path = str(
213
+ resolve_model_path(
214
+ self._pipelines.models_dir,
215
+ self._pipelines.config.model_download_specs,
216
+ "upsampler",
217
+ )
218
+ )
219
+
220
+ logger.info(
221
+ f"Creating pipeline with LoRA: {loras}, steps: {inference_steps}"
222
+ )
223
+                from lora_injection import (
+                    _lora_init_kwargs,
+                    inject_loras_into_fast_pipeline,
+                )
+
+                lora_kw = _lora_init_kwargs(LTXFastVideoPipeline, loras)
+                pipeline = LTXFastVideoPipeline(
+                    checkpoint_path,
+                    gemma_root,
+                    upsampler_path,
+                    self._pipelines.config.device,
+                    **lora_kw,
+                )
+                n_inj = inject_loras_into_fast_pipeline(pipeline, loras)
+                if hasattr(pipeline, "pipeline") and hasattr(
+                    pipeline.pipeline, "model_ledger"
+                ):
+                    try:
+                        pipeline.pipeline.model_ledger.loras = tuple(loras)
+                    except Exception:
+                        pass
+                logger.info(
+                    "LoRA injection: init_kw=%s, injection points=%s, model_ledger.loras=%s",
+                    list(lora_kw.keys()),
+                    n_inj,
+                    getattr(
+                        getattr(getattr(pipeline, "pipeline", None), "model_ledger", None),
+                        "loras",
+                        None,
+                    ),
+                )
+
+                from state.app_state_types import (
+                    VideoPipelineState,
+                    VideoPipelineWarmth,
+                    GpuSlot,
+                )
+
+                state = VideoPipelineState(
+                    pipeline=pipeline,
+                    warmth=VideoPipelineWarmth.COLD,
+                    is_compiled=False,
+                )
+
+                self._pipelines.state.gpu_slot = GpuSlot(
+                    active_pipeline=state, generation=None
+                )
+                logger.info("Pipeline with LoRA loaded successfully")
+            else:
+                # With or without LoRA, reload the pipeline using the custom step count
+                logger.info(f"Loading pipeline with {inference_steps} steps")
+                from keep_models_runtime import force_unload_gpu_pipeline
+
+                force_unload_gpu_pipeline(self._pipelines)
+
+                import gc
+
+                gc.collect()
+
+                gemma_root = self._pipelines._text_handler.resolve_gemma_root()
+                from runtime_config.model_download_specs import resolve_model_path
+                from services.fast_video_pipeline.ltx_fast_video_pipeline import (
+                    LTXFastVideoPipeline,
+                )
+
+                checkpoint_path = str(
+                    resolve_model_path(
+                        self._pipelines.models_dir,
+                        self._pipelines.config.model_download_specs,
+                        "checkpoint",
+                    )
+                )
+                upsampler_path = str(
+                    resolve_model_path(
+                        self._pipelines.models_dir,
+                        self._pipelines.config.model_download_specs,
+                        "upsampler",
+                    )
+                )
+
+                pipeline = LTXFastVideoPipeline(
+                    checkpoint_path,
+                    gemma_root,
+                    upsampler_path,
+                    self._pipelines.config.device,
+                )
+
+                from state.app_state_types import (
+                    VideoPipelineState,
+                    VideoPipelineWarmth,
+                    GpuSlot,
+                )
+
+                state = VideoPipelineState(
+                    pipeline=pipeline,
+                    warmth=VideoPipelineWarmth.COLD,
+                    is_compiled=False,
+                )
+
+                self._pipelines.state.gpu_slot = GpuSlot(
+                    active_pipeline=state, generation=None
+                )
+
+            self._pipelines._pipeline_signature = desired_sig
+
+            self._generation.start_generation(generation_id)
+
+            output_path = self.generate_video(
+                prompt=req.prompt,
+                image=image,
+                height=height,
+                width=width,
+                num_frames=num_frames,
+                fps=fps,
+                seed=seed,
+                camera_motion=req.cameraMotion,
+                negative_prompt=req.negativePrompt,
+            )
+
+            self._generation.complete_generation(output_path)
+            return GenerateVideoResponse(status="complete", video_path=output_path)
+
+        except Exception as e:
+            self._generation.fail_generation(str(e))
+            if "cancelled" in str(e).lower():
+                logger.info("Generation cancelled by user")
+                return GenerateVideoResponse(status="cancelled")
+
+            raise HTTPError(500, str(e)) from e
+
+    def generate_video(
+        self,
+        prompt: str,
+        image: Image.Image | None,
+        height: int,
+        width: int,
+        num_frames: int,
+        fps: float,
+        seed: int,
+        camera_motion: VideoCameraMotion,
+        negative_prompt: str,
+    ) -> str:
+        t_total_start = time.perf_counter()
+        gen_mode = "i2v" if image is not None else "t2v"
+        logger.info(
+            "[%s] Generation started (model=fast, %dx%d, %d frames, %d fps)",
+            gen_mode,
+            width,
+            height,
+            num_frames,
+            int(fps),
+        )
+
+        if self._generation.is_generation_cancelled():
+            raise RuntimeError("Generation was cancelled")
+
+        if not resolve_model_path(
+            self.models_dir, self.config.model_download_specs, "checkpoint"
+        ).exists():
+            raise RuntimeError(
+                "Models not downloaded. Please download the AI models first using the Model Status menu."
+            )
+
+        total_steps = 8
+
+        self._generation.update_progress("loading_model", 5, 0, total_steps)
+        t_load_start = time.perf_counter()
+        pipeline_state = self._pipelines.load_gpu_pipeline("fast", should_warm=False)
+        t_load_end = time.perf_counter()
+        logger.info("[%s] Pipeline load: %.2fs", gen_mode, t_load_end - t_load_start)
+
+        self._generation.update_progress("encoding_text", 10, 0, total_steps)
+
+        enhanced_prompt = prompt + self.config.camera_motion_prompts.get(
+            camera_motion, ""
+        )
+
+        images: list[ImageConditioningInput] = []
+        temp_image_path: str | None = None
+        if image is not None:
+            temp_image_path = tempfile.NamedTemporaryFile(
+                suffix=".png", delete=False
+            ).name
+            image.save(temp_image_path)
+            images = [
+                ImageConditioningInput(path=temp_image_path, frame_idx=0, strength=1.0)
+            ]
+
+        output_path = self._make_output_path()
+
+        try:
+            settings = self.state.app_settings
+            use_api_encoding = not self._text.should_use_local_encoding()
+            if image is not None:
+                enhance = use_api_encoding and settings.prompt_enhancer_enabled_i2v
+            else:
+                enhance = use_api_encoding and settings.prompt_enhancer_enabled_t2v
+
+            encoding_method = "api" if use_api_encoding else "local"
+            t_text_start = time.perf_counter()
+            self._text.prepare_text_encoding(enhanced_prompt, enhance_prompt=enhance)
+            t_text_end = time.perf_counter()
+            logger.info(
+                "[%s] Text encoding (%s): %.2fs",
+                gen_mode,
+                encoding_method,
+                t_text_end - t_text_start,
+            )
+
+            self._generation.update_progress("inference", 15, 0, total_steps)
+
+            height = round(height / 64) * 64
+            width = round(width / 64) * 64
+
+            t_inference_start = time.perf_counter()
+            pipeline_state.pipeline.generate(
+                prompt=enhanced_prompt,
+                seed=seed,
+                height=height,
+                width=width,
+                num_frames=num_frames,
+                frame_rate=fps,
+                images=images,
+                output_path=str(output_path),
+            )
+            t_inference_end = time.perf_counter()
+            logger.info(
+                "[%s] Inference: %.2fs", gen_mode, t_inference_end - t_inference_start
+            )
+
+            if self._generation.is_generation_cancelled():
+                if output_path.exists():
+                    output_path.unlink()
+                raise RuntimeError("Generation was cancelled")
+
+            t_total_end = time.perf_counter()
+            logger.info(
+                "[%s] Total generation: %.2fs (load=%.2fs, text=%.2fs, inference=%.2fs)",
+                gen_mode,
+                t_total_end - t_total_start,
+                t_load_end - t_load_start,
+                t_text_end - t_text_start,
+                t_inference_end - t_inference_start,
+            )
+
+            self._generation.update_progress("complete", 100, total_steps, total_steps)
+            return str(output_path)
+        finally:
+            self._text.clear_api_embeddings()
+            if temp_image_path and os.path.exists(temp_image_path):
+                os.unlink(temp_image_path)
+
+    def _generate_a2v(
+        self, req: GenerateVideoRequest, duration: int, fps: int, *, audio_path: str
+    ) -> GenerateVideoResponse:
+        if req.model != "pro":
+            logger.warning(
+                "A2V local requested with model=%s; A2V always uses pro pipeline",
+                req.model,
+            )
+        validated_audio_path = validate_audio_file(audio_path)
+        audio_path_str = str(validated_audio_path)
+
+        # Supports both portrait and landscape
+        RESOLUTION_MAP: dict[str, tuple[int, int]] = {
+            "540p": (1024, 576),
+            "720p": (1280, 704),
+            "1080p": (1920, 1088),
+        }
+
+        base_w, base_h = RESOLUTION_MAP.get(req.resolution, (1280, 704))
+
+        # Adjust resolution according to aspectRatio
+        if req.aspectRatio == "9:16":
+            width, height = base_h, base_w  # portrait
+        else:
+            width, height = base_w, base_h  # landscape
+
+        num_frames = self._compute_num_frames(duration, fps)
+
+        image = None
+        temp_image_path: str | None = None
+        image_path = normalize_optional_path(req.imagePath)
+        if image_path:
+            image = self._prepare_image(image_path, width, height)
+
+        # First/last frame conditioning
+        start_frame_path = normalize_optional_path(getattr(req, "startFramePath", None))
+        end_frame_path = normalize_optional_path(getattr(req, "endFramePath", None))
+
+        seed = self._resolve_seed()
+
+        generation_id = self._make_generation_id()
+
+        temp_image_paths: list[str] = []
+        try:
+            a2v_state = self._pipelines.load_a2v_pipeline()
+            self._generation.start_generation(generation_id)
+
+            enhanced_prompt = req.prompt + self.config.camera_motion_prompts.get(
+                req.cameraMotion, ""
+            )
+            neg = (
+                req.negativePrompt
+                if req.negativePrompt
+                else self.config.default_negative_prompt
+            )
+
+            images: list[ImageConditioningInput] = []
+
+            # First frame
+            if start_frame_path:
+                start_img = self._prepare_image(start_frame_path, width, height)
+                temp_start_path = tempfile.NamedTemporaryFile(
+                    suffix=".png", delete=False
+                ).name
+                start_img.save(temp_start_path)
+                temp_image_paths.append(temp_start_path)
+                images.append(
+                    ImageConditioningInput(
+                        path=temp_start_path, frame_idx=0, strength=1.0
+                    )
+                )
+
+            # Mid image (if any)
+            if image is not None and not start_frame_path:
+                temp_image_path = tempfile.NamedTemporaryFile(
+                    suffix=".png", delete=False
+                ).name
+                image.save(temp_image_path)
+                temp_image_paths.append(temp_image_path)
+                images.append(
+                    ImageConditioningInput(
+                        path=temp_image_path, frame_idx=0, strength=1.0
+                    )
+                )
+
+            # Last frame
+            if end_frame_path:
+                last_latent_idx = (num_frames - 1) // 8 + 1 - 1
+                end_img = self._prepare_image(end_frame_path, width, height)
+                temp_end_path = tempfile.NamedTemporaryFile(
+                    suffix=".png", delete=False
+                ).name
+                end_img.save(temp_end_path)
+                temp_image_paths.append(temp_end_path)
+                images.append(
+                    ImageConditioningInput(
+                        path=temp_end_path, frame_idx=last_latent_idx, strength=1.0
+                    )
+                )
+
+            output_path = self._make_output_path()
+
+            total_steps = 11  # distilled: 8 steps (stage 1) + 3 steps (stage 2)
+
+            a2v_settings = self.state.app_settings
+            a2v_use_api = not self._text.should_use_local_encoding()
+            if image is not None:
+                a2v_enhance = a2v_use_api and a2v_settings.prompt_enhancer_enabled_i2v
+            else:
+                a2v_enhance = a2v_use_api and a2v_settings.prompt_enhancer_enabled_t2v
+
+            self._generation.update_progress("loading_model", 5, 0, total_steps)
+            self._generation.update_progress("encoding_text", 10, 0, total_steps)
+            self._text.prepare_text_encoding(
+                enhanced_prompt, enhance_prompt=a2v_enhance
+            )
+            self._generation.update_progress("inference", 15, 0, total_steps)
+
+            a2v_state.pipeline.generate(
+                prompt=enhanced_prompt,
+                negative_prompt=neg,
+                seed=seed,
+                height=height,
+                width=width,
+                num_frames=num_frames,
+                frame_rate=fps,
+                num_inference_steps=total_steps,
+                images=images,
+                audio_path=audio_path_str,
+                audio_start_time=0.0,
+                audio_max_duration=None,
+                output_path=str(output_path),
+            )
+
+            if self._generation.is_generation_cancelled():
+                if output_path.exists():
+                    output_path.unlink()
+                raise RuntimeError("Generation was cancelled")
+
+            self._generation.update_progress("complete", 100, total_steps, total_steps)
+            self._generation.complete_generation(str(output_path))
+            return GenerateVideoResponse(status="complete", video_path=str(output_path))
+
+        except Exception as e:
+            self._generation.fail_generation(str(e))
+            if "cancelled" in str(e).lower():
+                logger.info("Generation cancelled by user")
+                return GenerateVideoResponse(status="cancelled")
+            raise HTTPError(500, str(e)) from e
+        finally:
+            self._text.clear_api_embeddings()
+            # Clean up all temporary images
+            for tmp_path in temp_image_paths:
+                if tmp_path and os.path.exists(tmp_path):
+                    try:
+                        os.unlink(tmp_path)
+                    except Exception:
+                        pass
+            if temp_image_path and os.path.exists(temp_image_path):
+                try:
+                    os.unlink(temp_image_path)
+                except Exception:
+                    pass
+
+    def _prepare_image(self, image_path: str, width: int, height: int) -> Image.Image:
+        validated_path = validate_image_file(image_path)
+        try:
+            img = Image.open(validated_path).convert("RGB")
+        except Exception:
+            raise HTTPError(400, f"Invalid image file: {image_path}") from None
+        img_w, img_h = img.size
+        target_ratio = width / height
+        img_ratio = img_w / img_h
+        if img_ratio > target_ratio:
+            new_h = height
+            new_w = int(img_w * (height / img_h))
+        else:
+            new_w = width
+            new_h = int(img_h * (width / img_w))
+        resized = img.resize((new_w, new_h), Image.Resampling.LANCZOS)
+        left = (new_w - width) // 2
+        top = (new_h - height) // 2
+        return resized.crop((left, top, left + width, top + height))
+
+    @staticmethod
+    def _make_generation_id() -> str:
+        return uuid.uuid4().hex[:8]
+
+    @staticmethod
+    def _compute_num_frames(duration: int, fps: int) -> int:
+        n = ((duration * fps) // 8) * 8 + 1
+        return max(n, 9)
+
+    def _resolve_seed(self) -> int:
+        settings = self.state.app_settings
+        if settings.seed_locked:
+            logger.info("Using locked seed: %s", settings.locked_seed)
+            return settings.locked_seed
+        return int(time.time()) % 2147483647
+
+    def _make_output_path(self) -> Path:
+        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+        return (
+            self.config.outputs_dir
+            / f"ltx2_video_{timestamp}_{self._make_generation_id()}.mp4"
+        )
+
+    def _generate_forced_api(self, req: GenerateVideoRequest) -> GenerateVideoResponse:
+        if self._generation.is_generation_running():
+            raise HTTPError(409, "Generation already in progress")
+
+        generation_id = self._make_generation_id()
+        self._generation.start_api_generation(generation_id)
+
+        audio_path = normalize_optional_path(req.audioPath)
+        image_path = normalize_optional_path(req.imagePath)
+        has_input_audio = bool(audio_path)
+        has_input_image = bool(image_path)
+
+        try:
+            self._generation.update_progress("validating_request", 5, None, None)
+
+            api_key = self.state.app_settings.ltx_api_key.strip()
+            logger.info(
+                "Forced API generation route selected (key_present=%s)", bool(api_key)
+            )
+            if not api_key:
+                raise HTTPError(400, "PRO_API_KEY_REQUIRED")
+
+            requested_model = req.model.strip().lower()
+            api_model_id = FORCED_API_MODEL_MAP.get(requested_model)
+            if api_model_id is None:
+                raise HTTPError(400, "INVALID_FORCED_API_MODEL")
+
+            resolution_label = req.resolution
+            resolution_by_aspect = FORCED_API_RESOLUTION_MAP.get(resolution_label)
+            if resolution_by_aspect is None:
+                raise HTTPError(400, "INVALID_FORCED_API_RESOLUTION")
+
+            aspect_ratio = req.aspectRatio.strip()
+            if aspect_ratio not in FORCED_API_ALLOWED_ASPECT_RATIOS:
+                raise HTTPError(400, "INVALID_FORCED_API_ASPECT_RATIO")
+
+            api_resolution = resolution_by_aspect[aspect_ratio]
+
+            prompt = req.prompt
+
+            if self._generation.is_generation_cancelled():
+                raise RuntimeError("Generation was cancelled")
+
+            if has_input_audio:
+                if requested_model != "pro":
+                    logger.warning(
+                        "A2V requested with model=%s; overriding to 'pro'",
+                        requested_model,
+                    )
+                    api_model_id = FORCED_API_MODEL_MAP["pro"]
+                if api_resolution != A2V_FORCED_API_RESOLUTION:
+                    logger.warning(
+                        "A2V requested with resolution=%s; overriding to '%s'",
+                        api_resolution,
+                        A2V_FORCED_API_RESOLUTION,
+                    )
+                    api_resolution = A2V_FORCED_API_RESOLUTION
+                validated_audio_path = validate_audio_file(audio_path)
+                validated_image_path: Path | None = None
+                if image_path is not None:
+                    validated_image_path = validate_image_file(image_path)
+
+                self._generation.update_progress("uploading_audio", 20, None, None)
+                audio_uri = self._ltx_api_client.upload_file(
+                    api_key=api_key,
+                    file_path=str(validated_audio_path),
+                )
+                image_uri: str | None = None
+                if validated_image_path is not None:
+                    self._generation.update_progress("uploading_image", 35, None, None)
+                    image_uri = self._ltx_api_client.upload_file(
+                        api_key=api_key,
+                        file_path=str(validated_image_path),
+                    )
+                self._generation.update_progress("inference", 55, None, None)
+                video_bytes = self._ltx_api_client.generate_audio_to_video(
+                    api_key=api_key,
+                    prompt=prompt,
+                    audio_uri=audio_uri,
+                    image_uri=image_uri,
+                    model=api_model_id,
+                    resolution=api_resolution,
+                )
+                self._generation.update_progress("downloading_output", 85, None, None)
+            elif has_input_image:
+                validated_image_path = validate_image_file(image_path)
+
+                duration = self._parse_forced_numeric_field(
+                    req.duration, "INVALID_FORCED_API_DURATION"
+                )
+                fps = self._parse_forced_numeric_field(
+                    req.fps, "INVALID_FORCED_API_FPS"
+                )
+                if fps not in FORCED_API_ALLOWED_FPS:
+                    raise HTTPError(400, "INVALID_FORCED_API_FPS")
+                if duration not in _get_allowed_durations(
+                    api_model_id, resolution_label, fps
+                ):
+                    raise HTTPError(400, "INVALID_FORCED_API_DURATION")
+
+                generate_audio = self._parse_audio_flag(req.audio)
+                self._generation.update_progress("uploading_image", 20, None, None)
+                image_uri = self._ltx_api_client.upload_file(
+                    api_key=api_key,
+                    file_path=str(validated_image_path),
+                )
+                self._generation.update_progress("inference", 55, None, None)
+                video_bytes = self._ltx_api_client.generate_image_to_video(
+                    api_key=api_key,
+                    prompt=prompt,
+                    image_uri=image_uri,
+                    model=api_model_id,
+                    resolution=api_resolution,
+                    duration=float(duration),
+                    fps=float(fps),
+                    generate_audio=generate_audio,
+                    camera_motion=req.cameraMotion,
+                )
+                self._generation.update_progress("downloading_output", 85, None, None)
+            else:
+                duration = self._parse_forced_numeric_field(
+                    req.duration, "INVALID_FORCED_API_DURATION"
+                )
+                fps = self._parse_forced_numeric_field(
+                    req.fps, "INVALID_FORCED_API_FPS"
+                )
+                if fps not in FORCED_API_ALLOWED_FPS:
+                    raise HTTPError(400, "INVALID_FORCED_API_FPS")
+                if duration not in _get_allowed_durations(
+                    api_model_id, resolution_label, fps
+                ):
+                    raise HTTPError(400, "INVALID_FORCED_API_DURATION")
+
+                generate_audio = self._parse_audio_flag(req.audio)
+                self._generation.update_progress("inference", 55, None, None)
+                video_bytes = self._ltx_api_client.generate_text_to_video(
+                    api_key=api_key,
+                    prompt=prompt,
+                    model=api_model_id,
+                    resolution=api_resolution,
+                    duration=float(duration),
+                    fps=float(fps),
+                    generate_audio=generate_audio,
+                    camera_motion=req.cameraMotion,
+                )
+                self._generation.update_progress("downloading_output", 85, None, None)
+
+            if self._generation.is_generation_cancelled():
+                raise RuntimeError("Generation was cancelled")
+
+            output_path = self._write_forced_api_video(video_bytes)
+            if self._generation.is_generation_cancelled():
+                output_path.unlink(missing_ok=True)
+                raise RuntimeError("Generation was cancelled")
+
+            self._generation.update_progress("complete", 100, None, None)
+            self._generation.complete_generation(str(output_path))
+            return GenerateVideoResponse(status="complete", video_path=str(output_path))
+        except HTTPError as e:
+            self._generation.fail_generation(e.detail)
+            raise
+        except Exception as e:
+            self._generation.fail_generation(str(e))
+            if "cancelled" in str(e).lower():
+                logger.info("Generation cancelled by user")
+                return GenerateVideoResponse(status="cancelled")
+            raise HTTPError(500, str(e)) from e
+
+    def _write_forced_api_video(self, video_bytes: bytes) -> Path:
+        output_path = self._make_output_path()
+        output_path.write_bytes(video_bytes)
+        return output_path
+
+    @staticmethod
+    def _parse_forced_numeric_field(raw_value: str, error_detail: str) -> int:
+        try:
+            return int(float(raw_value))
+        except (TypeError, ValueError):
+            raise HTTPError(400, error_detail) from None
+
+    @staticmethod
+    def _parse_audio_flag(audio_value: str | bool) -> bool:
+        if isinstance(audio_value, bool):
+            return audio_value
+        normalized = audio_value.strip().lower()
+        return normalized in {"1", "true", "yes", "on"}
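The frame-count snapping and lenient audio-flag parsing used above can be sketched standalone; the two functions below copy the math from `_compute_num_frames` and `_parse_audio_flag` (the top-level names are illustrative, not part of the patch):

```python
def compute_num_frames(duration: int, fps: int) -> int:
    # Snap to the 8*k + 1 frame counts the pipeline expects, with a floor of 9 frames.
    n = ((duration * fps) // 8) * 8 + 1
    return max(n, 9)


def parse_audio_flag(value) -> bool:
    # Accept booleans directly; otherwise treat common "truthy" strings as True.
    if isinstance(value, bool):
        return value
    return value.strip().lower() in {"1", "true", "yes", "on"}


print(compute_num_frames(5, 25))  # 5 s at 25 fps -> 121 frames (8*15 + 1)
print(parse_audio_flag("Yes"))    # True
```

Note that any requested duration is rounded down to the nearest 8n+1 count, so a 5-second clip at 25 fps yields 121 frames rather than 125.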
LTX2.3-1.0.4/patches/keep_models_runtime.py ADDED
@@ -0,0 +1,16 @@
+"""Only provides forced unloading of the GPU pipeline. The "keep models loaded" feature has been removed."""
+
+from __future__ import annotations
+
+from typing import Any
+
+
+def force_unload_gpu_pipeline(pipelines: Any) -> None:
+    """Free the VRAM held by the inference pipeline (GPU switch, cleanup, LoRA rebuild, etc.)."""
+    try:
+        pipelines.unload_gpu_pipeline()
+    except Exception:
+        try:
+            type(pipelines).unload_gpu_pipeline(pipelines)
+        except Exception:
+            pass
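The double fallback in `force_unload_gpu_pipeline` (bound instance call first, then the unbound class attribute) can be exercised with a stand-in object; `DummyPipelines` below is a hypothetical test double, not part of the patch:

```python
from typing import Any


def force_unload_gpu_pipeline(pipelines: Any) -> None:
    # Same shape as the patch: try the bound method, fall back to the
    # unbound class attribute, and swallow failures either way.
    try:
        pipelines.unload_gpu_pipeline()
    except Exception:
        try:
            type(pipelines).unload_gpu_pipeline(pipelines)
        except Exception:
            pass


class DummyPipelines:
    def __init__(self):
        self.unloaded = False

    def unload_gpu_pipeline(self):
        self.unloaded = True


p = DummyPipelines()
force_unload_gpu_pipeline(p)
print(p.unloaded)  # True
```

The class-attribute fallback covers objects whose instance method has been monkey-patched or bound in an unusual way, which is common in this patch set.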
LTX2.3-1.0.4/patches/launcher.py ADDED
@@ -0,0 +1,20 @@
+import sys
+import os
+
+patch_dir = os.path.dirname(os.path.abspath(__file__))
+backend_dir = r"C:\Program Files\LTX Desktop\resources\backend"
+
+# Defensive cleanup: strip every pre-existing reference to the default backend_dir
+sys.path = [p for p in sys.path if p and os.path.normpath(p) != os.path.normpath(backend_dir)]
+sys.path = [p for p in sys.path if p and p != "." and p != ""]
+
+# Priority injection: search patch_dir first, then the official backend
+sys.path.insert(0, patch_dir)
+sys.path.insert(1, backend_dir)
+
+import uvicorn
+from ltx2_server import app
+
+if __name__ == '__main__':
+    uvicorn.run(app, host="0.0.0.0", port=3000, log_level="info", access_log=False)
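launcher.py relies on list order in `sys.path`: earlier entries win, so patched modules shadow the originals. The same dedup-then-insert pattern on a plain list (the paths here are placeholders, not the real install paths):

```python
import os

backend_dir = r"C:\example\backend"  # placeholder path
patch_dir = r"C:\example\patches"    # placeholder path

search_path = [backend_dir, ".", r"C:\example\other"]

# Drop every existing reference to backend_dir (normpath makes the comparison robust)
search_path = [p for p in search_path
               if p and os.path.normpath(p) != os.path.normpath(backend_dir)]
search_path = [p for p in search_path if p not in (".", "")]

# Re-insert with patches first so patched modules shadow the originals
search_path.insert(0, patch_dir)
search_path.insert(1, backend_dir)

print(search_path[:2])  # patches first, then backend
```

Removing `"."` and `""` also hardens the import path against picking up modules from whatever the current working directory happens to be.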
LTX2.3-1.0.4/patches/lora_build_hook.py ADDED
@@ -0,0 +1,104 @@
+"""
+Merge the current request's user LoRAs at SingleGPUModelBuilder.build() time.
+
+The desktop Fast pipeline often only attaches loras to the model_ledger, while
+the actual weight load still uses the Builder created with empty loras; here we
+inject into the DiT/Transformer Builder right before build.
+"""
+
+from __future__ import annotations
+
+import contextvars
+import logging
+from dataclasses import replace
+from typing import Any
+
+logger = logging.getLogger(__name__)
+
+# Extra LoRAs (LoraPathStrengthAndSDOps tuples) to fuse during the current HTTP request/generation task
+_pending_user_loras: contextvars.ContextVar[tuple[Any, ...] | None] = contextvars.ContextVar(
+    "ltx_pending_user_loras", default=None
+)
+
+_HOOK_INSTALLED = False
+
+
+def pending_loras_token(loras: tuple[Any, ...] | None):
+    """Return a contextvar Token for a finally-reset; loras=None means this task uses no extra LoRAs."""
+    return _pending_user_loras.set(loras)
+
+
+def reset_pending_loras(token: contextvars.Token | None) -> None:
+    if token is not None:
+        _pending_user_loras.reset(token)
+
+
+def _get_pending() -> tuple[Any, ...] | None:
+    return _pending_user_loras.get()
+
+
+def _is_ltx_diffusion_transformer_builder(builder: Any) -> bool:
+    """Avoid mistakenly adding video LoRAs to the Gemma / VAE / Upsampler Builders."""
+    cfg = getattr(builder, "model_class_configurator", None)
+    if cfg is None:
+        return False
+    name = getattr(cfg, "__name__", "") or ""
+    # Exclude configurators that are clearly not the DiT
+    for bad in (
+        "Gemma",
+        "VideoEncoder",
+        "VideoDecoder",
+        "AudioEncoder",
+        "AudioDecoder",
+        "Vocoder",
+        "EmbeddingsProcessor",
+        "LatentUpsampler",
+    ):
+        if bad in name:
+            return False
+    try:
+        from ltx_core.model.transformer import LTXModelConfigurator
+
+        if isinstance(cfg, type):
+            try:
+                if issubclass(cfg, LTXModelConfigurator):
+                    return True
+            except TypeError:
+                pass
+        if cfg is LTXModelConfigurator:
+            return True
+    except ImportError:
+        pass
+    # Fallback: LTX main transformer configurator naming convention (VAE/Gemma already excluded above)
+    return "LTX" in name and "ModelConfigurator" in name
+
+
+def install_lora_build_hook() -> None:
+    global _HOOK_INSTALLED
+    if _HOOK_INSTALLED:
+        return
+    try:
+        from ltx_core.loader.single_gpu_model_builder import SingleGPUModelBuilder
+    except ImportError:
+        logger.warning("lora_build_hook: cannot import SingleGPUModelBuilder, skipping")
+        return
+
+    _orig_build = SingleGPUModelBuilder.build
+
+    def build(self: Any, *args: Any, **kwargs: Any) -> Any:
+        extra = _get_pending()
+        if extra and _is_ltx_diffusion_transformer_builder(self):
+            have = {getattr(x, "path", None) for x in self.loras}
+            add = tuple(x for x in extra if getattr(x, "path", None) not in have)
+            if add:
+                merged = (*tuple(self.loras), *add)
+                self = replace(self, loras=merged)
+                logger.info(
+                    "lora_build_hook: merged %d user LoRA(s) into the DiT Builder: %s",
+                    len(add),
+                    [getattr(x, "path", x) for x in add],
+                )
+        return _orig_build(self, *args, **kwargs)
+
+    SingleGPUModelBuilder.build = build  # type: ignore[method-assign]
+    _HOOK_INSTALLED = True
+    logger.info("lora_build_hook: patched SingleGPUModelBuilder.build")
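The Token-based set/reset pattern used by `pending_loras_token` / `reset_pending_loras` is standard `contextvars` usage: set for one task, restore the previous value in `finally`. A minimal standalone sketch:

```python
import contextvars

# A context-local slot, analogous to _pending_user_loras in the hook above.
pending = contextvars.ContextVar("pending", default=None)


def with_pending(value, fn):
    # Set for the duration of fn, then restore the previous value via the
    # Token, mirroring how the hook scopes LoRAs to one generation task.
    token = pending.set(value)
    try:
        return fn()
    finally:
        pending.reset(token)


result = with_pending(("lora_a.safetensors",), lambda: pending.get())
print(result)         # ('lora_a.safetensors',)
print(pending.get())  # None — restored after the task
```

Because the variable is context-local, concurrent requests in different asyncio tasks each see only their own pending LoRAs.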
LTX2.3-1.0.4/patches/lora_injection.py ADDED
@@ -0,0 +1,139 @@
+"""Inject user LoRAs into the Fast video pipeline: compatible with ModelLedger and the LTX-2 DiffusionStage/Builder."""
+
+from __future__ import annotations
+
+import inspect
+import logging
+from typing import Any
+
+logger = logging.getLogger(__name__)
+
+
+def _lora_init_kwargs(
+    pipeline_cls: type, loras: list[Any] | tuple[Any, ...]
+) -> dict[str, Any]:
+    if not loras:
+        return {}
+    try:
+        sig = inspect.signature(pipeline_cls.__init__)
+        names = sig.parameters.keys()
+    except (TypeError, ValueError):
+        return {}
+    tup = tuple(loras)
+    for key in ("loras", "lora", "extra_loras", "user_loras"):
+        if key in names:
+            return {key: tup}
+    return {}
+
+
+def inject_loras_into_fast_pipeline(ltx_pipe: Any, loras: list[Any] | tuple[Any, ...]) -> int:
+    """Best-effort: on an already constructed pipeline, write the LoRAs into every Builder/ledger that will take part in build. Returns the number of spots patched."""
+    if not loras:
+        return 0
+    tup = tuple(loras)
+    patched = 0
+    visited: set[int] = set()
+
+    def visit(obj: Any, depth: int) -> None:
+        nonlocal patched
+        if obj is None or depth > 10:
+            return
+        oid = id(obj)
+        if oid in visited:
+            return
+        visited.add(oid)
+
+        # ModelLedger.loras (older desktop builds)
+        ml = getattr(obj, "model_ledger", None)
+        if ml is not None:
+            try:
+                ml.loras = tup
+                patched += 1
+                logger.info("LoRA: set model_ledger.loras")
+            except Exception as e:
+                logger.debug("model_ledger.loras: %s", e)
+
+        # SingleGPUModelBuilder.with_loras (common and variant attribute names)
+        for holder in (obj, ml):
+            if holder is None:
+                continue
+            candidates: list[Any] = []
+            for attr in (
+                "_transformer_builder",
+                "transformer_builder",
+                "_model_builder",
+                "model_builder",
+            ):
+                tb = getattr(holder, attr, None)
+                if tb is not None:
+                    candidates.append((attr, tb))
+            try:
+                for attr in dir(holder):
+                    al = attr.lower()
+                    if "transformer" in al and "builder" in al and attr not in (
+                        "_transformer_builder",
+                        "transformer_builder",
+                    ):
+                        tb = getattr(holder, attr, None)
+                        if tb is not None:
+                            candidates.append((attr, tb))
+            except Exception:
+                pass
+            for attr, tb in candidates:
+                if hasattr(tb, "with_loras"):
+                    try:
+                        new_tb = tb.with_loras(tup)
+                        setattr(holder, attr, new_tb)
+                        patched += 1
+                        logger.info("LoRA: updated %s via with_loras", attr)
+                    except Exception as e:
+                        logger.debug("with_loras %s: %s", attr, e)
+
+        # DiffusionStage (matched by class name or isinstance)
+        is_diffusion = type(obj).__name__ == "DiffusionStage"
+        if not is_diffusion:
+            try:
+                from ltx_pipelines.utils.blocks import DiffusionStage as _DS
+
+                is_diffusion = isinstance(obj, _DS)
+            except ImportError:
+                pass
+        if is_diffusion:
+            tb = getattr(obj, "_transformer_builder", None)
+            if tb is not None and hasattr(tb, "with_loras"):
+                try:
+                    obj._transformer_builder = tb.with_loras(tup)
+                    patched += 1
+                    logger.info("LoRA: wrote DiffusionStage._transformer_builder")
+                except Exception as e:
+                    logger.debug("DiffusionStage: %s", e)
+
+        # Common nested attributes
+        for name in (
+            "pipeline",
+            "inner",
+            "_inner",
+            "fast_pipeline",
+            "_pipeline",
+            "stage_1",
+            "stage_2",
+            "stage",
+            "_stage",
+            "stages",
+            "diffusion",
+            "_diffusion",
+        ):
+            try:
+                ch = getattr(obj, name, None)
+            except Exception:
+                continue
+            if ch is not None and ch is not obj:
+                visit(ch, depth + 1)
+
+        if isinstance(obj, (list, tuple)):
+            for item in obj[:8]:
+                visit(item, depth + 1)
+
+    root = getattr(ltx_pipe, "pipeline", ltx_pipe)
+    visit(root, 0)
+    return patched
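`inject_loras_into_fast_pipeline` only duck-types: any object in the graph exposing a `model_ledger` (or a builder with `with_loras`) gets patched. A reduced sketch of the same recursive, cycle-protected traversal on a toy object graph (all classes here are stand-ins, and only the `model_ledger` case is shown):

```python
from typing import Any


def inject(root: Any, loras: tuple) -> int:
    # Reduced version of the traversal: set model_ledger.loras wherever one
    # exists, following a few known child attributes, with cycle protection.
    patched = 0
    seen: set[int] = set()

    def visit(obj: Any, depth: int) -> None:
        nonlocal patched
        if obj is None or depth > 10 or id(obj) in seen:
            return
        seen.add(id(obj))
        ledger = getattr(obj, "model_ledger", None)
        if ledger is not None:
            ledger.loras = loras
            patched += 1
        for name in ("pipeline", "stage_1", "stage_2"):
            visit(getattr(obj, name, None), depth + 1)

    visit(root, 0)
    return patched


class Ledger:
    loras = ()


class Stage:
    def __init__(self):
        self.model_ledger = Ledger()


class Pipe:
    def __init__(self):
        self.stage_1 = Stage()
        self.stage_2 = Stage()


pipe = Pipe()
n = inject(pipe, ("my_lora.safetensors",))
print(n)  # 2 — both stage ledgers patched
```

The `seen` set and depth cap are what keep the real function safe on pipelines whose attributes form cycles or very deep chains.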
LTX2.3-1.0.4/patches/low_vram_runtime.py ADDED
@@ -0,0 +1,227 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """低显存模式:尽量降峰值显存(以速度换显存);效果取决于官方管线是否支持 offload。"""
2
+
3
+ from __future__ import annotations
4
+
5
+ import gc
6
+ import logging
7
+ import os
8
+ import types
9
+ from pathlib import Path
10
+ from typing import Any
11
+
12
+ logger = logging.getLogger("ltx_low_vram")
13
+
14
+
15
+ def _ltx_desktop_config_dir() -> Path:
16
+ p = (
17
+ Path(os.environ.get("LOCALAPPDATA", os.path.expanduser("~/AppData/Local")))
18
+ / "LTXDesktop"
19
+ )
20
+ p.mkdir(parents=True, exist_ok=True)
21
+ return p.resolve()
22
+
23
+
24
+ def low_vram_pref_path() -> Path:
25
+ return _ltx_desktop_config_dir() / "low_vram_mode.pref"
26
+
27
+
28
+ def read_low_vram_pref() -> bool:
29
+ f = low_vram_pref_path()
30
+ if not f.is_file():
31
+ return False
32
+ return f.read_text(encoding="utf-8").strip().lower() in ("1", "true", "yes", "on")
33
+
34
+
35
+ def write_low_vram_pref(enabled: bool) -> None:
36
+ low_vram_pref_path().write_text(
37
+ "true\n" if enabled else "false\n", encoding="utf-8"
38
+ )
39
+
40
+
41
+ def apply_low_vram_config_tweaks(handler: Any) -> None:
42
+ """在官方 RuntimeConfig 上尽量关闭 fast 超分等(若字段存在)。"""
43
+ cfg = getattr(handler, "config", None)
44
+ if cfg is None:
45
+ return
46
+ fm = getattr(cfg, "fast_model", None)
47
+ if fm is None:
48
+ return
49
+ try:
50
+ if hasattr(fm, "model_copy"):
51
+ updated = fm.model_copy(update={"use_upscaler": False})
52
+ setattr(cfg, "fast_model", updated)
53
+ elif hasattr(fm, "use_upscaler"):
54
+ setattr(fm, "use_upscaler", False)
55
+ except Exception as e:
56
+ logger.debug("low_vram: 无法关闭 fast_model.use_upscaler: %s", e)
57
+
58
+
59
+ def install_low_vram_on_pipelines(handler: Any) -> None:
60
+ """启动时读取偏好,挂到 pipelines 上供各补丁读取。"""
61
+ pl = handler.pipelines
62
+ low = read_low_vram_pref()
63
+ setattr(pl, "low_vram_mode", bool(low))
64
+ if low:
65
+ apply_low_vram_config_tweaks(handler)
66
+ logger.info(
67
+ "low_vram_mode: 已开启(尝试关闭 fast 超分;若显存仍高,多为权重常驻 GPU,需降分辨率/时长或 FP8 权重)"
68
+ )
69
+
70
+
71
+ def install_low_vram_pipeline_hooks(pl: Any) -> None:
72
+ """在 load_gpu_pipeline / load_a2v 返回后尝试 Diffusers 式 CPU offload(无则静默)。"""
73
+ if getattr(pl, "_ltx_low_vram_hooks_installed", False):
74
+ return
75
+ pl._ltx_low_vram_hooks_installed = True
76
+
77
+ if hasattr(pl, "load_gpu_pipeline"):
78
+ _orig_gpu = pl.load_gpu_pipeline
79
+ pl._ltx_orig_load_gpu_for_low_vram = _orig_gpu
80
+
81
+ def _load_gpu_wrapped(self: Any, *a: Any, **kw: Any) -> Any:
82
+ r = _orig_gpu(*a, **kw)
83
+ if getattr(self, "low_vram_mode", False):
84
+ try_sequential_offload_on_pipeline_state(r)
85
+ return r
86
+
87
+ pl.load_gpu_pipeline = types.MethodType(_load_gpu_wrapped, pl)
88
+
89
+ if hasattr(pl, "load_a2v_pipeline"):
90
+ _orig_a2v = pl.load_a2v_pipeline
91
+ pl._ltx_orig_load_a2v_for_low_vram = _orig_a2v
92
+
93
+ def _load_a2v_wrapped(self: Any, *a: Any, **kw: Any) -> Any:
94
+ r = _orig_a2v(*a, **kw)
95
+ if getattr(self, "low_vram_mode", False):
96
+ try_sequential_offload_on_pipeline_state(r)
97
+ return r
98
+
99
+ pl.load_a2v_pipeline = types.MethodType(_load_a2v_wrapped, pl)
100
+
101
+     # Monkey patch: take over the low-level layer streaming added in 1.0.3 for precise, linear VRAM control
+     if not getattr(pl, "_ltx_layer_streaming_patched", False):
+         pl._ltx_layer_streaming_patched = True
+         try:
+             def _patch_pipeline_class(cls_name, mod_name):
+                 import importlib
+                 try:
+                     mod = importlib.import_module(mod_name)
+                     pipeline_cls = getattr(mod, cls_name)
+                     _orig_call = pipeline_cls.__call__
+
+                     def _patched_call(self, *args, **kwargs):
+                         lim = get_vram_limit()
+                         if lim is not None:
+                             if lim == 0:
+                                 # 0 means unlimited: disable streaming entirely; peak is around 26GB but this is fastest
+                                 kwargs["streaming_prefetch_count"] = None
+                                 logger.info("low_vram_mode: VRAM limit is unlimited (0). Disabled layer streaming.")
+                             else:
+                                 # Measured VRAM cost model for streaming_prefetch_count:
+                                 # count=1 -> ~10GB peak; count=8 -> ~14.7GB; count=14 -> ~19GB.
+                                 # That is, each extra count raises the true global peak by ~0.67 GB.
+                                 if lim <= 10.0:
+                                     count = 1
+                                 elif lim >= 25.0:
+                                     count = None  # close to the no-streaming peak, just open it up
+                                 else:
+                                     # Round from the 10.0GB baseline so the count tracks the user's input closely
+                                     extra_gb = float(lim) - 10.0
+                                     count = max(1, min(32, 1 + round(extra_gb / 0.67)))
+
+                                 kwargs["streaming_prefetch_count"] = count
+                                 logger.info(f"low_vram_mode: Dynamically tuned layer streaming prefetch count to {count} for {lim}GB limit.")
+
+                         return _orig_call(self, *args, **kwargs)
+
+                     pipeline_cls.__call__ = _patched_call
+                     logger.info(f"low_vram_mode: Successfully patched {cls_name} to override streaming_prefetch_count")
+                 except Exception:
+                     pass
+
+             _patch_pipeline_class("DistilledPipeline", "ltx_pipelines.distilled")
+             _patch_pipeline_class("LTXRetakePipeline", "services.retake_pipeline.ltx_retake_pipeline")
+             _patch_pipeline_class("ICLoRAPipeline", "services.ic_lora_pipeline.ltx_ic_lora_pipeline")
+             _patch_pipeline_class("A2VPipeline", "services.a2v_pipeline.distilled_a2v_pipeline")
+         except Exception:
+             pass
+
+
+ def get_vram_limit() -> float | None:
+     try:
+         import json
+         settings_file = _ltx_desktop_config_dir() / "settings.json"
+         if settings_file.exists():
+             with open(settings_file, "r", encoding="utf-8") as f:
+                 data = json.load(f)
+             if "vram_limit" in data:
+                 lim = data["vram_limit"]
+                 if lim != "":
+                     return float(lim)
+     except Exception:
+         pass
+     return None
+
+
+ def try_sequential_offload_on_pipeline_state(state: Any) -> None:
+     """Allocate up to the configured VRAM cap; when it overflows, spill into system memory."""
+     if state is None:
+         return
+     root = getattr(state, "pipeline", state)
+     candidates: list[Any] = [root]
+     inner = getattr(root, "pipeline", None)
+     if inner is not None and inner is not root:
+         candidates.append(inner)
+
+     vram_limit = get_vram_limit()
+
+     # We always apply the macro-level offload (enable_model_cpu_offload)
+     # to guarantee that T5 and VAE are evicted when DiT is generating, and vice versa.
+     # The micro-level (DiT intra-layer streaming) is already controlled by our __call__ hook.
+
+     # Fall back to the defaults (which apply the pipeline-level macro offload)
+     for obj in candidates:
+         for method_name in (
+             "enable_model_cpu_offload",
+             "enable_sequential_cpu_offload",
+         ):
+             fn = getattr(obj, method_name, None)
+             if callable(fn):
+                 try:
+                     fn()
+                     logger.info(
+                         "low_vram_mode: called %s() on the pipeline",
+                         method_name,
+                     )
+                     return
+                 except Exception as e:
+                     logger.debug(
+                         "low_vram_mode: %s() failed (safe to ignore): %s",
+                         method_name,
+                         e,
+                     )
+
+
+ def maybe_release_pipeline_after_task(handler: Any) -> None:
+     """After a single generation finishes: in low-VRAM mode, force-unload the pipeline and reclaim caches."""
+     pl = getattr(handler, "pipelines", None) or getattr(handler, "_pipelines", None)
+     if pl is None or not getattr(pl, "low_vram_mode", False):
+         return
+     try:
+         from keep_models_runtime import force_unload_gpu_pipeline
+
+         force_unload_gpu_pipeline(pl)
+     except Exception as e:
+         logger.debug("low_vram_mode: post-task unload failed: %s", e)
+     try:
+         pl._pipeline_signature = None
+     except Exception:
+         pass
+     gc.collect()
+     try:
+         import torch
+
+         if torch.cuda.is_available():
+             torch.cuda.empty_cache()
+     except Exception:
+         pass
LTX2.3-1.0.4/patches/runtime_policy.py ADDED
@@ -0,0 +1,21 @@
+ """Runtime policy decisions for forced API mode."""
+
+ from __future__ import annotations
+
+
+ def decide_force_api_generations(
+     system: str, cuda_available: bool, vram_gb: int | None
+ ) -> bool:
+     """Return whether API-only generation must be forced for this runtime."""
+     if system == "Darwin":
+         return True
+
+     if system in ("Windows", "Linux"):
+         if not cuda_available:
+             return True
+         if vram_gb is None:
+             return True
+         return vram_gb < 6
+
+     # Fail closed for non-target platforms unless explicitly relaxed.
+     return True
LTX2.3-1.0.4/patches/settings.json ADDED
@@ -0,0 +1,23 @@
+ {
+     "use_torch_compile": false,
+     "load_on_startup": false,
+     "ltx_api_key": "1231",
+     "user_prefers_ltx_api_video_generations": false,
+     "fal_api_key": "",
+     "use_local_text_encoder": true,
+     "fast_model": {
+         "use_upscaler": true
+     },
+     "pro_model": {
+         "steps": 20,
+         "use_upscaler": true
+     },
+     "prompt_cache_size": 100,
+     "prompt_enhancer_enabled_t2v": true,
+     "prompt_enhancer_enabled_i2v": false,
+     "gemini_api_key": "",
+     "seed_locked": false,
+     "locked_seed": 42,
+     "models_dir": "",
+     "lora_dir": ""
+ }
LTX2.3-1.0.4/run.bat ADDED
@@ -0,0 +1,38 @@
+ @echo off
+ title LTX-2 Cinematic Workstation
+
+ echo =========================================================
+ echo  LTX-2 Cinematic UI Booting...
+ echo =========================================================
+ echo.
+
+ set "LTX_PY=%USERPROFILE%\AppData\Local\LTXDesktop\python\python.exe"
+ set "LTX_UI_URL=http://127.0.0.1:4000/"
+
+ if exist "%LTX_PY%" (
+     echo [SUCCESS] LTX Bundled Python environment detected!
+     echo [INFO] Browser will open automatically when UI is ready...
+     start "" powershell -NoProfile -WindowStyle Hidden -Command "$ProgressPreference='SilentlyContinue'; $deadline=(Get-Date).AddSeconds(60); while((Get-Date) -lt $deadline){ try { Invoke-WebRequest -UseBasicParsing '%LTX_UI_URL%' -TimeoutSec 2 | Out-Null; Start-Process '%LTX_UI_URL%'; exit 0 } catch { Start-Sleep -Seconds 1 } }"
+     echo [INFO] Starting workspace natively...
+     echo ---------------------------------------------------------
+     "%LTX_PY%" main.py
+     pause
+     exit /b
+ )
+
+ python --version >nul 2>&1
+ if %errorlevel% equ 0 (
+     echo [WARNING] LTX Bundled Python not found.
+     echo [INFO] Browser will open automatically when UI is ready...
+     start "" powershell -NoProfile -WindowStyle Hidden -Command "$ProgressPreference='SilentlyContinue'; $deadline=(Get-Date).AddSeconds(60); while((Get-Date) -lt $deadline){ try { Invoke-WebRequest -UseBasicParsing '%LTX_UI_URL%' -TimeoutSec 2 | Out-Null; Start-Process '%LTX_UI_URL%'; exit 0 } catch { Start-Sleep -Seconds 1 } }"
+     echo [INFO] Falling back to global Python environment...
+     echo ---------------------------------------------------------
+     python main.py
+     pause
+     exit /b
+ )
+
+ echo [ERROR] FATAL: No Python interpreter found on this system.
+ echo [INFO] Please run install.bat to download and set up Python!
+ echo.
+ pause