LM Studio version problem: LM Studio 1.4 works

#2
by Cordobian - opened

but LM Studio 1.5 doesn't work.
[LLMProcess] Failed to load model _0x951f6c [Error]: Failed to load model.
at _0x590496.loadModel (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:1:562095)
at process.processTicksAndRejections (node:internal/process/task_queues:104:5)
at async _0x590496.handleMessage (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:1:554276) {
cause: 'Error when loading model: ValueError: Missing 84 parameters: \n' +
'vision_tower.blocks.0.attn.proj.biases,\n' +
'vision_tower.blocks.0.attn.qkv.biases,\n' +
'vision_tower.blocks.0.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.1.attn.proj.biases,\n' +
'vision_tower.blocks.1.attn.qkv.biases,\n' +
'vision_tower.blocks.1.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.10.attn.proj.biases,\n' +
'vision_tower.blocks.10.attn.qkv.biases,\n' +
'vision_tower.blocks.10.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.11.attn.proj.biases,\n' +
'vision_tower.blocks.11.attn.qkv.biases,\n' +
'vision_tower.blocks.11.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.12.attn.proj.biases,\n' +
'vision_tower.blocks.12.attn.qkv.biases,\n' +
'vision_tower.blocks.12.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.13.attn.proj.biases,\n' +
'vision_tower.blocks.13.attn.qkv.biases,\n' +
'vision_tower.blocks.13.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.14.attn.proj.biases,\n' +
'vision_tower.blocks.14.attn.qkv.biases,\n' +
'vision_tower.blocks.14.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.15.attn.proj.biases,\n' +
'vision_tower.blocks.15.attn.qkv.biases,\n' +
'vision_tower.blocks.15.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.16.attn.proj.biases,\n' +
'vision_tower.blocks.16.attn.qkv.biases,\n' +
'vision_tower.blocks.16.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.17.attn.proj.biases,\n' +
'vision_tower.blocks.17.attn.qkv.biases,\n' +
'vision_tower.blocks.17.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.18.attn.proj.biases,\n' +
'vision_tower.blocks.18.attn.qkv.biases,\n' +
'vision_tower.blocks.18.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.19.attn.proj.biases,\n' +
'vision_tower.blocks.19.attn.qkv.biases,\n' +
'vision_tower.blocks.19.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.2.attn.proj.biases,\n' +
'vision_tower.blocks.2.attn.qkv.biases,\n' +
'vision_tower.blocks.2.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.20.attn.proj.biases,\n' +
'vision_tower.blocks.20.attn.qkv.biases,\n' +
'vision_tower.blocks.20.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.21.attn.proj.biases,\n' +
'vision_tower.blocks.21.attn.qkv.biases,\n' +
'vision_tower.blocks.21.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.22.attn.proj.biases,\n' +
'vision_tower.blocks.22.attn.qkv.biases,\n' +
'vision_tower.blocks.22.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.23.attn.proj.biases,\n' +
'vision_tower.blocks.23.attn.qkv.biases,\n' +
'vision_tower.blocks.23.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.24.attn.proj.biases,\n' +
'vision_tower.blocks.24.attn.qkv.biases,\n' +
'vision_tower.blocks.24.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.25.attn.proj.biases,\n' +
'vision_tower.blocks.25.attn.qkv.biases,\n' +
'vision_tower.blocks.25.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.26.attn.proj.biases,\n' +
'vision_tower.blocks.26.attn.qkv.biases,\n' +
'vision_tower.blocks.26.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.3.attn.proj.biases,\n' +
'vision_tower.blocks.3.attn.qkv.biases,\n' +
'vision_tower.blocks.3.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.4.attn.proj.biases,\n' +
'vision_tower.blocks.4.attn.qkv.biases,\n' +
'vision_tower.blocks.4.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.5.attn.proj.biases,\n' +
'vision_tower.blocks.5.attn.qkv.biases,\n' +
'vision_tower.blocks.5.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.6.attn.proj.biases,\n' +
'vision_tower.blocks.6.attn.qkv.biases,\n' +
'vision_tower.blocks.6.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.7.attn.proj.biases,\n' +
'vision_tower.blocks.7.attn.qkv.biases,\n' +
'vision_tower.blocks.7.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.8.attn.proj.biases,\n' +
'vision_tower.blocks.8.attn.qkv.biases,\n' +
'vision_tower.blocks.8.mlp.linear_fc1.biases,\n' +
'vision_tower.blocks.9.attn.proj.biases,\n' +
'vision_tower.blocks.9.attn.qkv.biases,\n' +
'vision_tower.blocks.9.mlp.linear_fc1.biases,\n' +
'vision_tower.merger.linear_fc1.biases,\n' +
'vision_tower.merger.linear_fc2.biases,\n' +
'vision_tower.pos_embed.biases.',
suggestion: undefined,
errorData: undefined,
data: undefined,
displayData: undefined,
title: 'Failed to load model.'
}

Fixed. You need to download the updated *.safetensors files.
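You can sanity-check the re-downloaded weights without loading the model in LM Studio. A .safetensors file begins with an 8-byte little-endian header length followed by a JSON table of tensor names, so a short stdlib-only script can confirm the new file actually contains the previously missing `.biases` parameters. A minimal sketch (the file path and expected-name list below are illustrative, not taken from this thread):

```python
import json
import struct

def safetensors_tensor_names(path):
    """Return the tensor names stored in a .safetensors file.

    Reads only the header: 8 little-endian bytes giving the JSON
    header length, followed by the JSON table itself. The tensor
    data after the header is never read.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len).decode("utf-8"))
    # "__metadata__" is an optional non-tensor entry in the header.
    return sorted(k for k in header if k != "__metadata__")

def missing_parameters(path, expected):
    """List expected parameter names that are absent from the file."""
    present = set(safetensors_tensor_names(path))
    return sorted(set(expected) - present)

if __name__ == "__main__":
    # Hypothetical path; point this at the updated download.
    missing = missing_parameters(
        "model.safetensors",
        ["vision_tower.pos_embed.biases",
         "vision_tower.merger.linear_fc1.biases"],
    )
    print("missing:", missing)  # an empty list means the file is complete
```

If `missing` comes back empty for the names listed in the error above, the updated files should load.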

TheCluster changed discussion status to closed

Thanks, solved and working on version 1.6.
