ModuleNotFoundError: No module named 'transformers_modules.BytedanceDouyinContent.SAIL-VL-1'
I'm encountering a problem when trying to run inference with your model. I created a fresh conda env (Python 3.10.16) and installed the required packages (pip3 install einops transformers timm). But when I run:
import torch
from transformers import AutoModel, AutoTokenizer

path = "BytedanceDouyinContent/SAIL-VL-1.5-2B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
I get the following error:
ModuleNotFoundError: No module named 'transformers_modules.BytedanceDouyinContent.SAIL-VL-1'
Any idea why this might be happening?
Yes, I am also having the same problem.
Sorry for the late reply. This is caused by the dot (.) in the model folder name. You can remove any dots from the folder name, e.g., rename 1.5 to 1d5, to resolve the issue.
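For anyone wondering why the dot matters, here is an illustrative sketch. The module path shown is inferred from the error message, not from transformers internals: with trust_remote_code=True, transformers imports the model's custom code under a dynamic module path derived from the folder name, and Python splits module paths on dots, so a dot inside the folder name produces bogus module components.

```python
# Sketch: how a dot in the folder name corrupts the dynamic module path.
repo = "BytedanceDouyinContent/SAIL-VL-1.5-2B"
module_path = "transformers_modules." + repo.replace("/", ".")

# Python splits on every dot, so "SAIL-VL-1.5-2B" becomes two bogus
# components, "SAIL-VL-1" and "5-2B" -- hence the ModuleNotFoundError
# for 'transformers_modules.BytedanceDouyinContent.SAIL-VL-1'.
print(module_path.split("."))

# With the dot replaced (1.5 -> 1d5), the folder name stays one component:
fixed_repo = "BytedanceDouyinContent/SAIL-VL-1d5-2B"
fixed_path = "transformers_modules." + fixed_repo.replace("/", ".")
print(fixed_path.split("."))
```

This is why renaming the folder (or any scheme that avoids dots) makes the import succeed.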
Could you specify the versions of the libraries you used? After fixing the dot problem, I get this warning:

You are using a model of type internvl_chat to instantiate a model of type sailvl. This is not supported for all configurations of models and can yield errors.

followed by this error:

ValueError: Unsupported architecture: InternLM2ForCausalLM
Indeed, if I use "BytedanceDouyinContent/SAIL-VL-1d5-2B" as the path, then I can load the model. However, the loaded model uses AIMv2 as the vision encoder, not SAILViT-Huge as stated in your model card, so I'm not confident that the model I'm loading is actually SAIL-VL-1.5-2B. Could you please clarify this?