Batch inference

#11 · opened by eeyrw

I have no idea how to run inference in batch mode. Is there a reference example?

It seems pretty easy — the processor accepts a list of PIL images directly:

import torch
from PIL import Image
from transformers import DINOv3ViTImageProcessorFast, DINOv3ViTModel

images = [Image.open("1.jpg"), Image.open("2.jpg")]

model = DINOv3ViTModel.from_pretrained("facebook/dinov3-vitl16-pretrain-lvd1689m", dtype=torch.bfloat16).to("cuda")
processor = DINOv3ViTImageProcessorFast.from_pretrained("facebook/dinov3-vitl16-pretrain-lvd1689m")

inputs = processor(images=images, return_tensors="pt").to("cuda")
with torch.inference_mode():
    outputs = model(**inputs)

pooled_output = outputs.pooler_output  # one pooled feature vector per image
print("Pooled output shape:", pooled_output.shape)

But how can I incorporate this with a PyTorch DataLoader and Dataset?
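One common pattern is to have the Dataset yield raw PIL images and do the preprocessing in a custom collate_fn, so the processor batches a whole list of images at once inside the DataLoader workers. A minimal sketch (the class and helper names here are made up for illustration, and the model/processor usage at the bottom is left as comments since the DINOv3 checkpoints require download/authentication):

```python
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image


class ImageListDataset(Dataset):
    """Minimal dataset that loads one PIL image per item from a list of paths."""

    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Return the raw PIL image; preprocessing happens in the collate_fn.
        return Image.open(self.paths[idx]).convert("RGB")


def make_collate_fn(processor):
    # Wrap the HF image processor so it runs once per batch of PIL images,
    # producing a dict of stacked tensors ready to feed the model.
    def collate(batch):
        return processor(images=batch, return_tensors="pt")

    return collate


# Sketch of the inference loop (uncomment with real paths and credentials):
#
# from transformers import DINOv3ViTImageProcessorFast, DINOv3ViTModel
# processor = DINOv3ViTImageProcessorFast.from_pretrained("facebook/dinov3-vitl16-pretrain-lvd1689m")
# model = DINOv3ViTModel.from_pretrained("facebook/dinov3-vitl16-pretrain-lvd1689m", dtype=torch.bfloat16).to("cuda")
# loader = DataLoader(ImageListDataset(paths), batch_size=8,
#                     collate_fn=make_collate_fn(processor), num_workers=4)
# with torch.inference_mode():
#     for inputs in loader:
#         inputs = {k: v.to("cuda") for k, v in inputs.items()}
#         features = model(**inputs).pooler_output
```

Doing the processor call in collate_fn (rather than in `__getitem__`) lets the fast processor resize and normalize the whole batch in one call, and `num_workers` parallelizes the image decoding.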
