This model is a fine-tuned version of HuggingFaceTB/SmolVLM2-500M-Instruct specialized in structured food extraction. It analyzes images to determine if they contain food, generates a short title, and extracts lists of visible food and drink items in a specific JSON format.
It has been trained using TRL in a two-stage process to ensure high accuracy in structured output generation.
This model relies on a specific system prompt and user prompt structure to output the correct JSON format.
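For example, for a photo of a plated salmon dish, a well-formed response would look like this (an illustrative, hand-written example, not actual model output):

```json
{
  "is_food": 1,
  "image_title": "Fried salmon steak with green beans",
  "food_items": ["salmon steak", "green beans"],
  "drink_items": []
}
```

The snippet below runs the full pipeline: load the model, build the prompt, and generate.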
````python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
from io import BytesIO
import requests
# 1) Load fine-tuned model and processor
model_id = "berkeruveyik/FoodExtraqt-Vision-SmoLVLM2-500M-fine-tune-v3" # Replace with your model ID
print("Loading model and processor...")
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()
print("Model ready!")
# 2) Prompts
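# NOTE: The model relies on these exact prompt strings; keep them verbatim,
# since rewording them can degrade the structured JSON output.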
SYSTEM_MESSAGE = """You are an expert food and drink image extractor.
You provide structured data to visual inputs classifying them as edible food/drink or not.
as well as titling the image with a simple simple food/drink related caption.
Finally you extract any and all visible food/drink items to lists."""
USER_PROMPT = """Classify the given input image into food or not, and if edible food or drink items are present, extract them into lists. If no food/drink items are visible, return an empty list.
Only return valid JSON in the following form:
```json
{
  "is_food": 0,
  "image_title": "",
  "food_items": [],
  "drink_items": []
}
```"""
# 3) Load image
image_url = "https://www.shutterstock.com/image-photo/fried-salmon-steak-cooked-green-600nw-2489026949.jpg"
print(f"\nLoading image from: {image_url}")
resp = requests.get(image_url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
resp.raise_for_status()
# Decode the image from the downloaded bytes; BytesIO gives PIL a seekable buffer.
image = Image.open(BytesIO(resp.content)).convert("RGB")
# 4) Prepare inputs
# The system instructions are concatenated into the user turn; this matches the
# prompt structure the model expects.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": SYSTEM_MESSAGE + "\n\n" + USER_PROMPT},
        ],
    }
]
text = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)
inputs = processor(
    text=text,
    images=image,
    return_tensors="pt",
)
# Move tensors to the model device; cast floating-point tensors (the pixel
# values) to the model dtype.
inputs = {
    k: v.to(model.device, dtype=model.dtype) if torch.is_floating_point(v) else v.to(model.device)
    for k, v in inputs.items()
}
# 5) Generate
print("\nGenerating output...")
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# 6) Decode only the newly generated tokens
prompt_len = inputs["input_ids"].shape[1]
output_text = processor.batch_decode(
    generated_ids[:, prompt_len:],
    skip_special_tokens=True,
)[0]
print("\n" + "="*60)
print("OUTPUT:")
print("="*60)
print(output_text)
print("="*60)
This model was fine-tuned in two stages to preserve visual capabilities while learning the strict JSON output structure.
The following hyperparameters were used during the second stage of training:
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```