---
pretty_name: BagBuddy Dataset
size_categories:
  - n<1K
task_categories:
  - image-classification
  - object-detection
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: image
      dtype: image
    - name: caption
      dtype: string
    - name: bucket
      dtype: string
    - name: sample_id
      dtype: string
    - name: labels
      list: string
    - name: annotation_indices
      list: int64
    - name: annotation_x
      list: int64
    - name: annotation_y
      list: int64
    - name: annotation_count
      dtype: int64
  splits:
    - name: train
      num_bytes: 12169999
      num_examples: 40
  download_size: 12169290
  dataset_size: 12169999
---

# LLM-Pack: Grocery Detection Dataset

A small object detection and scene understanding dataset containing tabletop grocery scenes with annotated item names and object locations. The dataset consists of 40 images with varying object counts, designed for evaluating object detection, counting, and multimodal reasoning systems in cluttered grocery scenarios.

## Dataset Overview

- Total scenes: 40
- Object counts per scene: 6, 8, 10, 12, 14, 16, 18, or 20 items
- Samples per object-count category: 5
- Annotations: object names + object center coordinates
- Image resolution: 1920×1080
- Task type: object detection / scene understanding / counting

## Dataset Structure

Each sample contains:

```python
{
    "image": PIL.Image,
    "caption": str,
    "bucket": str,  # number of items in the image
    "sample_id": str,

    "labels": List[str],
    "annotation_indices": List[int],
    "annotation_x": List[int],
    "annotation_y": List[int],

    "annotation_count": int
}
```
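
The four annotation lists are parallel, and `annotation_count` is assumed here to record their shared length (the alignment itself is described in the next section). A minimal sanity check, written as a hypothetical helper rather than anything shipped with the dataset:

```python
def check_alignment(sample: dict) -> None:
    """Hypothetical helper: verify that the annotation lists are parallel,
    assuming annotation_count records the number of annotations per scene."""
    n = sample["annotation_count"]
    for key in ("labels", "annotation_indices", "annotation_x", "annotation_y"):
        assert len(sample[key]) == n, f"{key}: {len(sample[key])} entries, expected {n}"
```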

## Annotation Format

Object annotations are stored as aligned lists.

Example:

```python
{
    "labels": [
        "Glass Beer Bottle",
        "Apples",
        "Noodles in Plastic Bag"
    ],
    "annotation_x": [1480, 1251, 1123],
    "annotation_y": [445, 822, 810]
}
```

Each `(annotation_x[i], annotation_y[i])` pair corresponds to the center position of `labels[i]` in the image.
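
Because the lists are aligned, annotations can be iterated with a single `zip`. A minimal sketch, assuming the dataset has already been loaded as shown in the Usage section below:

```python
sample = dataset[0]  # first scene
for label, x, y in zip(sample["labels"], sample["annotation_x"], sample["annotation_y"]):
    print(f"{label}: center at ({x}, {y})")
```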

## Usage

```python
from datasets import load_dataset

dataset = load_dataset(
    "Yannik019/llm_pack_detection",
    split="train"
)

print(dataset)
```
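
To verify the layout from the overview (5 scenes per object-count bucket), the `bucket` column can be tallied. A minimal sketch; the exact bucket string values are an assumption here:

```python
from collections import Counter

# Tally scenes per object-count bucket; with 8 buckets of 5 samples each,
# every count should come out to 5.
print(Counter(dataset["bucket"]))
```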

## Example

A full usage example is available here:

## Intended Use

This dataset is intended for:

- Object detection benchmarking
- Vision-language model evaluation
- Scene understanding research
- Tabletop grocery perception
- Referring object localization

## Citation

```bibtex
@misc{blei2025llmpackintuitivegroceryhandling,
      title={LLM-Pack: Intuitive Grocery Handling for Logistics Applications},
      author={Yannik Blei and Michael Krawez and Tobias Jülg and Pierre Krack and Florian Walter and Wolfram Burgard},
      year={2025},
      eprint={2503.08445},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2503.08445},
}
```