---
pretty_name: BagBuddy Dataset
size_categories:
- n<1K
task_categories:
- image-classification
- object-detection
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    dtype: string
  - name: bucket
    dtype: string
  - name: sample_id
    dtype: string
  - name: labels
    list: string
  - name: annotation_indices
    list: int64
  - name: annotation_x
    list: int64
  - name: annotation_y
    list: int64
  - name: annotation_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 12169999
    num_examples: 40
  download_size: 12169290
  dataset_size: 12169999
---
# LLM-Pack: Grocery Detection Dataset
A small object detection and scene understanding dataset containing tabletop grocery scenes with annotated item names and object locations.
The dataset consists of 40 images with varying object counts, designed for evaluating object detection, counting, and multimodal reasoning systems in cluttered grocery scenarios.
## Dataset Overview
- **Total scenes:** 40
- **Object counts per scene:** 6, 8, 10, 12, 14, 16, 18, or 20 items
- **Samples per object-count category:** 5
- **Annotations:** Object names + object center coordinates
- **Image resolution:** 1920×1080
- **Task type:** Object detection / scene understanding / counting
## Dataset Structure
Each sample contains:
```python
{
    "image": PIL.Image,
    "caption": str,
    "bucket": str,                  # number of items in the image
    "sample_id": str,
    "labels": List[str],
    "annotation_indices": List[int],
    "annotation_x": List[int],
    "annotation_y": List[int],
    "annotation_count": int
}
```
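Since the annotation fields are parallel lists whose lengths must match `annotation_count`, a quick consistency check can catch malformed samples. The sketch below uses a hypothetical in-memory sample (invented values, not taken from the dataset):

```python
# Hypothetical sample following the schema above (values are invented).
sample = {
    "caption": "Groceries on a tabletop",
    "bucket": "6",
    "sample_id": "scene_0001",
    "labels": ["Apples", "Milk Carton"],
    "annotation_indices": [0, 1],
    "annotation_x": [512, 1024],
    "annotation_y": [300, 640],
    "annotation_count": 2,
}

def check_sample(sample):
    """Verify that every annotation list has the declared length."""
    n = sample["annotation_count"]
    keys = ("labels", "annotation_indices", "annotation_x", "annotation_y")
    return all(len(sample[k]) == n for k in keys)

print(check_sample(sample))  # → True for a well-formed sample
```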
## Annotation Format
Object annotations are stored as index-aligned (parallel) lists.
Example:
```python
{
    "labels": [
        "Glass Beer Bottle",
        "Apples",
        "Noodles in Plastic Bag"
    ],
    "annotation_x": [1480, 1251, 1123],
    "annotation_y": [445, 822, 810]
}
```
Each `(annotation_x[i], annotation_y[i])` pair corresponds to the center position of `labels[i]` in the image.
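In practice it is often convenient to zip the aligned lists into `(label, (x, y))` pairs. A minimal helper, applied to the example values above:

```python
def annotations(sample):
    """Pair each label with its (x, y) center coordinate."""
    return list(zip(sample["labels"],
                    zip(sample["annotation_x"], sample["annotation_y"])))

# Example values from the annotation format above.
sample = {
    "labels": ["Glass Beer Bottle", "Apples", "Noodles in Plastic Bag"],
    "annotation_x": [1480, 1251, 1123],
    "annotation_y": [445, 822, 810],
}
print(annotations(sample))
# → [('Glass Beer Bottle', (1480, 445)), ('Apples', (1251, 822)),
#    ('Noodles in Plastic Bag', (1123, 810))]
```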
## Usage
```python
from datasets import load_dataset
dataset = load_dataset(
    "Yannik019/llm_pack_detection",
    split="train"
)
print(dataset)
```
## Example
A full usage example is available here:
- [example.py](https://huggingface.co/datasets/Yannik019/llm_pack_detection/blob/main/example.py)
## Intended Use
This dataset is intended for:
- Object detection benchmarking
- Vision-language model evaluation
- Scene understanding research
- Tabletop grocery perception
- Referring object localization
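For the referring-localization use case, the simplest possible baseline is an exact label lookup over the aligned lists. This is only a sketch (real evaluation would match free-form referring expressions, not exact strings), reusing the values from the annotation example above:

```python
def locate(sample, query):
    """Return center coordinates of every object whose label equals the query."""
    return [
        (x, y)
        for label, x, y in zip(sample["labels"],
                               sample["annotation_x"],
                               sample["annotation_y"])
        if label == query
    ]

# Values taken from the annotation example above.
sample = {
    "labels": ["Glass Beer Bottle", "Apples", "Noodles in Plastic Bag"],
    "annotation_x": [1480, 1251, 1123],
    "annotation_y": [445, 822, 810],
}
print(locate(sample, "Apples"))  # → [(1251, 822)]
```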
## Citation
```bibtex
@misc{blei2025llmpackintuitivegroceryhandling,
  title={LLM-Pack: Intuitive Grocery Handling for Logistics Applications},
  author={Yannik Blei and Michael Krawez and Tobias Jülg and Pierre Krack and Florian Walter and Wolfram Burgard},
  year={2025},
  eprint={2503.08445},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2503.08445},
}
```