# Horama/animal-200-detection
Subset of Horama/animal-200 (scraped wildlife images) annotated automatically with Grounding DINO to produce bounding boxes for animal detection.
## Origin
| Property | Value |
|---|---|
| Source dataset | Horama/animal_scrapped (aka Horama/inat-raw) |
| Image sources | DuckDuckGo, Wikimedia Commons, Wikipedia |
| Species coverage | ~200 animal species |
| Pre-classification | CLIP zero-shot (openai/clip-vit-large-patch14) |
| Filtering | alive=True AND watermark=False only |
## CLIP pre-classification
Before annotation, every image in Horama/animal_scrapped was classified with CLIP along three axes:
| Axis | Positive prompts | Negative prompts | Purpose |
|---|---|---|---|
| alive | "a photo of a living animal in nature/zoo" | dead, skeleton, drawing, taxidermy, statue, no animal… | Discard non-living depictions |
| watermark | large/intrusive watermark, stock watermark… | clean photo, subtle corner text… | Discard images with obstructive watermarks |
| distance | close-up / medium / far (3 prompt sets) | — | Control the distance distribution per split |
Only images classified as alive and without watermark are kept. The distance label is then used to subsample with distance-aware quotas:
| Split | Base images/species | Distance targets |
|---|---|---|
| train | 230 (+120 for 90 priority species) | 40 % close-up, 50 % medium, 10 % far |
| val | 25 | 33 % / 34 % / 33 % balanced |
| test | 25 | 33 % / 34 % / 33 % balanced |
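The distance-aware quotas above can be applied per species with logic along these lines. This is a minimal sketch, not the actual pipeline code: the function name, the `(image_id, distance_label)` input shape, and the seeding are assumptions.

```python
import random
from collections import defaultdict

def subsample_by_distance(images, base_quota, fracs, seed=0):
    """Pick up to `base_quota` images for one species, respecting
    per-distance fractions (e.g. {"close-up": 0.4, "medium": 0.5, "far": 0.1}).

    `images` is a list of (image_id, distance_label) pairs.
    """
    rng = random.Random(seed)
    by_dist = defaultdict(list)
    for img_id, dist in images:
        by_dist[dist].append(img_id)
    picked = []
    for dist, frac in fracs.items():
        pool = by_dist.get(dist, [])
        rng.shuffle(pool)                       # random sample within each distance bucket
        picked.extend(pool[: round(base_quota * frac)])
    return picked

# Hypothetical usage with the train-split targets from the table above
fracs = {"close-up": 0.40, "medium": 0.50, "far": 0.10}
imgs = [(i, "medium") for i in range(300)] + [(i + 300, "close-up") for i in range(200)]
sample = subsample_by_distance(imgs, base_quota=230, fracs=fracs)
```

When a bucket has fewer images than its quota (e.g. no "far" images at all, as here), the species simply ends up under quota, consistent with the limitation noted at the end of this card.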
Close-up images include synthetic crops generated from medium-distance detections (15 % padding around the bounding box).
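The 15 % padding around a medium-distance detection amounts to expanding the box by 15 % of its width/height on each side and clipping to the image bounds. A sketch (the function name and integer rounding are illustrative assumptions):

```python
def padded_crop_box(bbox, img_w, img_h, pad=0.15):
    """Expand a COCO [x, y, w, h] box by `pad` * size on every side,
    clipped to the image bounds; returns integer pixel [x0, y0, x1, y1]."""
    x, y, w, h = bbox
    x0 = max(0, int(x - pad * w))
    y0 = max(0, int(y - pad * h))
    x1 = min(img_w, int(x + w + pad * w))
    y1 = min(img_h, int(y + h + pad * h))
    return [x0, y0, x1, y1]

# e.g. a 200x100 detection at (50, 40) in a 1024x768 image
box = padded_crop_box([50, 40, 200, 100], 1024, 768)
```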
## Annotation pipeline
```
Horama/animal_scrapped
        │
        ▼
Filter (alive, no watermark)
        │
        ▼
Subsample per species / distance
        │
        ▼
Grounding DINO (IDEA-Research/grounding-dino-base)
  prompt: "animal ."
  confidence threshold: 0.25
        │
        ▼
COCO-format annotations (category 0 = animal)
        │
        ▼
Push to Horama/animal_annotated
```
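The last two steps, keeping detections above the 0.25 threshold and writing them as single-class COCO annotations, can be sketched as below. The `(score, x0, y0, x1, y1)` tuples and the annotation-ID scheme are illustrative assumptions, not the actual worker code:

```python
def detections_to_coco(detections, image_id, threshold=0.25, start_ann_id=1):
    """Convert (score, x0, y0, x1, y1) detections into COCO annotation
    dicts with category 0 = animal, dropping anything below `threshold`."""
    anns = []
    ann_id = start_ann_id
    for score, x0, y0, x1, y1 in detections:
        if score < threshold:
            continue  # box below the Grounding DINO confidence threshold
        anns.append({
            "id": ann_id,
            "image_id": image_id,
            "category_id": 0,                    # single class: animal
            "bbox": [x0, y0, x1 - x0, y1 - y0],  # xyxy -> COCO xywh
            "score": round(score, 3),
        })
        ann_id += 1
    return anns

dets = [(0.91, 10, 20, 110, 220), (0.18, 0, 0, 5, 5)]
anns = detections_to_coco(dets, image_id=1)
```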
The annotation was parallelized across 4 workers (pods), each processing
a disjoint partition of species. Results are stored under worker_0/ through worker_3/.
## Dataset structure
```
Horama/animal_annotated/
├── worker_0/
│   ├── annotations/
│   │   ├── train.json        # COCO JSON for this worker's train split
│   │   ├── val.json
│   │   └── test.json
│   ├── images/
│   │   ├── train/
│   │   │   ├── part_0000/    # sharded to stay < 10 000 files per dir
│   │   │   │   ├── Species_name_001.jpg
│   │   │   │   └── ...
│   │   │   └── part_0001/
│   │   ├── val/
│   │   │   └── part_0000/
│   │   └── test/
│   │       └── part_0000/
│   ├── dataset_stats.json
│   └── checkpoint.json
├── worker_1/
│   └── ...
├── worker_2/
│   └── ...
└── worker_3/
    └── ...
```
Each `worker_*/annotations/{split}.json` follows the standard COCO Object Detection format:

```json
{
  "images": [
    {"id": 1, "file_name": "part_0000/Species_name_001.jpg", "width": 1024, "height": 768}
  ],
  "annotations": [
    {"id": 1, "image_id": 1, "category_id": 0, "bbox": [x, y, w, h]}
  ],
  "categories": [
    {"id": 0, "name": "animal"}
  ]
}
```
- `bbox` format: `[x, y, width, height]` in absolute pixels (COCO convention)
- `category_id`: always `0` (animal); person annotations are added later during merge
- Images are sharded into `part_XXXX/` sub-directories (max 9 000 files each)
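The shard a file lands in can be derived from a running per-split file index; a one-line sketch under the assumption that shards are filled sequentially:

```python
def shard_path(file_index, shard_size=9000):
    """Return the part_XXXX/ sub-directory for the Nth file (0-based),
    keeping at most `shard_size` files per directory."""
    return f"part_{file_index // shard_size:04d}"

p_first = shard_path(0)      # first file of the split
p_next = shard_path(9000)    # first file that spills into the next shard
```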
## Workers
Four workers processed species in parallel on separate compute pods:
| Worker | Prefix | Species partition |
|---|---|---|
| 0 | `worker_0/` | Species 0, 4, 8, 12, … |
| 1 | `worker_1/` | Species 1, 5, 9, 13, … |
| 2 | `worker_2/` | Species 2, 6, 10, 14, … |
| 3 | `worker_3/` | Species 3, 7, 11, 15, … |
Species are assigned round-robin by alphabetical index, so each worker handles a roughly equal share of the ~200 species.
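The round-robin assignment reduces to a modulo over the alphabetically sorted species list; a sketch with a toy species list:

```python
def assign_workers(species, num_workers=4):
    """Round-robin: species at alphabetical index i goes to worker i % num_workers."""
    partitions = {w: [] for w in range(num_workers)}
    for i, name in enumerate(sorted(species)):
        partitions[i % num_workers].append(name)
    return partitions

parts = assign_workers([f"species_{i:03d}" for i in range(10)])
```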
## Model details
| Property | Value |
|---|---|
| Model | IDEA-Research/grounding-dino-base |
| Prompt | "animal ." |
| Confidence threshold | 0.25 |
| Framework | HuggingFace Transformers (AutoModelForZeroShotObjectDetection) |
| Inference | `torch.no_grad()`, best device (CUDA > MPS > CPU) |
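The "best device" rule is a simple priority check. A dependency-free sketch with injected availability flags; the actual code would query `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available, mps_available):
    """Select the inference device with priority CUDA > MPS > CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

device = pick_device(cuda_available=False, mps_available=True)
```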
## License
This dataset is released under the AGPL-3.0 license: it may be freely used, modified, and redistributed, provided that all derivative works remain open-source and publicly available under the same license.
If you need to use this dataset in a closed-source or commercial context, please contact us to discuss a commercial license arrangement.
## Limitations
- Annotations are automatically generated (no manual review): expect noise, missed detections on unusual poses/species, and occasional false positives.
- The confidence threshold of 0.25 favors recall over precision.
- Some species may have fewer images than the target quota if the source dataset lacked enough qualifying images (alive, no watermark).
## Citation
```bibtex
@misc{horama_animal200_detection_2026,
  title  = {Animal-200-Detection: Grounding DINO Annotated Wildlife Images (200 Species)},
  author = {Horama},
  year   = {2026},
  url    = {https://huggingface.co/datasets/Horama/animal-200-detection},
  note   = {Bounding box annotations generated with Grounding DINO for animal detection across 200 species}
}
```