
Dataset Summary

EC-VCR (Egyptian Culture Visual Commonsense Reasoning) is a multimodal benchmark designed to evaluate the cultural reasoning capabilities of Vision-Language Models (VLMs) within the specific context of Egypt.

Inspired by the methodology of GD-VCR (Geo-Diverse Visual Commonsense Reasoning), this dataset moves beyond simple recognition ("What is this?") to high-order cognitive reasoning ("Why is this person performing this action?" or "What social event is taking place?"). It addresses the "cultural blind spot" in current AI models by focusing on scenarios unique to Egyptian daily life, traditions, and social dynamics.

This dataset is structured to support Visual Question Answering (VQA) and Visual Commonsense Reasoning (VCR) tasks, providing rich annotations including bounding boxes, object labels, and segmentation masks.

Supported Tasks

  • Visual Commonsense Reasoning (VCR): Answering "Why" and "How" questions that require external cultural knowledge.
  • Visual Question Answering (VQA): Standard question-answering based on image content.
  • Object Detection: Leveraging the provided bounding boxes and object tags.
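
For the detection task, each released box stores five values, [x1, y1, x2, y2, score]. A minimal visualization sketch with Pillow (the helper name and the sample box values are illustrative, not part of the dataset):

```python
from PIL import Image, ImageDraw

def draw_boxes(image, boxes, labels):
    """Overlay [x1, y1, x2, y2, score] boxes with their class labels."""
    canvas = image.copy()
    draw = ImageDraw.Draw(canvas)
    for (x1, y1, x2, y2, score), label in zip(boxes, labels):
        draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
        draw.text((x1, max(0.0, y1 - 12)), f"{label} {score:.2f}", fill="red")
    return canvas

# Illustrative image size and box values only
img = Image.new("RGB", (1389, 792), "white")
out = draw_boxes(img, [[713.5, 147.5, 1376.1, 791.4, 0.999]], ["person"])
out.save("annotated.png")
```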

Dataset Structure

Data Instances

Each instance in the dataset represents a single question-answer pair associated with an image and its corresponding visual annotations.

{
  "img_fn": "EC-VCR/1.jpg",
  "metadata_fn": "EC-VCR/1.json",
  "width": 1920,
  "height": 1080,
  "boxes": [[100, 200, 350, 480, 0.98], [300, 400, 560, 690, 0.95]],
  "objects": ["person", "car"],
  "segms": [[[100, 200, 105, 205, ...]], [[300, 400, ...]]],
  "keywords": ["wedding", "street", "celebration"],
  "question_orig": "Why are [person1] and [person2] wearing matching outfits?",
  "question": ["Why", "are", [0], "and", [1], "wearing", "matching", "outfits", "?"],
  "answer_choices": [
    "They are participating in a local festival procession.",
    "They are security guards for the building.",
    "They are part of a wedding entourage.",
    "They are casually walking to work."
  ],
  "answer_orig": "They are part of a wedding entourage.",
  "answer_label": 2
}

Data Fields

  • img_fn: String. The relative path to the image file.
  • metadata_fn: String. The relative path to the source JSON containing segmentation and detailed metadata.
  • width: Integer. The width of the image in pixels.
  • height: Integer. The height of the image in pixels.
  • boxes: List of Lists. One bounding box per detected object, formatted as [x1, y1, x2, y2, score], where score is the detector's confidence.
  • objects: List of Strings. Class labels corresponding to the detected objects in boxes.
  • segms: List of Lists. Polygon points representing the segmentation masks for each object.
  • keywords: List of Strings. Categorical tags describing the scene context (e.g., "festival", "market").
  • question_orig: String. The natural-language question, containing tags like [person1] that reference specific bounding boxes.
  • question: List. The tokenized question, in which box references are replaced by integer indices (e.g. [0] for the first box) for model input.
  • answer_choices: List. The four candidate answers for the multiple-choice task.
  • answer_orig: String. The original correct answer, with [personN] tags.
  • answer_label: Integer. The zero-based index of the correct answer in answer_choices.
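
Tokens in question that reference boxes are integer lists (e.g. [0] points to the first entry of objects and boxes). The sketch below turns a tokenized question back into a readable string with [personN]-style tags; the helper name and its per-class numbering scheme are our assumptions, not part of the dataset:

```python
def render_question(tokens, objects):
    """Replace integer-list box references with display tags like
    '[person1]'; objects are numbered within their own class."""
    counts, tags = {}, []
    for name in objects:
        counts[name] = counts.get(name, 0) + 1
        tags.append(f"[{name}{counts[name]}]")
    return " ".join(tags[t[0]] if isinstance(t, list) else t
                    for t in tokens)

question = ["what", "is", [0], "and", [1], "doing", "?"]
objects = ["person", "person", "person", "backpack", "chair"]
print(render_question(question, objects))
# what is [person1] and [person2] doing ?
```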

Dataset Creation

Curation Rationale

Standard VCR datasets are heavily skewed toward Western contexts. As highlighted by the GD-VCR paper, models trained on these datasets fail to generalize to non-Western regions. EC-VCR fills this gap for Egypt, covering local customs, street scenes, and social interactions that global models often misinterpret.

Source Data

The images were collected and curated from movies, documentaries, and other online sources.


Annotation Process

The dataset follows a VCR-style annotation pipeline:

  1. Object Detection: Key objects are localized with bounding boxes and segmentation masks generated using Detectron2.
  2. Question Generation: Questions are designed to be high-order, requiring the model to combine visual cues (detected objects) with implicit cultural knowledge.

Usage

Loading the Dataset

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("CulTex-VLM/EC-VCR")

# Access an example
example = dataset['train'][0]
image = example['img_fn']       # decoded as a PIL image
question = example['question']
boxes = example['boxes']

print(f"Question: {question}")
image.show()
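
Depending on the export, nested annotation columns may arrive as JSON-encoded strings rather than Python lists. A defensive parsing sketch (the helper name is ours, and the sample row is illustrative):

```python
import json

def parse_annotations(example):
    """Decode annotation columns if they arrive as JSON strings
    (the Parquet auto-conversion may serialize nested lists)."""
    out = dict(example)
    for key in ("boxes", "objects", "segms", "keywords"):
        if isinstance(out.get(key), str):
            out[key] = json.loads(out[key])
    return out

row = {"boxes": "[[10.6, 58.8, 767.5, 847.0, 0.99]]", "objects": '["person"]'}
parsed = parse_annotations(row)
print(parsed["boxes"][0][4])  # 0.99
```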

Benchmarking & Evaluation

EC-VCR is designed to test cultural alignment. High accuracy on this dataset indicates that a model combines:

  1. Visual Recognition: Identifying local objects (e.g., a fanoos lantern).
  2. Social Reasoning: Understanding the intent and context behind actions in an Egyptian setting (e.g., distinct gestures, seating arrangements, or ceremonial traditions).
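
Scoring follows the standard four-way multiple-choice protocol: a prediction counts as correct when its choice index equals answer_label. A minimal accuracy sketch (the prediction list and gold labels are illustrative):

```python
def accuracy(predictions, examples):
    """Fraction of examples where the predicted choice index
    equals the gold `answer_label`."""
    correct = sum(int(p == ex["answer_label"])
                  for p, ex in zip(predictions, examples))
    return correct / len(examples)

examples = [{"answer_label": 2}, {"answer_label": 1}, {"answer_label": 0}]
print(accuracy([2, 3, 0], examples))  # 2 of 3 correct
```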

Citation

If you use this dataset, please cite the following work:

@misc{gamil2025ecvcr,
  author = {Mohamed Gamil and Abdelrahman Elsayed and Abdelrahman Lila and Ahmed Gad and Hesham Abdelgawad and Mohamed Aref and Ahmed Fares},
  title = {EC-VCR: A Visual Commonsense Reasoning Benchmark for Egyptian Culture},
  year = {2026},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/CulTex-VLM/EG-VCR}}
}

Methodology inspired by:

@article{yin2021broaden,
  title={Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning},
  author={Yin, Da and Li, Liunian Harold and Hu, Ziniu and Peng, Nanyun and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2109.06860},
  year={2021}
}