| --- |
| license: mit |
| language: |
| - en |
| pretty_name: TactileEval |
| size_categories: |
| - 10K<n<100K |
| --- |
| |
| # TactileEval Dataset |
|
|
| TactileEval decomposes tactile-graphic quality into five BANA-aligned quality |
| dimensions (view, parts, background, texture, line quality) across six object |
| families, yielding 30 task families and 14,095 option-level annotations. Each |
record pairs a natural photo with a tactile drawing and targets a specific
quality option, carrying a majority-vote label, vote counts, and provenance
metadata.
|
|
| ## Repository layout |
|
|
| ``` |
| images/ |
| <Family>/<Object>/<Natural|Tactile>/*.{jpg,png} |
| processed/ |
| records_full.jsonl |
| splits/{train,val,test}.jsonl |
| family_splits/F{1..6}/{train,val,test}.jsonl |
| dataset_summary.{csv,json} |
| ``` |
|
|
| JSONL fields: |
|
|
| | Field | Description | |
| |-------|-------------| |
| | `pair_id` | `NaturalRel::TactileRel` identifier | |
| | `task_family` | Task code (e.g., F1QL) | |
| | `option_id` | Option identifier (e.g., `too_thick`) | |
| | `option_description` | Plain-language description | |
| | `natural_image`, `tactile_image` | Paths relative to `images/` | |
| | `votes_total`, `positives`, `negatives` | Vote stats | |
| | `label` | Majority label (0/1) | |
| `vote_fraction` | `positives / votes_total` |
| `status_counts` | Counts of approved/submitted ballots |
|
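As a rough illustration of the schema, the sketch below constructs and validates one record. The field names follow the table above, but the values and the 0.5 majority-vote threshold are illustrative assumptions, not taken from the dataset:

```python
import json

# Hypothetical record illustrating the schema; values are made up.
record_line = json.dumps({
    "pair_id": "Animals/Cat/Natural/01.jpg::Animals/Cat/Tactile/01.png",
    "task_family": "F1QL",
    "option_id": "too_thick",
    "option_description": "Lines are too thick",
    "natural_image": "Animals/Cat/Natural/01.jpg",
    "tactile_image": "Animals/Cat/Tactile/01.png",
    "votes_total": 5,
    "positives": 4,
    "negatives": 1,
    "label": 1,
    "vote_fraction": 0.8,
    "status_counts": {"approved": 5, "submitted": 0},
})

record = json.loads(record_line)

# Sanity checks implied by the field definitions.
assert record["positives"] + record["negatives"] == record["votes_total"]
assert record["vote_fraction"] == record["positives"] / record["votes_total"]
# Assumed decision rule: label is 1 when a majority of votes are positive.
assert record["label"] == int(record["vote_fraction"] >= 0.5)
```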
|
| ## Splits |
|
|
| - `splits/train.jsonl` (11,348 records) |
| - `splits/val.jsonl` (1,341 records) |
| - `splits/test.jsonl` (1,406 records) |
|
|
| `family_splits/` mirrors these splits per object family. |
|
|
| ## Usage |
|
|
| ```python |
| from datasets import load_dataset |
| |
| ds = load_dataset("Adnank1998/TactileEval", name="full", split="train") |
| family = load_dataset("Adnank1998/TactileEval", name="family_f1", split="train") |
| example = ds[0] |
| print(example["natural_image"], example["label"]) |
| ``` |
|
|
| Images are stored under `images/`; join the relative path returned in |
| `natural_image`/`tactile_image` with the local dataset root to load the files. |
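A minimal sketch of that join, assuming the repository has been downloaded to a local directory (the root path and record values here are hypothetical):

```python
from pathlib import Path

def resolve_image_paths(example, dataset_root):
    """Join the relative image paths in a record with the local dataset root."""
    root = Path(dataset_root) / "images"
    return root / example["natural_image"], root / example["tactile_image"]

# Made-up record and root, for illustration only.
natural, tactile = resolve_image_paths(
    {"natural_image": "Animals/Cat/Natural/01.jpg",
     "tactile_image": "Animals/Cat/Tactile/01.png"},
    "/data/TactileEval",
)
```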
|
|
| Available configurations: |
| - `full`: All families (default). |
| - `family_f1` through `family_f6`: Per-family subsets matching the paper splits. |
|
|
| Each split lives in `processed/`, enabling the Hugging Face dataset viewer via |
| the bundled `dataset_infos.json`. |
|
|
| ## Citation |
|
|
| ``` |
| @misc{khan2026tactileevalstepautomatedfinegrained, |
| title={TactileEval: A Step Towards Automated Fine-Grained Evaluation and Editing of Tactile Graphics}, |
| author={Adnan Khan and Abbas Akkasi and Majid Komeili}, |
| year={2026}, |
| eprint={2604.19829}, |
| archivePrefix={arXiv}, |
| primaryClass={cs.CV}, |
| url={https://arxiv.org/abs/2604.19829} |
| } |
| ``` |
|
|
| ## Contact |
|
|
| Questions? Open an issue or email adnankhan5@cmail.carleton.ca. |
|
|
| ## Acknowledgements |
|
|
| This work was supported in part by MITACS and the Digital Alliance of Canada. |
| We thank the student volunteers at the Intelligent Machines Lab (iML), Carleton |
| University, for their contributions, and Joshua Olojede and Hoda Vafaeesefat |
| for their help with the AMT annotation environment. |
|
|