---
license: cc-by-4.0
task_categories:
  - visual-question-answering
  - image-classification
language:
  - en
tags:
  - plant-disease
  - plant-pathology
  - agriculture
  - multi-turn-vqa
  - chain-of-inquiry
  - multimodal
  - benchmark
  - medical-imaging
  - biology
  - reasoning
pretty_name: "PlantInquiryVQA: Thinking Like a Botanist"
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train.csv
      - split: test
        path: data/test.csv
dataset_info:
  features:
    - name: image_id
      dtype: string
    - name: crop
      dtype: string
    - name: disease
      dtype: string
    - name: category
      dtype:
        class_label:
          names:
            - disease
            - healthy
            - insect_damage
            - senescence
    - name: severity
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: question_category
      dtype: string
    - name: visual_grounding
      dtype: string
    - name: question_number
      dtype: float64
    - name: dataset_source
      dtype: string
  splits:
    - name: train
      num_examples: 82800
    - name: test
      num_examples: 55268
  download_size: 3700000000
  dataset_size: 3700000000
---

# PlantInquiryVQA — Thinking Like a Botanist

Benchmark and framework for multi-turn, intent-driven visual question answering in plant pathology.

Accepted at ACL 2026 Findings.



## Overview

PlantInquiryVQA formalises diagnostic reasoning in plant pathology as a Chain-of-Inquiry (CoI) — an ordered sequence of visually grounded questions that adapts to the observed disease severity and to the expert's epistemic intent (Diagnosis / Prognosis / Management).

The benchmark evaluates whether modern Multimodal Large Language Models (MLLMs) can reason like a botanist, not just classify a leaf. Key findings from benchmarking 18 state-of-the-art models:

- All 18 MLLMs describe symptoms competently but fail at reliable clinical reasoning (top Clinical Utility score = 0.188 / 1.0)
- Structured question-guided inquiry improves diagnostic accuracy by ~48% over direct diagnosis
- Structured CoI reduces hallucination significantly compared to free-form dialogue

## Dataset at a Glance

| Attribute | Value |
|---|---|
| Leaf images | 24,950 |
| QA pairs | 138,068 |
| Train / Test split | 82,800 / 55,268 |
| Crop species | 34 |
| Disease categories | 116 |
| Image categories | disease · healthy · insect_damage · senescence |
| Severity levels | MILD · MODERATE · SEVERE |
| Source datasets | 39 component datasets (see paper Appendix A.1) |
| Image corpus size | ~3.5 GB |

### Covered crop species (34 total)

Apple · Arabian Jasmine · Bitter Gourd · Blueberry · Bottle Gourd · Cauliflower · Cherry · Corn · Cotton · Cucumber · Eggplant/Brinjal · Grape · Guava · Hibiscus · Jackfruit · Lemon · Litchi · Mango · Orange · Papaya · Peach · Peas · Pepper · Pepper Bell · Potato · Raspberry · Rice · Rubber · Soybean · Squash · Strawberry · Sunflower · Tea · Tomato


## Quick Load

```python
from datasets import load_dataset

# Load train / test splits (metadata only — no images)
ds = load_dataset("SyedNazmusSakib/PlantInquiryVQA")
train = ds["train"]
test  = ds["test"]

print(train[0])
# {
#   'image_id': 'f650d82227e534b8.jpg',
#   'crop': 'Bottle Gourd',
#   'disease': 'healthy',
#   'category': 'healthy',
#   'severity': '',
#   'question': 'What crop is shown in this image?',
#   'answer': 'This leaf is from a Bottle Gourd plant ...',
#   'question_category': 'crop_identification',
#   'visual_grounding': '',
#   'question_number': 1.0,
#   'dataset_source': 'non_disease'
# }
```
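Each row is a single QA turn, so standard `datasets` filtering applies directly. A minimal sketch for slicing the train split by the columns documented in the schema below (the crop and turn values come from the crop list and the sample above):

```python
# Keep only Tomato rows
tomato = train.filter(lambda row: row["crop"] == "Tomato")

# Keep only opening turns — the start of each Chain-of-Inquiry
first_turns = train.filter(lambda row: row["question_number"] == 1.0)

print(len(tomato), len(first_turns))
```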

### Load with images

Images live in the images/ folder of this repository, named by image_id.

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from PIL import Image

ds = load_dataset("SyedNazmusSakib/PlantInquiryVQA", split="test")

def attach_image(row):
    img_path = hf_hub_download(
        repo_id="SyedNazmusSakib/PlantInquiryVQA",
        repo_type="dataset",
        filename=f"images/{row['image_id']}",
    )
    row["image"] = Image.open(img_path).convert("RGB")
    return row

# Attach images on demand (lazy)
sample = attach_image(ds[0])
```
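Note that `hf_hub_download` caches each file locally, so repeated lookups of the same `image_id` are served from disk rather than re-downloaded; for bulk access, fetch the corpus once as shown next.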

### Download the full image corpus locally

```bash
# Install helper
pip install huggingface_hub

# Download everything to ./PlantInquiryVQA/
python -c "
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id='SyedNazmusSakib/PlantInquiryVQA',
    repo_type='dataset',
    local_dir='./PlantInquiryVQA',
    allow_patterns=['images/*', 'data/*.csv', 'visual_cues/*', 'diseases_knowledge_base/*'],
)
"
```

Or use the provided script (after cloning the GitHub repo):

```bash
python scripts/download_images.py
```

## Repository Structure

```text
SyedNazmusSakib/PlantInquiryVQA (HuggingFace)
├── README.md                         ← this file (dataset card)
├── CITATION.cff                      ← machine-readable citation
├── requirements.txt
│
├── data/
│   ├── train.csv                     ← 82,800 QA rows  (80% split)
│   └── test.csv                      ← 55,268 QA rows  (20% split)
│
├── images/                           ← 24,950 leaf JPEGs (~3.5 GB)
│   ├── 00009faac7cf68de.jpg
│   └── ...
│
├── diseases_knowledge_base/
│   ├── all_cards.jsonl               ← 116-disease expert knowledge cards
│   └── <crop>/<disease>.json
│
└── visual_cues/
    └── visual_cues.json              ← 24,950 × expert-verified visual cues
```

## CSV Schema

| Column | Type | Description |
|---|---|---|
| `image_id` | string | Filename of the leaf image (key into `images/`) |
| `crop` | string | Crop species (34 classes) |
| `disease` | string | Disease name or `healthy` |
| `category` | enum | `disease` · `healthy` · `insect_damage` · `senescence` |
| `severity` | enum | `MILD` · `MODERATE` · `SEVERE` (empty for healthy) |
| `question` | string | The CoI question posed to the model |
| `answer` | string | Ground-truth expert answer |
| `question_category` | string | Semantic category of the question |
| `visual_grounding` | string | Expert-verified visual cues referenced |
| `question_number` | float | Position in the CoI chain (1 = first) |
| `dataset_source` | string | `disease_only` or `non_disease` |
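Because every image contributes several ordered turns, a full Chain-of-Inquiry can be reconstructed with a group-and-sort over this schema. A minimal pandas sketch (the grouping code is ours, not the paper's tooling):

```python
import pandas as pd

df = pd.read_csv("data/train.csv")

# One ordered list of (question, answer) turns per image
chains = (
    df.sort_values(["image_id", "question_number"])
      .groupby("image_id")[["question", "answer"]]
      .apply(lambda g: list(g.itertuples(index=False, name=None)))
)

image_id, chain = next(iter(chains.items()))
for i, (q, a) in enumerate(chain, start=1):
    print(f"Q{i}: {q}")
```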

## Evaluation Protocols

| Protocol | History given to model | Purpose |
|---|---|---|
| Guided (Test 1, main) | Ground-truth answers | Upper bound — isolates per-turn reasoning |
| Scaffolded (Test 2) | None | Lower bound — raw single-turn capability |
| Cascading (ablation) | Model's own prior answers | Realistic deployment — compounding errors |
| Unconstrained (ablation) | No CoI templates | Worst case — free-form dialogue |
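To make the history conditions concrete: the Guided and Cascading loops differ only in what gets appended to the conversation history. A sketch, where `ask_model` is a hypothetical stand-in for your MLLM client (not part of this repository):

```python
def run_chain(image, chain, ask_model, mode="guided"):
    """Run one Chain-of-Inquiry; `chain` is a list of (question, gt_answer) turns.

    mode="guided":    history holds ground-truth answers (Test 1, upper bound).
    mode="cascading": history holds the model's own prior answers (ablation).
    """
    history, predictions = [], []
    for question, gt_answer in chain:
        pred = ask_model(image, history, question)  # hypothetical MLLM call
        predictions.append(pred)
        history.append((question, gt_answer if mode == "guided" else pred))
    return predictions
```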

## Benchmark Results (18 MLLMs)

Best-in-class per metric (full table in paper Table 2):

| Metric | Leader | Score |
|---|---|---|
| Disease Accuracy (S_dis) | Gemini-3-Flash | 0.444 |
| Clinical Utility (S_clin) | Llama-3.2-90B-Vision | 0.185 |
| Safety Score (S_safe) | Llama-3.2-90B-Vision | 0.214 |
| Visual Grounding (S_vg) | Qwen-VL-Plus | 0.508 |
| Explainability Efficiency (E) | Grok-4.1-Fast | 5.20 |

Models benchmarked include: Gemini 3 Flash/Pro, Claude (via OpenRouter), GPT-4o, Qwen3-VL (8B/32B/235B), Qwen2.5-VL (7B/32B/72B), LLaMA-3.2 (11B/90B), LLaMA-4 Maverick, Grok-4.1-Fast, Pixtral-12B, Mistral Medium 3.1, Mistral Small 24B, Ministral (3B/8B), Nemotron-12B, Phi-4-Multimodal, Seed-1.6-Flash.


## Domain-Specific Metrics

Defined in paper Appendix A.1:

- S_dis — Disease Identification Score (strict entity match)
- S_safe — Safety Score (false-reassurance penalty)
- S_clin = 0.5·S_dis + 0.3·S_act − 0.2·(1 − S_safe) — composite clinical utility (see the sketch after this list)
- S_vg — Visual Grounding recall of expert-verified cues
- E — Explainability Efficiency (verified cues per 100 words)
- B — Prevalence Bias (Eq. 7)
- F — Cross-Class Fairness (Eq. 8)
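As a sanity check, the S_clin composite above transcribes directly into code (component scores assumed to lie in [0, 1]; S_act is the actionability term defined in the paper):

```python
def clinical_utility(s_dis: float, s_act: float, s_safe: float) -> float:
    """S_clin = 0.5·S_dis + 0.3·S_act − 0.2·(1 − S_safe)."""
    return 0.5 * s_dis + 0.3 * s_act - 0.2 * (1.0 - s_safe)

# Best observed S_dis (0.444) paired with ideal actionability and safety
print(clinical_utility(s_dis=0.444, s_act=1.0, s_safe=1.0))  # 0.522
```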

## Supplementary Files

### `diseases_knowledge_base/`

Expert disease cards for each of the 116 disease categories across 34 crops. Each card contains:

- Disease description and causal agent
- Visual diagnostic criteria
- Severity progression markers
- Management recommendations
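A minimal sketch for loading the cards into memory; the per-card field names are not documented in this card, so inspect one record before indexing on them:

```python
import json

# One JSON object per line in all_cards.jsonl
with open("diseases_knowledge_base/all_cards.jsonl") as f:
    cards = [json.loads(line) for line in f if line.strip()]

print(len(cards))        # expect 116 disease cards
print(cards[0].keys())   # inspect the actual schema first
```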

### `visual_cues/visual_cues.json`

24,950-entry lookup table mapping each image_id to expert-verified visual cues used for visual grounding evaluation.
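A simple cue-recall check can be sketched against this file; the JSON layout is assumed here to map each `image_id` to a list of cue strings, and the paper's actual matching rule may be more sophisticated than substring matching:

```python
import json

with open("visual_cues/visual_cues.json") as f:
    cues = json.load(f)  # assumed layout: {image_id: [cue, ...], ...}

def cue_recall(image_id: str, model_answer: str) -> float:
    """Fraction of expert-verified cues mentioned in the answer."""
    expected = cues.get(image_id, [])
    if not expected:
        return 0.0
    hits = sum(c.lower() in model_answer.lower() for c in expected)
    return hits / len(expected)
```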


## Reproducing Results

```bash
git clone https://github.com/SyedNazmusSakib/PlantInquiryVQA
cd PlantInquiryVQA
pip install -r requirements.txt
cp .env.example .env    # fill in API keys

# Download images from this HF repo
python scripts/download_images.py

# Run evaluation (Guided setting)
python eval/test_1_gemini3_flash.py

# Aggregate all results
python eval/compute_cascading_all_models.py
python eval/compute_fairness_all_models.py
```

## Licence

| Component | Licence |
|---|---|
| Code & eval scripts | MIT |
| Dataset annotations & QA pairs | CC BY 4.0 |
| Source images | Retain upstream licences (see paper Appendix A.1 for all 39 component dataset licences — most CC BY 4.0; some CC0 / CC BY-NC 3.0) |

## Citation

If you use PlantInquiryVQA, please cite:

```bibtex
@article{sakib2026thinking,
  title={Thinking Like a Botanist: Challenging Multimodal Language Models with Intent-Driven Chain-of-Inquiry},
  author={Sakib, Syed Nazmus and Haque, Nafiul and Amin, Shahrear Bin and Abdullah, Hasan Muhammad and Hasan, Md Mehedi and Hossain, Mohammad Zabed and Arman, Shifat E},
  journal={arXiv preprint arXiv:2604.20983},
  year={2026}
}
```

Contact: Open an issue on GitHub or reach out to the corresponding author listed in the paper.

We thank Ali Akbar for large-scale data collection and Abdullah Shahriar for creating the figures and diagrams.