---
license: apache-2.0
language:
  - en
size_categories:
  - n<1K
---

# Dataset Card for MeowBench

MeowBench is a high-fidelity, expert-verified quad-modal benchmark designed to evaluate Multimodal Large Language Models (MLLMs) on feline intention decoding. It is the official evaluation suite for the Meow-Omni 1 model.

## Dataset Summary

MeowBench targets the challenge of "semantic aliasing" in animal behaviour, where outwardly similar displays can correspond to different underlying intents. It provides a rigorous testing ground for determining whether models can move beyond superficial pattern matching to genuine latent-state reasoning by correlating external observational data (video/audio) with internal biological markers (ECG/EEG/IMU).

## Uses

### Direct Use

- Benchmarking Multimodal Large Language Models on animal behaviour interpretation.
- Evaluating a model's ability to ingest and reason over video, audio, and high-frequency biological time-series data.

### Out-of-Scope Use

- Real-world veterinary diagnosis without human oversight.
- Direct application to non-feline species (unless testing for zero-shot transfer capabilities).

## Dataset Structure

Each sample in MeowBench is structured as a Multiple Choice Question (MCQ); see the loading sketch after this list.

- Input: A synchronized (or intent-matched) triplet of Video, Audio, and Time-Series data.
- Question: A natural language prompt asking for the animal's underlying intent (e.g., "Based on the provided biometrics and visual cues, is the subject exhibiting play-aggression or predatory intent?").
- Options: One ground-truth intention label and three expert-curated distractors sampled from the broader intention collection.
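A minimal sketch of iterating over the benchmark, assuming it is loaded as a Hugging Face dataset. The split name, field names (`video`, `audio`, `timeseries`, `question`, `options`, `answer`), and the prediction function are illustrative assumptions, not the actual schema; consult the dataset files for the real layout.

```python
import random
from datasets import load_dataset

# Illustrative split and field names; check the dataset files for the real schema.
ds = load_dataset("smgjch/MeowBench", split="test")

def predict(sample) -> int:
    """Placeholder for your MLLM inference call: given the video, audio,
    time-series, question, and options, return the index of the chosen option.
    Here it just guesses randomly (the ~25% floor on 4-way MCQs)."""
    return random.randrange(len(sample["options"]))

correct = sum(int(predict(s) == s["answer"]) for s in ds)
print(f"Accuracy: {correct / len(ds):.2%}")
```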

## Construction & Verification

1. Next-Behaviour Prediction (NBP) Logic: Intent labels are derived from the behaviour immediately following a temporal transition point in the raw data.
2. Intent-Matched Synthesis: Because naturally synchronized quad-modal recordings are scarce, samples were synthesized by matching unimodal data that share the same intent (a sketch of steps 1 and 2 follows this list).
3. Ethologist Audit: To ensure biological and biomechanical plausibility, eight professional feline ethologists manually reviewed every sample. Only samples in which the biometric acceleration and heart-rate patterns were consistent with the audio-visual displays were retained.
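The following sketch illustrates how steps 1 and 2 could work in principle. The `Segment` type, field names, and pooling structure are hypothetical stand-ins, not the project's actual pipeline.

```python
import random
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds
    end: float
    behaviour: str    # annotated behaviour label for this span

def next_behaviour_label(segments: list[Segment], transition_t: float) -> str:
    """NBP logic (sketch): the intent label is the behaviour that begins
    immediately after the temporal transition point."""
    following = sorted((s for s in segments if s.start >= transition_t),
                       key=lambda s: s.start)
    return following[0].behaviour

def synthesize_sample(by_intent: dict[str, dict[str, list]], intent: str) -> dict:
    """Intent-matched synthesis (sketch): pair independently recorded video,
    audio, and time-series clips that were all assigned the same intent."""
    pools = by_intent[intent]
    return {
        "intent": intent,
        "video": random.choice(pools["video"]),
        "audio": random.choice(pools["audio"]),
        "timeseries": random.choice(pools["timeseries"]),
    }
```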

## Leaderboard

| Model | Vision | Audio | TS | Accuracy |
| --- | :---: | :---: | :---: | ---: |
| Acoustic SOTA (Ntalampiras et al., SVM/HMM) | | ✓ | | 36.86% |
| TS SOTA (Chen et al., 1D-CNN + LSTM on IMU) | | | ✓ | 48.98% |
| Video SOTA (Qwen3.5-122B-A10B, zero-shot) | ✓ | | | 61.95% |
| Qwen3.5-Omni-Plus (V + A) | ✓ | ✓ | | 65.36% |
| Qwen3.5-Omni-Plus (V + TS†) | ✓ | | ✓† | 66.21% |
| Qwen3.5-Omni-Plus (TS† + A) | | ✓ | ✓† | 42.15% |
| Qwen3.5-Omni-Plus (V + A + TS†) | ✓ | ✓ | ✓† | 66.89% |
| Meow-Omni 1 (Ours) | ✓ | ✓ | ✓ | 71.16% |

† Qwen3.5-Omni-Plus does not accept raw time-series as a native modality; TS data was injected as a structured textual summary (array statistics per channel). Meow-Omni 1 processes raw TS natively.
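As an illustration of the TS† injection described above, a per-channel textual summary could be built roughly as below. The specific statistics, channel names, and wording are assumptions for illustration, not the exact recipe used to produce the reported numbers.

```python
import numpy as np

def summarize_timeseries(ts: dict[str, np.ndarray]) -> str:
    """Turn raw per-channel signals (e.g. ECG, EEG, IMU axes) into a compact
    textual summary that can be appended to a text-only prompt."""
    lines = []
    for channel, values in ts.items():
        lines.append(
            f"{channel}: mean={values.mean():.3f}, std={values.std():.3f}, "
            f"min={values.min():.3f}, max={values.max():.3f}, n={values.size}"
        )
    return "Biometric time-series summary:\n" + "\n".join(lines)

# Example usage with dummy signals (real channels and sampling rates will differ).
ts = {"ecg": np.random.randn(5000), "imu_ax": np.random.randn(2000)}
print(summarize_timeseries(ts))
```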

## 🔗 The Meow-Omni Ecosystem

To facilitate reproducibility and further research in computational ethology, we have released the following components:

- Main Model: Meow-Omni 1 — The full fine-tuned quad-modal MLLM.
- Base Model: Meow-Omni 1-Base — The model weights prior to specific intent-alignment.
- Training Dataset: Meow-10K — The synchronized 10k-sample dataset used for training.

## 📝 Citation

If you find our work helpful, please cite us using the following BibTeX entry:

```bibtex
@misc{hu2026meowomni1multimodallarge,
      title={Meow-Omni 1: A Multimodal Large Language Model for Feline Ethology},
      author={Jucheng Hu and Zhangquan Chen and Yulin Chen and Chengjie Hong and Liang Zhou and Tairan Wang and Sifei Li and Giulio Zhu and Feng Zhou and Yiheng Zeng and Suorong Yang and Dongzhan Zhou},
      year={2026},
      eprint={2605.09152},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2605.09152},
}
```