---
license: apache-2.0
language:
- en
size_categories:
- n<1K
---

# Dataset Card for MeowBench

MeowBench is a high-fidelity, expert-verified quad-modal benchmark designed to evaluate Multimodal Large Language Models (MLLMs) on feline intention decoding. It is the official evaluation suite for the **Meow-Omni 1** model.

### Dataset Summary

MeowBench is designed to address the challenge of **"semantic aliasing"** in animal behaviour, where outwardly similar displays can stem from different underlying intents. It provides a rigorous testing ground for determining whether models can move beyond superficial pattern matching to genuine latent-state reasoning by correlating external observational data (video/audio) with internal biological markers (ECG/EEG/IMU).

## Uses

### Direct Use
- Benchmarking Multimodal Large Language Models on animal behaviour interpretation.
- Evaluating a model's ability to ingest and reason over video, audio, and high-frequency biological time-series data (a minimal loading-and-scoring sketch follows below).
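
As a minimal sketch of the benchmarking workflow, the snippet below loads the dataset with the 🤗 `datasets` library and scores MCQ accuracy. The repository id, split name, and field names (`question`, `options`, `answer`) are assumptions made for illustration rather than a documented schema, and `predict` is a placeholder for an actual MLLM call.

```python
# Hypothetical benchmarking sketch: repo id, split, and field names are assumed.
from datasets import load_dataset


def predict(sample: dict) -> str:
    """Placeholder for a real MLLM call; should return one of sample['options']."""
    return sample["options"][0]


ds = load_dataset("smgjch/MeowBench", split="test")  # repo id / split assumed

correct = sum(predict(s) == s["answer"] for s in ds)
print(f"MCQ accuracy: {correct / len(ds):.3f}")
```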

### Out-of-Scope Use
- Real-world veterinary diagnosis without human oversight.
- Direct application to non-feline species (unless testing for zero-shot transfer capabilities).

## Dataset Structure

Each sample in MeowBench is structured as a **Multiple Choice Question (MCQ)** (an illustrative record is sketched after the list):
- **Input:** A synchronized (or intent-matched) triplet of Video, Audio, and Time-Series data.
- **Question:** A natural language prompt asking for the animal's underlying intent (e.g., "Based on the provided biometrics and visual cues, is the subject exhibiting play-aggression or predatory intent?").
- **Options:** One ground-truth intention label and three expert-curated distractors sampled from the broader intention collection.
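
A purely hypothetical record following this structure is shown below; the actual field names, file formats, and labels are not specified by this card.

```python
# Illustrative record only; field names, paths, and labels are not the official schema.
sample = {
    "video": "clips/subject_0142.mp4",                 # visual observation
    "audio": "vocalisations/subject_0142.wav",         # synchronized or intent-matched audio
    "timeseries": "biometrics/subject_0142.parquet",   # ECG/EEG/IMU channels
    "question": (
        "Based on the provided biometrics and visual cues, is the subject "
        "exhibiting play-aggression or predatory intent?"
    ),
    "options": [
        "play-aggression",      # ground-truth intention label
        "predatory intent",     # expert-curated distractor
        "territorial display",  # expert-curated distractor
        "fear response",        # expert-curated distractor
    ],
    "answer": "play-aggression",
}
```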

### Construction & Verification
1. **Next-Behaviour Prediction (NBP) Logic:** Intent labels are derived from the behaviour immediately following a temporal transition point in the raw data (a simplified sketch follows after this list).
2. **Intent-Matched Synthesis:** Due to the scarcity of naturally synchronized quad-modal data, samples were synthesized by matching unimodal data sharing the same intent.
3. **Ethologist Audit:** To ensure biological and biomechanical plausibility, **eight professional feline ethologists** manually reviewed every sample. Only samples where the biometric acceleration and heart-rate patterns were consistent with the audio-visual displays were retained.
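
A simplified sketch of the NBP labeling rule from step 1, assuming hypothetical behaviour-segment annotations with start times and intent tags (these structures are illustrative and not part of the released data):

```python
# Illustrative NBP labeling sketch; Segment and its fields are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Segment:
    start: float   # seconds from recording start
    end: float
    intent: str    # ethogram label, e.g. "play-aggression"


def nbp_label(segments: List[Segment], transition_t: float) -> Optional[str]:
    """Return the intent of the first behaviour segment beginning after the
    temporal transition point, i.e. the Next-Behaviour Prediction label."""
    following = [s for s in segments if s.start >= transition_t]
    return min(following, key=lambda s: s.start).intent if following else None
```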

## 🔗 The Meow-Omni Ecosystem

To facilitate reproducibility and further research in computational ethology, we have released the following components:
* **Main Model:** [Meow-Omni 1](https://huggingface.co/smgjch/Meow-Omni-1) — The full fine-tuned quad-modal MLLM.
* **Base Model:** [Meow-Omni 1-Base](https://huggingface.co/smgjch/Meow-Omni-1-Base) — The model weights prior to specific intent-alignment.
* **Training Dataset:** [Meow-10K](https://huggingface.co/datasets/smgjch/meow-10k) — The synchronized 10k sample dataset used for training.

## Citation

Coming Soon.