---
license: apache-2.0
language:
- en
tags:
- evaluation
- video
- multimodal
---
# CleverHans-Evaluation
Scripts and a small in-domain test set to evaluate Qwen3-Omni variants on **Video-MME**, **LVBench**, and **audio–video sync** (custom JSONL).
## What’s in the repo
| Path | Purpose |
|------|---------|
| `setup_env.sh` | Installs **Anaconda** if conda is missing, then creates the `video` env (name overridable via `CONDA_ENV`) and pip-installs the eval dependencies |
| `setup_data.sh` | Downloads **all** eval data: Video-MME, LVBench, and sync eval videos + audio (to `/opt/dlami/nvme`) |
| `COMMANDS.md` | Copy-paste commands: data download, merge, eval per model/dataset |
| `data/kto_training_data_v2_test.jsonl` | In-domain sync test set (426 JSONL lines) |
| `scripts/*.py` | Download, merge, eval, metrics helpers |
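The in-domain test set is plain JSONL, one example per line. A minimal sanity-check loader (it assumes only that each non-empty line is valid JSON; no field names are assumed):

```python
import json
from pathlib import Path


def load_jsonl(path):
    """Parse a JSONL file into a list of dicts, failing loudly on bad lines."""
    examples = []
    with Path(path).open(encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # tolerate trailing blank lines
            try:
                examples.append(json.loads(line))
            except json.JSONDecodeError as exc:
                raise ValueError(f"line {lineno} is not valid JSON: {exc}") from exc
    return examples


# load_jsonl("data/kto_training_data_v2_test.jsonl") should yield 426 examples.
```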
## Quick start
```bash
git clone https://huggingface.co/Rakancorle11/CleverHans-Evaluation
cd CleverHans-Evaluation
huggingface-cli login # if needed for gated models
chmod +x setup_env.sh setup_data.sh
bash setup_env.sh # on a machine with no conda: downloads Anaconda to ~/anaconda3 first
source ~/anaconda3/etc/profile.d/conda.sh # if this is your first shell after install
conda activate video
bash setup_data.sh # downloads Video-MME, LVBench, sync videos + audio to /opt/dlami/nvme
# Then follow COMMANDS.md — you choose which model on which benchmark.
```
**Fresh OS notes:** install `wget` before running (`sudo apt install -y wget`). System **ffmpeg** is recommended (`sudo apt install -y ffmpeg`). Override `INSTALL_DIR` / `ANACONDA_VERSION` / `CUDA_INDEX_URL` via environment variables if needed (see comments in `setup_env.sh`).
## Models (HF IDs)
| Role | Model |
|------|--------|
| Vanilla | `Qwen/Qwen3-Omni-30B-A3B-Instruct` |
| Full SFT (merge / eval base) | `Rakancorle11/qwen3omni_full_sft_revised_thinker_key` |
| DPO LoRA | `Rakancorle11/Qwen3Omni-onpolicy-dpo-lora-w_audio_v2_8632`, `_v3_8632`, `_v4_8632`, `_v5_12075` |
Merge a LoRA adapter into a full checkpoint for **vLLM** with `scripts/merge_adapter.py`. For **transformers-only** Video-MME/LVBench runs you can pass `--adapter` instead of merging.
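Under the hood, merging amounts to loading the base checkpoint, applying the adapter, and folding the LoRA deltas into the weights. A sketch of that idea with `peft` (the actual flags of `scripts/merge_adapter.py` may differ, and the `transformers` auto-class shown is an assumption for a Qwen3-Omni checkpoint):

```python
def merge_lora_checkpoint(base_id: str, adapter_id: str, out_dir: str) -> None:
    """Fold a LoRA adapter into a base model and save a full checkpoint.

    Imports are deferred so the sketch can be defined without the heavy
    dependencies installed; the class names here are assumptions.
    """
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
    # merge_and_unload() bakes the LoRA deltas into the base weights
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(out_dir)  # full weights, loadable by vLLM
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)


# e.g. merge_lora_checkpoint(
#     "Rakancorle11/qwen3omni_full_sft_revised_thinker_key",
#     "Rakancorle11/Qwen3Omni-onpolicy-dpo-lora-w_audio_v2_8632",
#     "/opt/dlami/nvme/merged/dpo_v2_8632",  # hypothetical output dir
# )
```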
## Data
- **Video-MME / LVBench / Sync eval data**: all downloaded by `bash setup_data.sh`.
- **Sync eval media** (original oops videos, random-shift videos, extracted audio): pulled from `hasnat79/ual_bench`, `Rakancorle11/random_shift_video`, `Rakancorle11/extracted_audio` into `/opt/dlami/nvme/video_source/`.
## Default paths (convention)
Scripts assume the same directory layout on every machine:
| What | Where |
|------|--------|
| Benchmark videos, merged full models, sync `video_source/` (original + shifted + audio) | `/opt/dlami/nvme/...` |
| Eval outputs (`eval_results.jsonl`, `metrics.json`, …) | `/home/ubuntu/eval_results/videomme`, `.../lvbench`, `.../sync` |
Override with `--video-dir`, `--output-dir`, `--data-root` if your layout differs.
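As a rough illustration of the output convention, `metrics.json` can be derived from `eval_results.jsonl`; the boolean `correct` field below is a hypothetical schema for illustration, not necessarily what the scripts emit:

```python
import json
from pathlib import Path


def summarize(results_path: str, metrics_path: str) -> dict:
    """Aggregate per-example eval results into a single metrics file.

    Assumes one JSON object per line with a boolean `correct` field
    (hypothetical schema for illustration).
    """
    lines = Path(results_path).read_text(encoding="utf-8").splitlines()
    rows = [json.loads(l) for l in lines if l.strip()]
    metrics = {
        "n": len(rows),
        "accuracy": sum(r["correct"] for r in rows) / len(rows) if rows else 0.0,
    }
    Path(metrics_path).write_text(json.dumps(metrics, indent=2))
    return metrics
```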
## Requirements
- Strong GPU(s), plus roughly 200 GB+ of disk for benchmark data and merged weights
- vLLM: `--tp` must evenly divide **20** (the audio encoder's attention-head count); e.g. `--tp 4` works, `--tp 8` does not
- `setup_env.sh` uses CUDA 12.4 PyTorch wheels by default; override `CUDA_INDEX_URL` if needed
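The `--tp` constraint above means tensor parallelism must evenly split the 20 audio-encoder attention heads, so the valid sizes are exactly the divisors of 20:

```python
def valid_tp_sizes(num_heads: int = 20) -> list[int]:
    """Tensor-parallel sizes that evenly split `num_heads` attention heads."""
    return [tp for tp in range(1, num_heads + 1) if num_heads % tp == 0]


print(valid_tp_sizes())  # → [1, 2, 4, 5, 10, 20]  (so --tp 8 is rejected)
```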
## Notes
- Eval scripts **resume** from existing `eval_results.jsonl`.
- In-domain sync: use `--data-root` so paths are not tied to `/home/ubuntu/video_source`.
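The resume behavior can be sketched as: read any existing `eval_results.jsonl`, collect the examples already answered, and only run the rest (using `id` as the per-example key is an assumption of this sketch):

```python
import json
from pathlib import Path


def pending_examples(examples, results_path):
    """Return examples whose `id` is absent from an existing results file.

    `id` as the per-example key is an assumption for this sketch.
    """
    done = set()
    path = Path(results_path)
    if path.exists():  # no results yet means nothing to skip
        for line in path.read_text(encoding="utf-8").splitlines():
            if line.strip():
                done.add(json.loads(line)["id"])
    return [ex for ex in examples if ex["id"] not in done]
```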