SoccerNet-GAR: Pixels or Positions? Benchmarking Modalities in Group Activity Recognition
SoccerNet-GAR is a large-scale multimodal dataset for Group Activity Recognition (GAR) built from all 64 matches of the FIFA World Cup 2022 tournament. It provides synchronized broadcast video and player tracking data for 87,939 annotated group activities across 10 action classes, enabling direct comparison between video-based and tracking-based approaches.
Dataset Details
Description
SoccerNet-GAR is the first dataset to provide synchronized tracking and video modalities for the same action instances in group activity recognition. For each annotated event, a 4.5-second temporal window is extracted from both the broadcast video and the player tracking stream, centered on the event timestamp. Within each window, 16 frames are sampled from the 30 fps stream with a stride of 9 frames, giving an effective sampling rate of roughly 3.3 fps.
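The windowing scheme can be made concrete with a small sketch. This is not the official loader; it simply recovers the 16 sampled frame indices for one event under the stated parameters (4.5 s window, 30 fps stream, 9-frame stride), with the event frame centered in the window.

```python
FPS = 30          # broadcast and tracking frame rate
STRIDE = 9        # frames between consecutive samples
NUM_SAMPLES = 16  # samples per event window

def sampled_indices(event_frame: int) -> list[int]:
    """Return the 16 frame indices for a window centered on the event frame."""
    span = STRIDE * (NUM_SAMPLES - 1)   # 135 frames = 4.5 s at 30 fps
    start = event_frame - span // 2
    return [start + i * STRIDE for i in range(NUM_SAMPLES)]

idx = sampled_indices(event_frame=1000)
print(len(idx), idx[-1] - idx[0])   # 16 samples spanning 135 frames
```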
The dataset contains two input modalities:
- Video Modality: Broadcast footage at 720p resolution. Each frame is part of a temporal sequence sampled within the event window, capturing appearance cues, scene context, and visual motion patterns.
- Tracking Modality: 2D player positions and 3D ball coordinates sampled at 30 fps, automatically extracted from broadcast footage and manually refined by annotators. Player positions span x in [-60, 60]m, y in [-42, 41]m; ball positions include height z in [-8, 25]m. Each entity state encodes spatial coordinates, entity identity (one-hot encoding), and motion dynamics (displacement vectors between consecutive frames). Positional role metadata (goalkeeper, defender, midfielder, forward) is provided for each player.
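The per-entity state described above can be sketched as a flat feature vector. The field layout and the number of entity types below are illustrative assumptions, not the dataset's actual schema:

```python
import numpy as np

NUM_ENTITY_TYPES = 23  # assumed: 22 players + ball

def entity_state(pos, prev_pos, entity_id):
    """Concatenate spatial coordinates, one-hot identity, and displacement."""
    pos = np.asarray(pos, dtype=np.float32)
    prev_pos = np.asarray(prev_pos, dtype=np.float32)
    one_hot = np.zeros(NUM_ENTITY_TYPES, dtype=np.float32)
    one_hot[entity_id] = 1.0
    displacement = pos - prev_pos  # motion between consecutive frames
    return np.concatenate([pos, one_hot, displacement])

state = entity_state(pos=[12.0, -5.0], prev_pos=[11.5, -5.2], entity_id=3)
print(state.shape)  # (27,) = 2 coords + 23 one-hot + 2 displacement
```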
| Property | Value |
|---|---|
| Curated by | KAUST, University of Liège |
| Original Data Source | Gradient Sports (formerly PFF FC) |
| Total Events | 87,939 |
| Matches | 64 (FIFA World Cup 2022) |
| Action Classes | 10 |
| Modalities | Video + Tracking |
| Avg. Events per Match | 1,374 |
Sources
- Repository: https://github.com/drishyakarki/pixels_vs_positions
- Paper: Pixels or Positions? Benchmarking Modalities in Group Activity Recognition (arXiv:2511.12606)
How to Use
Results from the paper can be reproduced using OpenSportsLib. Configuration files for all experiments are provided in the pixels_vs_positions repository.
Tracking (Positions)
Download data from the tracking-parquet branch:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="OpenSportsLab/soccernetpro-classification-GAR",
    repo_type="dataset",
    revision="tracking-parquet",
    local_dir="sngar-tracking",
)
```
Extract the splits:
```bash
unzip train.zip && unzip valid.zip && unzip test.zip
rm train.zip valid.zip test.zip
```
Train and evaluate:
```python
from opensportslib import model

myModel = model.classification(
    config="path/to/classification_tracking.yaml",
    data_dir="sngar-tracking",
)
myModel.train(
    train_set="sngar-tracking/annotations_train.json",
    valid_set="sngar-tracking/annotations_valid.json",
)
myModel.infer(test_set="sngar-tracking/annotations_test.json")
```
Video (Pixels)
Download data from the frames branch:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="OpenSportsLab/soccernetpro-classification-GAR",
    repo_type="dataset",
    revision="frames",
    local_dir="sngar-frames",
)
```
The training split is stored as a multi-part zip. Reassemble and extract:
```bash
cat train.zip.part_aa train.zip.part_ab > train.zip
unzip train.zip && unzip valid.zip && unzip test.zip
rm train.zip.part_aa train.zip.part_ab train.zip valid.zip test.zip
```
Train and evaluate:
```python
from opensportslib import model

if __name__ == '__main__':
    myModel = model.classification(
        config="path/to/sngar-frames.yaml",
        data_dir="sngar-frames",
    )
    myModel.train(
        train_set="sngar-frames/annotations_train.json",
        valid_set="sngar-frames/annotations_valid.json",
        use_ddp=False,
    )
    myModel.infer(test_set="sngar-frames/annotations_test.json")
```
See the pixels_vs_positions repository for the specific config files needed to reproduce each experiment in the paper.
Dataset Structure
Action Classes
The dataset contains 10 action classes reflecting common football events:
| Class | Count | Proportion |
|---|---|---|
| PASS | 57,521 | 65.4% |
| TACKLE | 10,943 | 12.4% |
| OUT | 5,873 | 6.7% |
| HEADER | 5,723 | 6.5% |
| THROW IN | 2,598 | 3.0% |
| CROSS | 2,175 | 2.5% |
| FREE KICK | 1,788 | 2.0% |
| SHOT | 1,041 | 1.2% |
| GOAL | 188 | 0.2% |
| HIGH PASS | 89 | 0.1% |
The dataset exhibits severe class imbalance (646:1 ratio between PASS and HIGH PASS), reflecting the natural distribution of football events.
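One standard mitigation for imbalance of this magnitude is inverse-frequency class weighting. The sketch below uses the counts from the table above; whether the paper's models use weighted losses is not claimed here, this only illustrates the technique:

```python
import numpy as np

# Per-class counts from the table: PASS ... HIGH PASS
counts = np.array(
    [57521, 10943, 5873, 5723, 2598, 2175, 1788, 1041, 188, 89],
    dtype=np.float64,
)
# Inverse-frequency weights: rare classes get proportionally larger weights.
weights = counts.sum() / (len(counts) * counts)
print(weights.max() / weights.min())  # ~646, mirroring the imbalance ratio
```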
Splits
Data is split at the match level to prevent leakage:
| Split | Matches | Events | Proportion |
|---|---|---|---|
| Train | 45 | 62,159 | 70.7% |
| Validation | 9 | 12,091 | 13.7% |
| Test | 10 | 13,689 | 15.6% |
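A match-level split can be sketched as below. The field name `match_id` is an assumption, and the match fractions are chosen so that 64 matches land in the same 45/9/10 partition as the table above:

```python
import random

def split_by_match(events, train_frac=0.70, valid_frac=0.14, seed=0):
    """Assign whole matches to splits so no match leaks across splits."""
    matches = sorted({e["match_id"] for e in events})
    random.Random(seed).shuffle(matches)
    n_train = round(len(matches) * train_frac)
    n_valid = round(len(matches) * valid_frac)
    train = set(matches[:n_train])
    valid = set(matches[n_train:n_train + n_valid])
    splits = {"train": [], "valid": [], "test": []}
    for e in events:
        key = ("train" if e["match_id"] in train
               else "valid" if e["match_id"] in valid else "test")
        splits[key].append(e)
    return splits

# Toy example: 64 matches with a few events each.
events = [{"match_id": m, "event": i} for m in range(64) for i in range(3)]
splits = split_by_match(events)
print({k: len({e["match_id"] for e in v}) for k, v in splits.items()})
# → {'train': 45, 'valid': 9, 'test': 10}
```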
Data Quality
- Player tracking completeness: 99.9% of 1,485,008 frames contain all 11 players per team.
- Ball visibility: 93.4% of frames contain ball tracking data.
- Event-level ball coverage: 85.9% of annotated events have complete ball tracking within their temporal window.
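Statistics like these can be recomputed with a per-frame completeness check in this spirit. The sketch assumes a long-format tracking table with one row per tracked entity per frame; the column names `frame_id` and `entity` are assumptions, not the actual schema:

```python
import pandas as pd

def completeness(df: pd.DataFrame) -> dict:
    """Fraction of frames with all 22 players, and with ball tracking."""
    per_frame = df.groupby("frame_id")["entity"].agg(set)
    players_complete = per_frame.apply(lambda s: len(s - {"ball"}) == 22).mean()
    ball_visible = per_frame.apply(lambda s: "ball" in s).mean()
    return {"players_complete": players_complete, "ball_visible": ball_visible}

# Tiny synthetic example: frame 0 is fully tracked, frame 1 misses the ball.
rows = [{"frame_id": f, "entity": f"player_{i}"} for f in (0, 1) for i in range(22)]
rows.append({"frame_id": 0, "entity": "ball"})
stats = completeness(pd.DataFrame(rows))
print(stats)  # {'players_complete': 1.0, 'ball_visible': 0.5}
```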
Branches
This repository is organized into the following branches:
| Branch | Contents |
|---|---|
| `main` | Dataset card and documentation. |
| `paper-data` | The exact dataset needed to reproduce the results in the paper. Contains broadcast videos (1 npy clip per event) and tracking files (1 parquet file per full match). |
| `frames` | 1 npy clip per event for the video modality. Annotations are in SoccerNetPro format. |
| `tracking-parquet` | 1 parquet file per event for the tracking modality. Annotations are in SoccerNetPro format. |
| `multimodal-data` | Combined video (npy) and tracking (parquet) data with 1 file per event per modality. Uses a unified annotation file for both modalities in SoccerNetPro format. |
Benchmark Results
Pixels vs. Positions
| Modality | Model | Params | Bal. Acc. | F1 | Training |
|---|---|---|---|---|---|
| Tracking | GIN + MaxPool + Positional Edges | 180K | 77.8% | 57.0% | 4 GPU hours |
| Video | VideoMAEv2-B (finetuned) | 86.3M | 60.9% | 50.1% | 28 GPU hours |
The tracking model outperforms the video baseline by 16.9 percentage points in balanced accuracy and 6.9 percentage points in macro F1 while using 479x fewer parameters and training 7x faster.
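To make the headline metrics unambiguous, here is how balanced accuracy (mean per-class recall) and macro F1 are computed with scikit-learn, on a small toy example rather than the actual predictions:

```python
from sklearn.metrics import balanced_accuracy_score, f1_score

# Toy labels: 3 classes with imbalanced support.
y_true = [0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 2, 0]

bal_acc = balanced_accuracy_score(y_true, y_pred)    # mean per-class recall
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
print(round(bal_acc, 3), round(macro_f1, 3))
```

Both metrics weight every class equally, which is why they are preferred over plain accuracy on a dataset where PASS alone accounts for 65% of events.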
Per-Class Comparison (Test Set, Balanced Accuracy)
| Class | Samples | Tracking | Video |
|---|---|---|---|
| PASS | 9,009 | 81.1 | 77.6 |
| TACKLE | 1,690 | 54.0 | 32.2 |
| OUT | 884 | 94.2 | 75.8 |
| HEADER | 867 | 65.2 | 66.3 |
| THROW IN | 12 | 84.2 | 78.6 |
| CROSS | 392 | 86.7 | 77.2 |
| FREE KICK | 347 | 90.4 | 79.4 |
| SHOT | 272 | 76.3 | 63.4 |
| GOAL | 186 | 73.3 | 16.7 |
| HIGH PASS | 30 | 83.3 | 41.7 |
Tracking dominates on 9 of 10 classes, with its largest gains on less frequent classes like GOAL (+56.6 pp) and HIGH PASS (+41.6 pp). Video shows a slight advantage only on HEADER (+1.1 pp). Tracking models learn discriminative features even in severely data-scarce regimes (GOAL: 73.3%, HIGH PASS: 83.3%), whereas video models collapse on these classes (16.7% and 41.7%).
Uses
Direct Use
- Benchmarking video-based vs. tracking-based group activity recognition
- Training and evaluating GAR models on football broadcast data
- Studying multimodal fusion approaches combining visual and positional features
- Analyzing spatial interaction patterns in team sports
Dataset Creation
Curation Rationale
No standardized benchmark previously existed that aligns broadcast video and tracking data for the same group activities. This made fair, apples-to-apples comparison between video-based and tracking-based approaches impossible. SoccerNet-GAR was created to fill this gap by providing synchronized multimodal observations under a unified evaluation protocol.
Source Data
The dataset was constructed from data provided by PFF FC (now Gradient Sports), comprising broadcast videos, player tracking data, and event annotations for all 64 FIFA World Cup 2022 matches.
Data Cleaning and Alignment
Event annotations are aligned with both input modalities by merging them with tracking streams using UTC timestamps. Three successive filters ensure data quality:
- Temporal alignment: Events where no tracking frame falls within a 10 ms tolerance of the event timestamp are removed.
- Modality coverage: Events lacking corresponding data in either modality are discarded.
- Duplicate resolution: When a single timestamp is annotated with more than one action class (e.g., a goal also labeled as a shot), only the most semantically specific label is retained based on a predefined priority ordering.
Together, these filters remove 6,346 events (6.8% of raw annotations), yielding the final dataset of 87,939 annotated group activities.
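The three filters can be expressed schematically as below. The record fields and the priority ordering are illustrative, not the paper's exact implementation:

```python
def clean_events(events, tracking_ts, has_video, has_tracking, tol_s=0.010):
    # 1) Temporal alignment: a tracking frame must fall within 10 ms.
    aligned = [e for e in events
               if any(abs(e["ts"] - t) <= tol_s for t in tracking_ts)]
    # 2) Modality coverage: both modalities must exist for the event.
    covered = [e for e in aligned if has_video(e) and has_tracking(e)]
    # 3) Duplicate resolution: keep the most specific label per timestamp.
    priority = {"GOAL": 0, "SHOT": 1, "PASS": 2}  # illustrative ordering
    best = {}
    for e in covered:
        cur = best.get(e["ts"])
        if cur is None or priority.get(e["label"], 99) < priority.get(cur["label"], 99):
            best[e["ts"]] = e
    return list(best.values())

# Toy example: a duplicate GOAL/SHOT timestamp, one event without tracking,
# and one event with no nearby tracking frame.
events = [{"ts": 0.0, "label": "SHOT"}, {"ts": 0.0, "label": "GOAL"},
          {"ts": 1.0, "label": "PASS"}, {"ts": 5.0, "label": "PASS"}]
kept = clean_events(events, tracking_ts=[0.0, 1.0],
                    has_video=lambda e: True,
                    has_tracking=lambda e: e["ts"] != 1.0)
print(kept)  # only the GOAL at ts=0.0 survives all three filters
```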
Annotation Process
Event annotations with precise timestamps were created by trained annotators and verified through quality control procedures by PFF FC using both video and tracking views. Each event is labeled with one of 10 group activities and temporally marked at the moment of occurrence.
Comparison with Existing Datasets
| Dataset | Year | Domain | Events | Classes | Modalities |
|---|---|---|---|---|---|
| CAD | 2009 | Pedestrian | 2,511 | 5 | V |
| Volleyball | 2016 | Volleyball | 4,830 | 8 | V |
| SoccerNet | 2018 | Football | 6,637 | 3 | V |
| NBA | 2020 | Basketball | 9,172 | 9 | V |
| SoccerNet-v2 | 2021 | Football | 110,458 | 17 | V |
| NETS | 2022 | Basketball | 61,053 | 3 | T |
| SoccerNet-BAS | 2024 | Football | 11,041 | 12 | V |
| Cafe | 2024 | Indoor | 10,297 | 6 | V |
| FIFAWC | 2024 | Football | 5,196 | 12 | V |
| SoccerNet-GAR | 2026 | Football | 87,939 | 10 | V + T |
SoccerNet-GAR is the second largest GAR dataset (after SoccerNet-v2) and the only one providing synchronized video and tracking modalities for the same action instances.
Citation
```bibtex
@article{karki2025pixels,
  title={Pixels or Positions? Benchmarking Modalities in Group Activity Recognition},
  author={Karki, Drishya and Ramazanova, Merey and Cioppa, Anthony and Giancola, Silvio and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2511.12606},
  year={2025}
}
```
Authors
- Drishya Karki (KAUST)
- Merey Ramazanova (KAUST)
- Anthony Cioppa (University of Liège)
- Silvio Giancola (KAUST)
- Bernard Ghanem (KAUST)
Contact