2024-24679-fencing-touch-predictor
Project: CMU Fencing Classification Project
Author: Ethan Kessler (Carnegie Mellon University)
License: MIT
Date: 2025
Description:
This model predicts the frame in which a touch is scored in a fencing bout, using red/green light intensity features extracted from video frames. It was trained with AutoGluon on a curated dataset of per-frame features extracted with OpenCV (cv2) from 10 short fencing videos retrieved from https://actions.quarte-riposte.com/.
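The exact extraction pipeline is not published with the card; the sketch below is a hypothetical reconstruction of per-frame color features, assuming simple channel-sum ratios consistent with the feature names the model expects (`red_ratio`, `green_ratio`):

```python
import numpy as np

def frame_color_features(frame):
    """Red/green intensity ratios for one BGR frame (hypothetical reconstruction).

    frame: uint8 array of shape (H, W, 3) in OpenCV's BGR channel order.
    """
    total = float(frame.sum()) + 1e-9          # avoid division by zero on black frames
    red = float(frame[:, :, 2].sum())          # channel 2 is red in BGR
    green = float(frame[:, :, 1].sum())        # channel 1 is green
    return {"red_ratio": red / total, "green_ratio": green / total}

def video_features(path):
    """Color features for every frame of a video file."""
    import cv2  # OpenCV is only needed for frame decoding
    cap = cv2.VideoCapture(path)
    rows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rows.append(frame_color_features(frame))
    cap.release()
    return rows
```

The remaining model inputs (`red_diff`, `green_diff`, `z_red`, `z_green`) would be derived from these per-frame ratios, presumably as frame-to-frame differences and per-video z-scores.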
Framework:
- Model: WeightedEnsemble_L3
- Ensemble Weights:
  - NeuralNetFastAI_BAG_L2: 0.55
  - LightGBMXT_BAG_L2: 0.15
  - ExtraTreesMSE_BAG_L2: 0.15
  - ExtraTreesMSE_BAG_L1: 0.10
  - LightGBM_r131_BAG_L2: 0.00
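A weighted ensemble like WeightedEnsemble_L3 blends its base models' predictions by these weights; a minimal sketch (the weight values are as reported above, the `blend` helper is illustrative, not AutoGluon's API):

```python
import numpy as np

# Weights as reported on the card; LightGBM_r131_BAG_L2 has weight 0.0
# and therefore contributes nothing to the blend.
WEIGHTS = {
    "NeuralNetFastAI_BAG_L2": 0.55,
    "LightGBMXT_BAG_L2": 0.15,
    "ExtraTreesMSE_BAG_L2": 0.15,
    "ExtraTreesMSE_BAG_L1": 0.10,
    "LightGBM_r131_BAG_L2": 0.0,
}

def blend(base_preds):
    """Weighted average of base-model predictions, keyed by model name."""
    out = np.zeros_like(np.asarray(next(iter(base_preds.values())), dtype=float))
    for name, weight in WEIGHTS.items():
        out += weight * np.asarray(base_preds[name], dtype=float)
    return out
```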
Performance:
- Validation Score ≈ -0.0539 (negated root_mean_squared_error; higher is better)
- Training runtime ≈ 0.02s
- Validation runtime ≈ 0.0s
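AutoGluon reports regression scores as negated RMSE so that higher is always better; the validation score above can be reproduced from predictions with:

```python
import numpy as np

def neg_rmse(y_true, y_pred):
    """Negated root mean squared error, matching AutoGluon's sign convention."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return -float(np.sqrt(np.mean(err ** 2)))
```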
Example Usage:

```python
from autogluon.tabular import TabularPredictor
from huggingface_hub import snapshot_download
import pandas as pd

# Download the model repository from the Hugging Face Hub, then load the predictor.
model_dir = snapshot_download("emkessle/2024-24679-fencing-touch-predictor")
predictor = TabularPredictor.load(model_dir)

# df: a DataFrame with one row of color features per video frame.
feat_cols = ["red_ratio", "green_ratio", "red_diff", "green_diff", "z_red", "z_green"]
features = df[feat_cols].copy()
predictions = predictor.predict(features)
```

Notes:
- Intended for referee-assistive scoring and highlight extraction.
- Trained on clean data for Olympic-level scenarios.
Limitations:
- Reduced accuracy under poor lighting or glare.
- May not generalize to local/college amateur venues.
- Does not identify fencers or track motion.
Ethical Use:
For research, education, and sports analytics only. All data sourced from public fencing footage.
Citation:
Kessler, E. (2025). "2024-24679-fencing-touch-predictor" Hugging Face: https://huggingface.co/emkessle/2024-24679-fencing-touch-predictor