---
license: cc-by-4.0
language:
- en
configs:
- config_name: Contrastive_Learning
  data_files:
  - split: train
    path: contrastive_learning/train/data-00000-of-00001.arrow
  - split: validation
    path: contrastive_learning/validation/data-00000-of-00001.arrow
  - split: test
    path: contrastive_learning/test/data-00000-of-00001.arrow
- config_name: Thresholding
  data_files:
  - split: train
    path: thresholding/train/data-00000-of-00001.arrow
  - split: validation
    path: thresholding/validation/data-00000-of-00001.arrow
  - split: test
    path: thresholding/test/data-00000-of-00001.arrow
---
Authorship verification (AV) task used in the paper "Tokenization is Sensitive to Language Variation" (arXiv:2502.15343).
Note that the "Contrastive Learning" train/validation splits were used to fine-tune BERT models with a contrastive learning objective (SupConLoss). A decision threshold was then chosen on the "Thresholding" validation split, and final performance was computed on the "Thresholding" test split.
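The threshold-selection step above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes each validation pair has been scored with a similarity between the two texts' embeddings (the scores and labels below are made-up placeholders), and it picks the cutoff that maximizes validation accuracy.

```python
import numpy as np

# Hypothetical validation data: similarity scores for text pairs from the
# "Thresholding" validation split, with same-author (1) / different-author (0)
# labels. In practice these would come from the fine-tuned BERT embeddings.
val_scores = np.array([0.91, 0.85, 0.40, 0.30, 0.77, 0.22])
val_labels = np.array([1, 1, 0, 0, 1, 0])

def choose_threshold(scores, labels):
    """Sweep candidate thresholds and keep the one with the best accuracy."""
    best_t, best_acc = 0.0, 0.0
    for t in np.unique(scores):  # every observed score is a candidate cutoff
        acc = ((scores >= t) == labels.astype(bool)).mean()
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

threshold, val_acc = choose_threshold(val_scores, val_labels)
```

The chosen `threshold` would then be applied unchanged to similarity scores on the "Thresholding" test split to produce the final accept/reject decisions.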
```bibtex
@article{wegmann2025tokenization,
  title={Tokenization is Sensitive to Language Variation},
  author={Wegmann, Anna and Nguyen, Dong and Jurgens, David},
  journal={arXiv preprint arXiv:2502.15343},
  year={2025}
}
```