Fast tokenizer request

#2 opened by celophi

Hello,

Thanks for releasing ku-nlp/deberta-v3-base-japanese. I noticed that the repository currently only ships the slow Python tokenizer, so it has to be loaded with use_fast=False.
Would it be possible to also provide a Rust-backed "fast" tokenizer version? That would speed up preprocessing noticeably and make integration with frameworks that expect a fast tokenizer easier.
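For context, the "fast" tokenizers are the Rust-backed ones from the `tokenizers` library, which is what `use_fast=True` wraps. A minimal standalone sketch of such a tokenizer is below; it uses a toy word-level vocabulary purely for illustration, not the model's actual SentencePiece vocabulary:

```python
from tokenizers import Tokenizer, models, pre_tokenizers

# Toy vocabulary for illustration only; the real model uses SentencePiece.
vocab = {"[UNK]": 0, "hello": 1, "world": 2}

# Build a Rust-backed tokenizer: word-level model + whitespace pre-tokenizer.
tok = Tokenizer(models.WordLevel(vocab, unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()

enc = tok.encode("hello world")
print(enc.ids)  # -> [1, 2]
```

A tokenizer built this way can be serialized to a single `tokenizer.json`, which is the file a fast tokenizer version of the model repository would need to include.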

Thank you for considering it.
