Please provide instructions on how to run this model with the transformers library.
I tried running it with the TTS pipeline, but the audio seems broken when specifying
a 22000 Hz sample rate. It would be nice to have some official instructions on how to run a TTS example using the transformers library.
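For what it's worth, one common cause of "broken"-sounding output is a mismatch between the rate the model actually generates audio at and the rate written into the file header. This self-contained sketch (the 24000 Hz "true" rate is just an assumed example, not the model's documented rate) shows how a wrong header rate changes the playback duration:

```python
import math
import struct
import wave

true_rate = 24000   # rate the audio was actually generated at (assumed for illustration)
wrong_rate = 22000  # rate written into the WAV header

# One second of a 440 Hz sine tone at the true rate, as 16-bit PCM.
samples = [math.sin(2 * math.pi * 440 * t / true_rate) for t in range(true_rate)]
frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)

with wave.open("mismatch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(wrong_rate)  # header claims 22000 Hz
    f.writeframes(frames)

with wave.open("mismatch.wav", "rb") as f:
    duration = f.getnframes() / f.getframerate()

# The 1-second clip now plays for 24000/22000 ≈ 1.091 s, slowed and pitch-shifted.
print(round(duration, 3))  # → 1.091
```

So if the pipeline reports its own sampling rate, using that value instead of hardcoding 22000 may already fix the artifact.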
Take a good look at the model card, where there are links to the inference code.
There is an inference notebook at https://github.com/nineninesix-ai/kani-tts/tree/main/notebooks.
You can run it on Google Colab with a T4 GPU, for example.
It would be nice to have the model runnable with the transformers library, even if that means custom wrapper code requiring trust_remote_code=True.
If a full notebook is required to generate a single example, fewer people will probably use it.
On the HF Hub, I think many people are hoping for, or expecting, a few lines of code in a markdown code block to run it.
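To illustrate, the kind of snippet people expect on the Hub would look roughly like the pseudocode below. The "text-to-speech" pipeline task and trust_remote_code flag are standard transformers conventions, but this model does not currently support this path and the model id is a placeholder, so this is purely a sketch of the requested interface:

```python
# Hypothetical sketch only: kani-tts does not currently support this path.
from transformers import pipeline

pipe = pipeline("text-to-speech", model="<kani-tts model id>", trust_remote_code=True)
out = pipe("Hello from KaniTTS!")
# out["audio"] and out["sampling_rate"] would follow the usual TTS pipeline output shape.
```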
Moreover, the repo you mentioned is somewhat buried, since the model card's Source Code link points to https://github.com/nineninesix-ai/kanitts-vllm instead.
I understand if there are technical details that make this difficult or complicated, but I think it's worth bringing to attention nonetheless. :)
Thanks for the feedback! We'll soon publish a PyPI package for a better user experience!
Thank you so much!