onnx version available

#7
by eschmidbauer - opened

Just sharing this here in case anyone is interested. I get pretty fast inference on CPU with ONNX.

Thanks! Worked great on an Apple Silicon MacBook (M3 Max).

See also
https://k2-fsa.github.io/sherpa/onnx/cohere_transcribe/index.html

We support using cohere transcribe in 12 programming languages.

