# MedASR ONNX
This repository contains the google/medasr model converted to ONNX format for on-device inference with sherpa-onnx in the MedGem Android application.
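On-device decoding consumes raw audio samples; sherpa-onnx offline recognizers typically expect mono float32 samples in [-1, 1] at the model's sampling rate (commonly 16 kHz). A minimal sketch of that preprocessing step, using only the Python standard library (the helper name and the 16 kHz assumption are illustrative, not taken from this repository):

```python
import array
import io
import wave

def wav_to_float_samples(wav_bytes: bytes) -> tuple[list[float], int]:
    """Decode a 16-bit mono PCM WAV into float samples in [-1, 1]."""
    with wave.open(io.BytesIO(wav_bytes)) as wf:
        assert wf.getsampwidth() == 2, "expected 16-bit PCM"
        assert wf.getnchannels() == 1, "expected mono audio"
        pcm = array.array("h")  # signed 16-bit little-endian samples
        pcm.frombytes(wf.readframes(wf.getnframes()))
        return [s / 32768.0 for s in pcm], wf.getframerate()

# Build a tiny in-memory WAV purely for demonstration.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(array.array("h", [0, 16384, -16384, 32767]).tobytes())

samples, rate = wav_to_float_samples(buf.getvalue())
print(rate, samples)
```

The resulting float list and sample rate are what you would hand to the recognizer's audio input.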
## Conversion Details
The model was exported to ONNX format using the k2-fsa/sherpa-onnx project scripts.
### Prerequisites
- You must have a Hugging Face account and an access token with access to the google/medasr model (you must accept the license agreement on the model card).
### Export Instructions
```shell
# Clone sherpa-onnx
git clone https://github.com/k2-fsa/sherpa-onnx.git
cd sherpa-onnx

# Check out the specific commit that was used for the conversion
git checkout bb49014c4e5dfbd60a58d1a0a7ad351883eebf40

cd scripts/medasr

# Set up the environment and install the pinned dependencies
uv venv --python 3.12
source .venv/bin/activate
uv pip install accelerate bitsandbytes librosa onnx==1.17.0 onnxruntime==1.17.1 onnxscript "numpy<2" kaldi-native-fbank "git+https://github.com/huggingface/transformers.git@65dc261512cbdb1ee72b88ae5b222f2605aad8e5"

# Export the model
python export_onnx.py
```
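The exported weights are int8-quantized. As a rough sketch of the arithmetic behind that (the affine quantize/dequantize mapping used by ONNX's QuantizeLinear/DequantizeLinear operators; the scale and zero-point values below are illustrative assumptions, not the model's actual parameters):

```python
def quantize(x: float, scale: float, zero_point: int) -> int:
    """Affine int8 quantization: q = round(x / scale) + zero_point, clamped."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the signed int8 range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Inverse mapping: x ~= (q - zero_point) * scale."""
    return (q - zero_point) * scale

# Illustrative parameters; real models store per-tensor or per-channel values.
scale, zero_point = 0.02, 0
w = 0.5
q = quantize(w, scale, zero_point)
print(q, dequantize(q, scale, zero_point))
```

Each weight is stored as one byte plus shared scale/zero-point metadata, which is why the int8 export is roughly a quarter of the float32 model's size, at the cost of a small rounding error per weight.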
This procedure generates the `model.int8.onnx` and `tokens.txt` files, which are used directly for on-device Automatic Speech Recognition in MedGem.
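sherpa-onnx token files conventionally follow the symbol-table format of one `token id` pair per line; assuming this file does too, mapping model output ids back to tokens is a few lines (the sample content below is hypothetical, not taken from the actual tokens.txt):

```python
def load_tokens(text: str) -> dict[int, str]:
    """Parse a symbol-table style tokens file: one 'token id' pair per line."""
    id2token = {}
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        token, idx = line.rsplit(maxsplit=1)  # token text may contain spaces
        id2token[int(idx)] = token
    return id2token

# Hypothetical snippet of tokens.txt content, for illustration only.
sample = "<blk> 0\n<sos/eos> 1\n\u2581the 2\n"
id2token = load_tokens(sample)
print(id2token[2])
```

Splitting from the right keeps the parser robust when a token itself contains whitespace.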