Add comprehensive model card with MLX inference examples and evaluation results

#1 opened by YingxuHe (MERaLiON org)

Adds a rich model card adapted from the original MERaLiON-2-10B README with:

  • MLX quantization info (8-bit group quantization)
  • MLX inference code examples
  • No-repeat n-gram blocking documentation
  • ASR evaluation results (WER comparison against the HuggingFace Transformers implementation)
  • Supported task prompts

Source: https://github.com/YingxuH/mlx_conversion

YingxuHe changed pull request status to closed