Need support for mlx inference
#1
by Narutoouz - opened
- The Apple MLX community should support this model
- Create an inference engine (vLLM-Omni)
- This model's architecture is
- Unified multimodal diffusion models will be the future
Narutoouz changed discussion title from anyway for mlx inference? to Need support for mlx inference
Thank you for the suggestion. We completely agree that unified multimodal diffusion models represent the future of AI architecture.
To support this, we're hard at work on the inference infrastructure (https://github.com/AIDASLab/Dynin-Omni):
- vLLM-Omni: Support coming in v0.18.0.
- dInfer: Integration currently in progress.
- sglang: Planned next.
Apple MLX support is a fantastic idea for broader accessibility, and we would welcome community interest and contributions to make that happen as we expand our ecosystem.