Efficient MoE-based LLM Collection: Mixture-of-Experts Large Language Models with Advanced Quantization • 5 items • Updated Mar 11 • 23
Efficient Large Vision-Language Model Collection: ERGO, an LVLM trained with RL on efficiency objectives; https://github.com/nota-github/ERGO • 3 items • Updated Feb 22 • 25
Compressed Wav2Lip (Space, Running): Generate lip-sync videos from uploaded videos and audio files • 82
Compressed Stable Diffusion (Space, Running, Featured): Compare image generation results from original and compressed AI models • 116