- LFM2.5 1.2B Thinking WebGPU
  💧 110 · Run LFM2.5-1.2B-Thinking directly in your browser on WebGPU
- Voxtral Realtime WebGPU
  💬 106 · Real-time speech transcription, entirely in your browser.
- Nemotron 3 Nano WebGPU
  ⚛ 75 · A compact reasoning-capable model running in your browser.
- Qwen3.5 WebGPU
  😻 72 · Run Qwen3.5 (0.8B, 2B, 4B) in-browser with Transformers.js
Collections
Discover the best community collections!
Collections trending this week
- nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16
  Text Generation • 124B • Updated • 500k • 329
- nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8
  Text Generation • 124B • Updated • 1.02M • 229
- nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4
  Text Generation • 67B • Updated • 1.62M • 266
- nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-Base-BF16
  Text Generation • 124B • Updated • 14.7k • 26
- meta-llama/Llama-3.2-1B
  Text Generation • 1B • Updated • 1.25M • 2.36k
- meta-llama/Llama-3.2-3B
  Text Generation • 3B • Updated • 1.15M • 747
- meta-llama/Llama-3.2-1B-Instruct
  Text Generation • 1B • Updated • 4.26M • 1.36k
- meta-llama/Llama-3.2-3B-Instruct
  Text Generation • 3B • Updated • 5.17M • 2.09k
- meta-llama/Meta-Llama-3-8B
  Text Generation • 8B • Updated • 3.2M • 6.51k
- meta-llama/Meta-Llama-3-8B-Instruct
  Text Generation • 8B • Updated • 1.3M • 4.47k
- meta-llama/Meta-Llama-3-70B-Instruct
  Text Generation • 71B • Updated • 48.3k • 1.51k
- meta-llama/Meta-Llama-3-70B
  Text Generation • 71B • Updated • 131k • 874