Starting benchmark of 11 models...
[gemma4:e2b] General / 2B / Dense
Run 1: prefill=635.98 decode=84.76
Run 2: prefill=1095.17 decode=84.09
Run 3: prefill=1092.58 decode=85.22
-> gemma4:e2b: decode=84.69 tok/s
[gemma4:e4b] General / 4B / Dense
Run 1: prefill=161.56 decode=20.91
Run 2: prefill=420.65 decode=21.48
Run 3: prefill=420.84 decode=21.31
-> gemma4:e4b: decode=21.23 tok/s
[gemma4:26b] General / 26B / MoE
Run 1: prefill=3.83 decode=8.97
Run 2: prefill=226.03 decode=9.55
Run 3: prefill=194.78 decode=9.97
-> gemma4:26b: decode=9.50 tok/s
[qwopus] Reasoning / 27B / IQ3_XS
Run 1: prefill=3.99 decode=3.70
Run 2: prefill=9.29 decode=3.79
Run 3: prefill=8.00 decode=3.79
-> qwopus: decode=3.76 tok/s
[deepseek-r1:14b] Reasoning / 14B / Dense
Run 1: prefill=30.47 decode=4.90
Run 2: prefill=73.63 decode=4.94
Run 3: prefill=74.34 decode=4.89
-> deepseek-r1:14b: decode=4.91 tok/s
[qwen2.5:14b] General / 14B / Dense
Run 1: prefill=93.92 decode=4.82
Run 2: prefill=196.16 decode=4.89
Run 3: prefill=196.91 decode=4.82
-> qwen2.5:14b: decode=4.84 tok/s
[qwen2.5:7b] General / 7B / Dense
Run 1: prefill=574.59 decode=60.49
Run 2: prefill=953.09 decode=61.39
Run 3: prefill=1483.54 decode=61.39
-> qwen2.5:7b: decode=61.09 tok/s
[qwen2.5-coder:7b] Coding / 7B / Dense
Run 1: prefill=222.64 decode=60.80
Run 2: prefill=1504.39 decode=61.20
Run 3: prefill=1492.61 decode=60.58
-> qwen2.5-coder:7b: decode=60.86 tok/s
[mistral:7b] General / 7B / Dense
Run 1: prefill=279.20 decode=59.85
Run 2: prefill=604.70 decode=60.60
Run 3: prefill=411.98 decode=60.62
-> mistral:7b: decode=60.36 tok/s
[phi3:mini] General / 3.8B / Dense
Run 1: prefill=27.76 decode=20.43
Run 2: prefill=283.88 decode=19.89
Run 3: prefill=390.73 decode=20.40
-> phi3:mini: decode=20.24 tok/s
[llama3.2:3b] General / 3B / Dense
Run 1: prefill=690.30 decode=107.59
Run 2: prefill=912.40 decode=107.21
Run 3: prefill=1893.20 decode=108.58
-> llama3.2:3b: decode=107.79 tok/s
Done: /home/gio/HF_repos/benchmarks/all_models/results.md
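Each summary line in the log above is the mean decode speed over the three runs. A minimal sketch of that calculation, assuming an Ollama-style `/api/generate` response (the `eval_count` / `eval_duration` field names are from Ollama's API; the helper names here are illustrative, not from the benchmark repo):

```python
# Sketch: per-run decode tok/s and the three-run average, as in the log.
# Assumes Ollama's /api/generate JSON fields: eval_count (tokens generated)
# and eval_duration (nanoseconds spent generating them).

def decode_tps(eval_count: int, eval_duration_ns: int) -> float:
    """Decode speed in tokens/second from Ollama response fields."""
    return eval_count / (eval_duration_ns / 1e9)

def summarize(runs: list[float]) -> float:
    """Average decode speed over the benchmark runs."""
    return sum(runs) / len(runs)

# Example: the three gemma4:e2b decode figures from the log
runs = [84.76, 84.09, 85.22]
print(f"decode={summarize(runs):.2f} tok/s")  # matches the log's 84.69
```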

AMD ROCm Benchmarks — Positronica Labs

Benchmarks of local models running on an AMD GPU with ROCm on Linux. For the LatAm community running AI without Apple or NVIDIA hardware.

Available benchmarks

| Folder | Models | Description |
|---|---|---|
| gemma4/ | Gemma 4 E2B, E4B, 26B | Gemma 4, benchmarked on day 3 after release |
| all_models/ | 11 models | Master benchmark with usage guide |

Base hardware

AMD RX 6700 XT 12GB + Ryzen 5 5600G + 16GB RAM + Pop!_OS 24.04
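A reproduction note for this GPU: the RX 6700 XT (gfx1031) is not on ROCm's officially supported list, and a widely used workaround is to present it to the ROCm userspace as gfx1030. This is a configuration hint assumed from common ROCm practice, not something stated in this repo:

```shell
# RX 6700 XT reports gfx1031, but ROCm userspace ships gfx1030 binaries.
# Overriding the reported GFX version is a common (unofficial) workaround:
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Verify the GPU is visible to ROCm:
rocminfo | grep gfx
```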

GitHub LinkedIn

Second-hand hardware. Free software. AI for everyone.
