dsa (goodasdgood)
AI & ML interests
None yet
Recent Activity
new activity about 13 hours ago
black-forest-labs/FLUX.2-dev-NVFP4: split · new activity about 14 hours ago
black-forest-labs/FLUX.1-dev-onnx: Example of how to run inference on this with optimum (optimum[onnx-runtime])? · new activity about 14 hours ago
black-forest-labs/FLUX.1-dev-onnx: code

Organizations
None yet
split · 1 · #3 opened about 14 hours ago by goodasdgood
Example of how to run inference on this with optimum (optimum[onnx-runtime])? · 1 · #4 opened about 1 year ago by ernestyalumni
code · #9 opened about 14 hours ago by goodasdgood
How to use this model · 1 reaction · 1 · #6 opened 9 months ago by CaoHaiNam
2 gpu · 9 · #3 opened 9 days ago by goodasdgood
Demo . 4bit. · 4 · #15 opened 5 months ago by sccssc
colab cpu · #3 opened 11 days ago by goodasdgood
gguf · #5 opened 15 days ago by goodasdgood
colab t4 · #37 opened 16 days ago by goodasdgood
lmstudio 0.4.2 · 1 · #2 opened 2 months ago by goodasdgood
rrun · 1 · #1 opened over 1 year ago by goodasdgood
llama.cpp support · 11 reactions · 9 · #1 opened over 1 year ago by ayyylol
Phi-3.5-MoE-instruct · 6 · #117 opened over 1 year ago by goodasdgood
cpu · 2 · #16 opened over 1 year ago by goodasdgood
one part · 14 · #1 opened over 1 year ago by goodasdgood
cpu · 3 · #16 opened over 1 year ago by goodasdgood
how to use in cpu only · 1 reaction · 11 · #33 opened over 1 year ago by jtc1246
LLAMA CPP Python code · 3 · #3 opened over 1 year ago by [deleted]
T4 - bfloat 16 not support · 10 · #2 opened over 1 year ago by SylvainV