Add SWE-Bench Pro evaluation results · #46 opened about 2 months ago by nielsr
Regarding the calculation of VRAM requirements for deploying meta-llama/Llama-4-Maverick-17B-128E-Instruct · #45 opened 6 months ago by a58982284
MarCognity-AI for Meta – LLaMA 4 Maverick · #44 opened 6 months ago by elly99
Tool Calling · #43 opened 7 months ago by vipulchoube
Trying to run with TGI: I try to run the model with Docker; I am using 8 H200 GPUs on an Amazon EC2 p5en.48xlarge (1 reply) · #42 opened 10 months ago by sayak340
Access rejected · #38 opened 11 months ago by sheepyyy
Remove `<|python_start|>` and `<|python_end|>` tags from chat template · #37 opened 12 months ago by jhuntbach
Add reasoning capabilities for Llama 4 and add this model to HuggingChat · #36 opened 12 months ago by devopsML
Request: DOI · #35 opened 12 months ago by EVANTRD
Llama4 · #34 opened 12 months ago by duckingsimsen
Gated Repo Permission Still Pending for Llama-4 · #33 opened 12 months ago by brando
World's Largest Dataset · #32 opened about 1 year ago by deleted
When are we getting direct HuggingFace inference provider support? · #30 opened about 1 year ago by TejAndrewsACC
13B and 34B Pleeease!!! Most people cannot even run this. (❤️👍 3 · 1 reply) · #28 opened about 1 year ago by UniversalLove333
Llama-4-Maverick-03-26-Experimental (👍 9 · 1 reply) · #27 opened about 1 year ago by ChuckMcSneed
Access Rejected (3 replies) · #24 opened about 1 year ago by rajkaranswain16
torch.compile compatibility issue (➕ 3 · 6 replies) · #23 opened about 1 year ago by jhmun
Rejected access? (2 replies) · #22 opened about 1 year ago by pluttodk
Deploying production-ready Llama-4 [Maverick] on your AWS with vLLM (🚀🔥 3) · #21 opened about 1 year ago by agam30
GPU requirement (2 replies) · #20 opened about 1 year ago by meetzuber
How to run int4? (1 reply) · #19 opened about 1 year ago by BootsofLagrangian
[Request for feedback] Faster downloads with Xet (5 replies) · #18 opened about 1 year ago by clem
Thanks a lot! (1 reply) · #17 opened about 1 year ago by FalconNet
License (1 reply) · #16 opened about 1 year ago by mrfakename
Change to sdpa (2 replies) · #14 opened about 1 year ago by wukaixingxp