#74 · Your request to access this repo has been rejected by the repo's authors. · 13 replies · opened almost 2 years ago by Hoo1196
#73 · Changing "eos_token" to "eot_id" fixes the overflow of model responses, at least when using the Messages API. (👍 1) · 1 reply · opened almost 2 years ago by myonlyeye
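The fix described in #73 amounts to treating `<|eot_id|>` (Llama-3's end-of-turn token) as an additional stop token, alongside `<|end_of_text|>`. A minimal sketch of the idea, assuming the published Llama-3 token ids (`<|end_of_text|>` = 128001, `<|eot_id|>` = 128009) and a plain `generation_config.json`-style dict rather than any particular serving stack:

```python
# Sketch: extend the stop-token list so generation halts at <|eot_id|>
# (end of turn) and not only at <|end_of_text|>.
# The token ids below are the published Llama-3 ids; treat them as assumptions.
END_OF_TEXT_ID = 128001  # <|end_of_text|>
EOT_ID = 128009          # <|eot_id|>

generation_config = {"eos_token_id": END_OF_TEXT_ID}  # stock config

# Accept either token as a stop signal during generation.
generation_config["eos_token_id"] = [END_OF_TEXT_ID, EOT_ID]
print(generation_config["eos_token_id"])  # [128001, 128009]
```

Without the second id, the model keeps sampling past the end of its turn, which is the "overflow" behavior several threads below describe.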
#72 · How to fine-tune Llama-3 8B Instruct. (➕👀 2) · 8 replies · opened almost 2 years ago by elysiia
#71 · Update config.json · 1 reply · opened almost 2 years ago by ArthurZ
#70 · The request to access the repo was sent several days ago; why hasn't it been approved yet? · 7 replies · opened almost 2 years ago by water-cui
#69 · AttributeError: type object 'AttentionMaskConverter' has no attribute '_ignore_causal_mask_sdpa' · 4 replies · opened almost 2 years ago by tianke0711
#68 · Access granted but still not working. · 10 replies · opened almost 2 years ago by adityar23
#66 · Uncaught (in promise) SyntaxError: Unexpected token 'E', "Expected r"... is not valid JSON (🤯 1) · 5 replies · opened almost 2 years ago by sxmss1
#64 · Llama responses are broken during conversation (👍🔥 2) · 1 reply · opened almost 2 years ago by gusakovskyi
#63 · For using the model · opened almost 2 years ago by yogeshm
#59 · Access Denied · opened almost 2 years ago by Jerry-hyl
#58 · Not outputting <|eot_id|> on SageMaker · opened almost 2 years ago by zhengsj
#57 · Update README.md · opened almost 2 years ago by inuwamobarak
#56 · Batched inference on multiple GPUs (🔥👍 14) · 4 replies · opened almost 2 years ago by d-i-o-n
#55 · Badly encoded tokens / mojibake · 2 replies · opened almost 2 years ago by muchanem
#51 · Denied permission to download (👍 1) · 11 replies · opened about 2 years ago by TimPine
#50 · Request to access is still pending review · 31 replies · opened about 2 years ago by Hoo1196
#49 · mlx_lm.server gives wonky answers · 1 reply · opened about 2 years ago by conleysa
#47 · Tokenizer mismatch all the time · 2 replies · opened about 2 years ago by tian9
#46 · Could anyone tell me how to set the prompt template when using the model in PyCharm with Transformers? · 1 reply · opened about 2 years ago by LAKSERS
#44 · Instruct format? · 3 replies · opened about 2 years ago by m-conrad-202
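The instruct-format questions above (#46, #44) are about Llama-3's conversation template, which the model card documents as a sequence of header-delimited turns. A self-contained sketch that builds the prompt string by hand so the special tokens are visible (hand-rolled for illustration; in practice the tokenizer's chat template does this):

```python
# Llama-3 Instruct conversation format, built by hand to expose the
# special tokens. Each turn is a <|start_header_id|>role<|end_header_id|>
# header, a blank line, the content, then <|eot_id|> (end of turn).
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The trailing assistant header, with no content after it, is what cues the model to generate the reply; the model should then close its own turn with `<|eot_id|>`.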
#39 · Quantization support for MPS (👍 6) · 5 replies · opened about 2 years ago by tonimelisma
#38 · `meta-llama/Meta-Llama-3-8B-Instruct` model with SageMaker · 1 reply · opened about 2 years ago by aak7912
#37 · Problem with the tokenizer (👍 8) · 2 replies · opened about 2 years ago by Douedos
#36 · How to output an answer without side chatter (👍 1) · 8 replies · opened about 2 years ago by Gerald001
#35 · ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. · 12 replies · opened about 2 years ago by madhurjindal
#33 · Does Instruct need add_generation_prompt? · 1 reply · opened about 2 years ago by bdambrosio
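On #33: for the Instruct model, `add_generation_prompt=True` matters, because it appends the empty assistant header that cues the model to answer rather than continue the user's turn. A hand-built illustration of what the flag changes (assumption-level sketch of the template's output, not the real `tokenizer.apply_chat_template` implementation):

```python
# What add_generation_prompt changes, shown with hand-built strings.
# Without the trailing assistant header, the model may ramble on as the
# user; with it, generation starts cleanly as the assistant's reply.
turn = "<|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|>"
without_prompt = "<|begin_of_text|>" + turn
with_prompt = without_prompt + "<|start_header_id|>assistant<|end_header_id|>\n\n"
print(with_prompt.endswith("<|end_header_id|>\n\n"))  # True
```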
#32 · Error while downloading the model · 3 replies · opened about 2 years ago by amarnadh1998
#30 · Garbage responses · 2 replies · opened about 2 years ago by RainmakerP
#29 · GPU requirements · 10 replies · opened about 2 years ago by Gerald001
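The GPU-requirements question in #29 mostly comes down to weight-memory arithmetic. A back-of-the-envelope sketch for an ~8B-parameter model; KV cache and activation overhead are not included, so treat these as lower bounds:

```python
# Rough weight-memory estimate for an ~8B-parameter model at common
# precisions. Real usage adds KV-cache and activation overhead on top.
params = 8.0e9
for name, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.1f} GiB of weights")
```

This is why full-precision inference wants a ~24 GB card, while 4-bit quantization fits comfortably on much smaller GPUs (and is also what makes CPU inference, asked about in #28 below, feasible at all).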
#28 · Can I run it on CPU? · 5 replies · opened about 2 years ago by aljbali
#24 · ChatLLM.cpp fully supports Llama-3 now · opened about 2 years ago by J22
#23 · Transformers pipeline update, please (➕ 2) · opened about 2 years ago by ip210
#22 · The best 8B on the planet right now. PERIOD! · 2 replies · opened about 2 years ago by cyberneticos
#20 · Is it really good? · 5 replies · opened about 2 years ago by urtuuuu
#19 · The result for Llama 2 13B on GSM-8K (8-shot, CoT) is 77.4, which seems incorrect. (🤝 1) · 2 replies · opened about 2 years ago by Hi-archer
#17 · How is it possible to create images with Llama 3? (👍 1) · 6 replies · opened about 2 years ago by robinaicol
#16 · OMG, insomnia in the community · opened about 2 years ago by Languido
#15 · It's like Christmas without December! 🌲🎁🤖 (❤️ 1) · opened about 2 years ago by Joseph717171
#14 · What is the conversation template? (👍 4) · 8 replies · opened about 2 years ago by aeminkocal
#13 · Update numbering format of Prohibited Uses · opened about 2 years ago by BallisticAI
#12 · Max output tokens? · 4 replies · opened about 2 years ago by stri8ted
#3 · I AM READYYYYYY (🚀 4) · 2 replies · opened about 2 years ago by 10100101j
#2 · Non-English language capabilities (👍 3) · 6 replies · opened about 2 years ago by oliviermills