Olaf Kowalsky
Olafangensan
AI & ML interests
None yet
Recent Activity
new activity about 1 month ago: Olafangensan/GLM-4.7-Flash-heretic-GGUF (Heretic settings)
new activity 3 months ago: DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF (Oh hey, that's me!)
updated a model 3 months ago: Olafangensan/GLM-4.7-Flash-heretic-GGUF

Organizations
None yet
Heretic settings
2
#1 opened about 1 month ago by wakari
Oh hey, that's me!
5
1
#1 opened 3 months ago by Olafangensan
HyperNova-60B.IQ4_XS crashing on newest llama.cpp
#1 opened 3 months ago by Olafangensan
How to skip the thinking?
1
#2 opened 8 months ago by Olafangensan
Why are the file sizes so similar?
1
#1 opened 8 months ago by Olafangensan
Is the prompt format correct?
3
#1 opened 9 months ago by Olafangensan
REALLY slow with flash attention and quantized cache.
7
#2 opened about 1 year ago by Olafangensan
This model is SUCH A TRYHARD
🧠 2
3
#1 opened about 1 year ago by Olafangensan
recommended generation parameters
7
#5 opened about 1 year ago by erichartford
Long output issues
1
#1 opened over 1 year ago by Olafangensan
Feedback
11
#1 opened over 1 year ago by xxx777xxxASD
Better performance with ChatML for novel writing.
#6 opened over 1 year ago by Olafangensan
Feedback
47
#1 opened over 1 year ago by MarinaraSpaghetti
GGUF quants version?
1
2
#1 opened over 1 year ago by celsowm
Feedback
14
#1 opened over 1 year ago by MarinaraSpaghetti
eos_token_id error in oobabooga
6
#1 opened over 1 year ago by Olafangensan
Wrong position embedding size?
1
#1 opened almost 2 years ago by lanking
Unexpected Behaviors
7
#1 opened over 1 year ago by Olafangensan
KoboldCPP settings? Preset?
1
#2 opened almost 2 years ago by Olafangensan
Finally, an instruct fine tune!
❤️ 1
#9 opened almost 2 years ago by Olafangensan