- In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss
  Paper • 2402.10790 • Published • 42
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters
  Paper • 2408.03314 • Published • 63
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
  Paper • 2403.09629 • Published • 79
Gabriel Pendl
jompaaa
AI & ML interests
None yet
Recent Activity
liked a model 4 minutes ago
zai-org/GLM-5.1
liked a model 6 days ago
microsoft/harrier-oss-v1-27b
liked a model 7 days ago
jinaai/jina-embeddings-v4