| column | dtype | min | max |
| --- | --- | --- | --- |
| publishedAt | timestamp[ns] (date) | 2023-02-13 12:55:54 | 2026-04-15 20:00:00 |
| title | string (length) | 6 | 206 |
| thumbnail | string (length) | 77 | 77 |
| numComments | int64 | 0 | 143 |
| submittedBy | dict | | |
| isAuthorParticipating | bool | 2 classes | |
| mediaUrls | list (length) | 0 | 15 |
| paper_id | string (length) | 10 | 10 |
| paper_authors | list (length) | 1 | 3.3k |
| paper_publishedAt | timestamp[ns] (date) | 2023-02-13 17:55:54 | 2026-04-16 00:00:00 |
| paper_title | string (length) | 6 | 206 |
| paper_summary | string (length) | 165 | 1.92k |
| paper_upvotes | int64 | 0 | 673 |
| paper_discussionId | string (length) | 24 | 24 |
| paper_projectPage | string (length) | 15 | 247 |
| paper_githubRepo | string (length) | 25 | 132 |
publishedAt: 2025-02-20T22:33:22.039000
title: SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
thumbnail: https://cdn-thumbnails.h…s/2502.14786.png
numComments: 7
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14786
paper_authors: [ { "_id": "67b7ed0d58f6b70b18dda7b4", "hidden": false, "name": "Michael Tschannen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:23:44.125Z", "user": { "_id": "6489893e1ec8356ba5bb9777", "avatarUrl": "/avatars/54354c1e5774cadd1d83d42054e9d96b.svg", "full...
paper_publishedAt: 2025-02-20T18:08:29
paper_title: SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
paper_summary: We introduce SigLIP 2, a family of new multilingual vision-language encoders that build on the success of the original SigLIP. In this second iteration, we extend the original image-text training objective with several prior, independently developed techniques into a unified recipe -- this includes captioning-based pre...
paper_upvotes: 124
paper_discussionId: 67b7ed0e58f6b70b18dda7f4
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T22:30:51.542000
title: RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers
thumbnail: https://cdn-thumbnails.h…s/2502.14377.png
numComments: 2
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14377
paper_authors: [ { "_id": "67b7f350357c2729ac216494", "hidden": false, "name": "Ke Cao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:08:00.737Z", "user": { "_id": "66e4077369d1083dd97c7cd8", "avatarUrl": "/avatars/0dad41e3e2f38f89b7b21c12d673f432.svg", "fullname": "Ke ...
paper_publishedAt: 2025-02-20T09:10:05
paper_title: RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers
paper_summary: The Diffusion Transformer plays a pivotal role in advancing text-to-image and text-to-video generation, owing primarily to its inherent scalability. However, existing controlled diffusion transformer methods incur significant parameter and computational overheads and suffer from inefficient resource allocation due to t...
paper_upvotes: 12
paper_discussionId: 67b7f354357c2729ac216582
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T22:19:05.902000
title: Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning
thumbnail: https://cdn-thumbnails.h…s/2502.14768.png
numComments: 5
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14768
paper_authors: [ { "_id": "67b7f08c357c2729ac20a81b", "hidden": false, "name": "Tian Xie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f08c357c2729ac20a81c", "hidden": false, "name": "Zitian Gao", "status": "admin_assigned", "statusLastChangedAt": "2025-...
paper_publishedAt: 2025-02-20T17:49:26
paper_title: Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning
paper_summary: Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in large reasoning models. To analyze reasoning dynamics, we use synthetic logic puzzles as training data due to their controllable complexity and straightforward answer verification. We make some key technical co...
paper_upvotes: 44
paper_discussionId: 67b7f08e357c2729ac20a88f
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T22:15:33.133000
title: SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
thumbnail: https://cdn-thumbnails.h…s/2502.14739.png
numComments: 10
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14739
paper_authors: [ { "_id": "67b7efc26348a1df80a8ae53", "hidden": false, "name": "M-A-P Team", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae54", "hidden": false, "name": "Xinrun Du", "status": "claimed_verified", "statusLastChangedAt": "20...
paper_publishedAt: 2025-02-20T17:05:58
paper_title: SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
paper_summary: Large language models (LLMs) have demonstrated remarkable proficiency in mainstream academic disciplines such as mathematics, physics, and computer science. However, human knowledge encompasses over 200 specialized disciplines, far exceeding the scope of existing benchmarks. The capabilities of LLMs in many of these sp...
paper_upvotes: 94
paper_discussionId: 67b7efc66348a1df80a8afc8
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T22:11:45.130000
title: AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO
thumbnail: https://cdn-thumbnails.h…s/2502.14669.png
numComments: 2
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14669
paper_authors: [ { "_id": "67b7eeddaf9f1b1bd95b878b", "hidden": false, "name": "Alan Dao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:03:59.165Z", "user": { "_id": "62d7b2339b629105a5d6888a", "avatarUrl": "/avatars/c3f164fde6b8f9a671890e08ce8a3e75.svg", "fullname": "A...
paper_publishedAt: 2025-02-20T16:05:18
paper_title: AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO
paper_summary: Large Language Models (LLMs) have demonstrated impressive capabilities in language processing, yet they often struggle with tasks requiring genuine visual spatial reasoning. In this paper, we introduce a novel two-stage training framework designed to equip standard LLMs with visual reasoning abilities for maze navigati...
paper_upvotes: 11
paper_discussionId: 67b7eeddaf9f1b1bd95b87c8
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T22:08:38.225000
title: MLGym: A New Framework and Benchmark for Advancing AI Research Agents
thumbnail: https://cdn-thumbnails.h…s/2502.14499.png
numComments: 3
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14499
paper_authors: [ { "_id": "67b7ee1dfedfe971271dcca0", "hidden": false, "name": "Deepak Nathani", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-21T07:20:46.836Z", "user": { "_id": "6114c9fae7a2566ae7d1a1a7", "avatarUrl": "/avatars/c71ab1850322fcf5ef239cb8d31cb137.svg", "fu...
paper_publishedAt: 2025-02-20T12:28:23
paper_title: MLGym: A New Framework and Benchmark for Advancing AI Research Agents
paper_summary: We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents. MLGym-bench consists of 13 divers...
paper_upvotes: 171
paper_discussionId: 67b7ee1ffedfe971271dcd3a
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T22:04:42.635000
title: S*: Test Time Scaling for Code Generation
thumbnail: https://cdn-thumbnails.h…s/2502.14382.png
numComments: 3
submittedBy: { "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14382
paper_authors: [ { "_id": "67b7ed3e58f6b70b18ddb4bc", "hidden": false, "name": "Dacheng Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:13.558Z", "user": { "_id": "63715b25ffc0489ed7d1f415", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63715b25f...
paper_publishedAt: 2025-02-20T09:18:53
paper_title: S*: Test Time Scaling for Code Generation
paper_summary: Increasing test-time compute for LLMs shows promise across domains but remains underexplored in code generation, despite extensive study in math. In this paper, we propose S*, the first hybrid test-time scaling framework that substantially improves the coverage and selection accuracy of generated code. S* extends the e...
paper_upvotes: 59
paper_discussionId: 67b7ed3f58f6b70b18ddb510
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T21:25:09.725000
title: On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective
thumbnail: https://cdn-thumbnails.h…s/2502.14296.png
numComments: 2
submittedBy: { "_id": "639d94ab7145123e0d44e48a", "avatarUrl": "/avatars/5bb6a65b306d1383c4a8bcd9334b470a.svg", "followerCount": 2, "fullname": "Yue Huang", "isHf": false, "isMod": false, "isPro": false, "name": "HowieHwong", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14296
paper_authors: [ { "_id": "67b7e371f17ca6989faa9884", "hidden": false, "name": "Yue Huang", "status": "extracted_pending", "statusLastChangedAt": "2025-02-21T02:22:45.907Z", "user": { "_id": "639d94ab7145123e0d44e48a", "avatarUrl": "/avatars/5bb6a65b306d1383c4a8bcd9334b470a.svg", "fullname"...
paper_publishedAt: 2025-02-20T06:20:36
paper_title: On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective
paper_summary: Generative Foundation Models (GenFMs) have emerged as transformative tools. However, their widespread adoption raises critical concerns regarding trustworthiness across dimensions. This paper presents a comprehensive framework to address these challenges through three key contributions. First, we systematically review ...
paper_upvotes: 45
paper_discussionId: 67b7e375f17ca6989faa9a28
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T21:13:28.792000
title: Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above
thumbnail: https://cdn-thumbnails.h…s/2502.14127.png
numComments: 2
submittedBy: { "_id": "62a3f93fe2b7740fe2a94c86", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3f93fe2b7740fe2a94c86/ZiaPqiVqXI2ANIyWQY_hT.png", "followerCount": 6, "fullname": "Nishant Balepur", "isHf": false, "isMod": false, "isPro": false, "name": "nbalepur", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14127
paper_authors: [ { "_id": "67b7e12b92b9b5b8184c6580", "hidden": false, "name": "Nishant Balepur", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:02.019Z", "user": { "_id": "62a3f93fe2b7740fe2a94c86", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62...
paper_publishedAt: 2025-02-19T22:11:52
paper_title: Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above
paper_summary: Multiple choice question answering (MCQA) is popular for LLM evaluation due to its simplicity and human-like testing, but we argue for its reform. We first reveal flaws in MCQA's format, as it struggles to: 1) test generation/subjectivity; 2) match LLM use cases; and 3) fully test knowledge. We instead advocate for gen...
paper_upvotes: 2
paper_discussionId: 67b7e12c92b9b5b8184c65a5
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T16:00:25.426000
title: REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation
thumbnail: https://cdn-thumbnails.h…s/2502.13270.png
numComments: 2
submittedBy: { "_id": "6142ec5a7215c6d505bafd4e", "avatarUrl": "/avatars/ae0387b672435c5a4cf16ff6764ce597.svg", "followerCount": null, "fullname": "Dong-Ho Lee", "isHf": false, "isMod": false, "isPro": false, "name": "danny911kr", "type": "user" }
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/6142ec5a7215c6d505bafd4e/8ZXPnL7UdgpvHkiP0HHDI.png" ]
paper_id: 2502.13270
paper_authors: [ { "_id": "67b7975d10a9714460c03882", "hidden": false, "name": "Dong-Ho Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:30.731Z", "user": { "_id": "6142ec5a7215c6d505bafd4e", "avatarUrl": "/avatars/ae0387b672435c5a4cf16ff6764ce597.svg", "fullname...
paper_publishedAt: 2025-02-18T20:29:01
paper_title: REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation
paper_summary: Long-term, open-domain dialogue capabilities are essential for chatbots aiming to recall past interactions and demonstrate emotional intelligence (EI). Yet, most existing research relies on synthetic, LLM-generated data, leaving open questions about real-world conversational patterns. To address this gap, we introduce ...
paper_upvotes: 6
paper_discussionId: 67b7975e10a9714460c038bb
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T14:34:52.849000
title: From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions
thumbnail: https://cdn-thumbnails.h…s/2502.13791.png
numComments: 3
submittedBy: { "_id": "62645f88c39850dc093d6105", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1650745211725-noauth.png", "followerCount": 51, "fullname": "Mohammed Hamdy", "isHf": false, "isMod": false, "isPro": false, "name": "mmhamdy", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13791
paper_authors: [ { "_id": "67b7838bb41e5f760f8bd1b0", "hidden": false, "name": "Nathanaël Carraz Rakotonirina", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T15:51:38.471Z", "user": { "_id": "6195d3199b7166aedc74247f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/product...
paper_publishedAt: 2025-02-19T14:58:04
paper_title: From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions
paper_summary: Large Language Models (LLMs) are increasingly used in working environments for a wide range of tasks, excelling at solving individual problems in isolation. However, are they also able to effectively collaborate over long-term interactions? To investigate this, we introduce MemoryCode, a synthetic multi-session dataset...
paper_upvotes: 5
paper_discussionId: 67b7838cb41e5f760f8bd209
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T13:47:47.134000
title: Judging the Judges: A Collection of LLM-Generated Relevance Judgements
thumbnail: https://cdn-thumbnails.h…s/2502.13908.png
numComments: 2
submittedBy: { "_id": "64108fc514215c0775e13f5e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64108fc514215c0775e13f5e/pHWr8TlBnYrulo2owIrrv.jpeg", "followerCount": null, "fullname": "Hossein A. (Saeed) Rahmani", "isHf": false, "isMod": false, "isPro": false, "name": "rahmanidashti", "ty...
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13908
paper_authors: [ { "_id": "67b75ce1fedef65ff99cf5f8", "hidden": false, "name": "Hossein A. Rahmani", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T16:57:36.417Z", "user": { "_id": "64108fc514215c0775e13f5e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads...
paper_publishedAt: 2025-02-19T17:40:32
paper_title: Judging the Judges: A Collection of LLM-Generated Relevance Judgements
paper_summary: Using Large Language Models (LLMs) for relevance assessments offers promising opportunities to improve Information Retrieval (IR), Natural Language Processing (NLP), and related fields. Indeed, LLMs hold the promise of allowing IR experimenters to build evaluation collections with a fraction of the manual human labor c...
paper_upvotes: 4
paper_discussionId: 67b75ce2fedef65ff99cf623
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T12:26:53.898000
title: MMTEB: Massive Multilingual Text Embedding Benchmark
thumbnail: https://cdn-thumbnails.h…s/2502.13595.png
numComments: 3
submittedBy: { "_id": "5f1eb362eec0ad2a071ad6e2", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f1eb362eec0ad2a071ad6e2/IXMYkYKuTwn6kBdWnQeeY.png", "followerCount": 120, "fullname": "Niklas Muennighoff", "isHf": false, "isMod": false, "isPro": false, "name": "Muennighoff", "type": "user" ...
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13595
paper_authors: [ { "_id": "67b6fa9cb544aa153178a60b", "hidden": false, "name": "Kenneth Enevoldsen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:52:45.751Z", "user": { "_id": "5ff5943752c26e9bc240bada", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads...
paper_publishedAt: 2025-02-19T10:13:43
paper_title: MMTEB: Massive Multilingual Text Embedding Benchmark
paper_summary: Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion o...
paper_upvotes: 31
paper_discussionId: 67b6fa9db544aa153178a69c
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T12:23:27.067000
title: AIDE: AI-Driven Exploration in the Space of Code
thumbnail: https://cdn-thumbnails.h…s/2502.13138.png
numComments: 6
submittedBy: { "_id": "65f7927e7bc58032aa5bda58", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65f7927e7bc58032aa5bda58/JxUSj-J7YBwtgj6rqQtjn.jpeg", "followerCount": null, "fullname": "Dex Dixing Xu", "isHf": false, "isMod": false, "isPro": false, "name": "dexhunter", "type": "user" }
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/65f7927e7bc58032aa5bda58/bkhW4LYUeFqT9_aqPd3Om.jpeg" ]
paper_id: 2502.13138
paper_authors: [ { "_id": "67b6e0829b29983879ad2312", "hidden": false, "name": "Zhengyao Jiang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:38:24.557Z", "user": { "_id": "630384837b50dd9d0a3328dc", "avatarUrl": "/avatars/17097a93ef403592bc07c0ff6712faf3.svg", "fullnam...
paper_publishedAt: 2025-02-18T18:57:21
paper_title: AIDE: AI-Driven Exploration in the Space of Code
paper_summary: Machine learning, the foundation of modern artificial intelligence, has driven innovations that have fundamentally transformed the world. Yet, behind advancements lies a complex and often tedious process requiring labor and compute intensive iteration and experimentation. Engineers and scientists developing machine lea...
paper_upvotes: 7
paper_discussionId: 67b6e0839b29983879ad2346
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T12:09:53.761000
title: MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching
thumbnail: https://cdn-thumbnails.h…s/2502.12852.png
numComments: 2
submittedBy: { "_id": "64c8c2d87d0ea4e7f12995c6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c8c2d87d0ea4e7f12995c6/h8eWJrz8kqavemy8vQ2NK.jpeg", "followerCount": 3, "fullname": "Fabian David Schmidt", "isHf": false, "isMod": false, "isPro": false, "name": "fdschmidt93", "type": "user"...
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.12852
paper_authors: [ { "_id": "67b5b31f5a17526b55c3ccde", "hidden": false, "name": "Fabian David Schmidt", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:53:15.852Z", "user": { "_id": "64c8c2d87d0ea4e7f12995c6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploa...
paper_publishedAt: 2025-02-18T13:40:05
paper_title: MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching
paper_summary: Existing multilingual vision-language (VL) benchmarks often only cover a handful of languages. Consequently, evaluations of large vision-language models (LVLMs) predominantly target high-resource languages, underscoring the need for evaluation data for low-resource languages. To address this limitation, we introduce MV...
paper_upvotes: 3
paper_discussionId: 67b5b3205a17526b55c3cd40
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T12:07:02.880000
title: Reducing Hallucinations in Language Model-based SPARQL Query Generation Using Post-Generation Memory Retrieval
thumbnail: https://cdn-thumbnails.h…s/2502.13369.png
numComments: 2
submittedBy: { "_id": "63e972f1ccae1fe5c6211759", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e972f1ccae1fe5c6211759/AfKPgMdAraUtvbtJpoHFY.jpeg", "followerCount": 2, "fullname": "Luis Lara", "isHf": false, "isMod": false, "isPro": false, "name": "ludolara", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13369
paper_authors: [ { "_id": "67b7610afedfe97127f75374", "hidden": false, "name": "Aditya Sharma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:37:33.974Z", "user": { "_id": "66d959e4fb6d15635f2b9d76", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth...
paper_publishedAt: 2025-02-19T02:08:13
paper_title: Reducing Hallucinations in Language Model-based SPARQL Query Generation Using Post-Generation Memory Retrieval
paper_summary: The ability to generate SPARQL queries from natural language questions is crucial for ensuring efficient and accurate retrieval of structured data from knowledge graphs (KG). While large language models (LLMs) have been widely adopted for SPARQL query generation, they are often susceptible to hallucinations and out-of-...
paper_upvotes: 2
paper_discussionId: 67b7610bfedfe97127f7539c
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T10:53:49.049000
title: High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion
thumbnail: https://cdn-thumbnails.h…s/2502.12752.png
numComments: 2
submittedBy: { "_id": "657dc1576dc01435cd9029d8", "avatarUrl": "/avatars/3bba11ac7659fce61aeaedf40e2057a8.svg", "followerCount": 2, "fullname": "Xiang Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "XiangZ", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.12752
paper_authors: [ { "_id": "67b74fbdbb87b88059a9c5d3", "hidden": false, "name": "Xiang Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T16:06:01.193Z", "user": { "_id": "657dc1576dc01435cd9029d8", "avatarUrl": "/avatars/3bba11ac7659fce61aeaedf40e2057a8.svg", "fullname...
paper_publishedAt: 2025-02-18T11:13:06
paper_title: High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion
paper_summary: Despite recent advances in Novel View Synthesis (NVS), generating high-fidelity views from single or sparse observations remains a significant challenge. Existing splatting-based approaches often produce distorted geometry due to splatting errors. While diffusion-based methods leverage rich 3D priors to achieve improve...
paper_upvotes: 3
paper_discussionId: 67b74fc7bb87b88059a9c75d
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T10:46:55.281000
title: TESS 2: A Large-Scale Generalist Diffusion Language Model
thumbnail: https://cdn-thumbnails.h…s/2502.13917.png
numComments: 3
submittedBy: { "_id": "62608fc2ffe8827cb1d89f9f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png", "followerCount": 11, "fullname": "Hamish Ivison", "isHf": false, "isMod": false, "isPro": false, "name": "hamishivi", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13917
paper_authors: [ { "_id": "67b698422c8b2ef925e03f4f", "hidden": false, "name": "Jaesung Tae", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b698422c8b2ef925e03f50", "hidden": false, "name": "Hamish Ivison", "status": "extracted_confirmed", "statusLastChanged...
paper_publishedAt: 2025-02-19T17:50:31
paper_title: TESS 2: A Large-Scale Generalist Diffusion Language Model
paper_summary: We introduce TESS 2, a general instruction-following diffusion language model that outperforms contemporary instruction-tuned diffusion models, as well as matches and sometimes exceeds strong autoregressive (AR) models. We train TESS 2 by first adapting a strong AR model via continued pretraining with the usual cross-e...
paper_upvotes: 6
paper_discussionId: 67b698432c8b2ef925e03fb4
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T07:25:12.795000
title: REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
thumbnail: https://cdn-thumbnails.h…s/2502.13622.png
numComments: 2
submittedBy: { "_id": "6540fbf9cb7fffd683942b43", "avatarUrl": "/avatars/d4a64fbde511d0949e1c339179586850.svg", "followerCount": 2, "fullname": "DongGeon Lee", "isHf": false, "isMod": false, "isPro": false, "name": "oneonlee", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13622
paper_authors: [ { "_id": "67b69cf4573aa8417aec103c", "hidden": false, "name": "DongGeon Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:55.480Z", "user": { "_id": "6540fbf9cb7fffd683942b43", "avatarUrl": "/avatars/d4a64fbde511d0949e1c339179586850.svg", "fullnam...
paper_publishedAt: 2025-02-19T10:59:05
paper_title: REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
paper_summary: Hallucinations in large language model (LLM) outputs severely limit their reliability in knowledge-intensive tasks such as question answering. To address this challenge, we introduce REFIND (Retrieval-augmented Factuality hallucINation Detection), a novel framework that detects hallucinated spans within LLM outputs by ...
paper_upvotes: 4
paper_discussionId: 67b69cf7573aa8417aec10bf
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T06:45:40.507000
title: Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
thumbnail: https://cdn-thumbnails.h…s/2502.13533.png
numComments: 2
submittedBy: { "_id": "63fcb42c987f631186e554f2", "avatarUrl": "/avatars/5cf87e9fa21c088c0bd8577d651d01f6.svg", "followerCount": null, "fullname": "Jun Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "junzhang98", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13533
paper_authors: [ { "_id": "67b68f883cd5860d8597eace", "hidden": false, "name": "Jun Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:13.757Z", "user": { "_id": "63fcb42c987f631186e554f2", "avatarUrl": "/avatars/5cf87e9fa21c088c0bd8577d651d01f6.svg", "fullname":...
paper_publishedAt: 2025-02-19T08:39:15
paper_title: Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
paper_summary: Large Language Models (LLMs) have significantly advanced natural language processing with exceptional task generalization capabilities. Low-Rank Adaption (LoRA) offers a cost-effective fine-tuning solution, freezing the original model parameters and training only lightweight, low-rank adapter matrices. However, the mem...
paper_upvotes: 9
paper_discussionId: 67b68f8b3cd5860d8597eb97
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T05:38:39.430000
title: Noise May Contain Transferable Knowledge: Understanding Semi-supervised Heterogeneous Domain Adaptation from an Empirical Perspective
thumbnail: https://cdn-thumbnails.h…s/2502.13573.png
numComments: 2
submittedBy: { "_id": "668bb3b14c25c09b01815a55", "avatarUrl": "/avatars/5d46301dd5d7641e3da05b0ad560efee.svg", "followerCount": null, "fullname": "Yuan Yao", "isHf": false, "isMod": false, "isPro": false, "name": "yyyaoyuan", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13573
paper_authors: [ { "_id": "67b70459ea22340afaaf416f", "hidden": false, "name": "Yuan Yao", "status": "extracted_pending", "statusLastChangedAt": "2025-02-20T10:30:51.477Z", "user": { "_id": "668bb3b14c25c09b01815a55", "avatarUrl": "/avatars/5d46301dd5d7641e3da05b0ad560efee.svg", "fullname":...
paper_publishedAt: 2025-02-19T09:27:03
paper_title: Noise May Contain Transferable Knowledge: Understanding Semi-supervised Heterogeneous Domain Adaptation from an Empirical Perspective
paper_summary: Semi-supervised heterogeneous domain adaptation (SHDA) addresses learning across domains with distinct feature representations and distributions, where source samples are labeled while most target samples are unlabeled, with only a small fraction labeled. Moreover, there is no one-to-one correspondence between source a...
paper_upvotes: 2
paper_discussionId: 67b7045bea22340afaaf41fd
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T05:19:11.890000
title: GIMMICK -- Globally Inclusive Multimodal Multitask Cultural Knowledge Benchmarking
thumbnail: https://cdn-thumbnails.h…s/2502.13766.png
numComments: 2
submittedBy: { "_id": "62dfd54798815401141c47fe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62dfd54798815401141c47fe/ct2OA_K0Wwpshy8DCswxy.png", "followerCount": 6, "fullname": "Flo Schneider", "isHf": false, "isMod": false, "isPro": false, "name": "floschne", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13766
paper_authors: [ { "_id": "67b6faf5a96bf2b8ff8c235c", "hidden": false, "name": "Florian Schneider", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T10:49:37.443Z", "user": { "_id": "62dfd54798815401141c47fe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/...
paper_publishedAt: 2025-02-19T14:27:40
paper_title: GIMMICK -- Globally Inclusive Multimodal Multitask Cultural Knowledge Benchmarking
paper_summary: Large Vision-Language Models (LVLMs) have recently gained attention due to their distinctive performance and broad applicability. While it has been previously shown that their efficacy in usage scenarios involving non-Western contexts falls short, existing studies are limited in scope, covering just a narrow range of c...
paper_upvotes: 3
paper_discussionId: 67b6faf8a96bf2b8ff8c2422
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T04:32:22.011000
title: InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
thumbnail: https://cdn-thumbnails.h…s/2502.11573.png
numComments: 2
submittedBy: { "_id": "618c1ad1c74578e0a4a4d074", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/618c1ad1c74578e0a4a4d074/8u_AkeHt4d6xtQ8hzaffU.jpeg", "followerCount": 60, "fullname": "Drishti Sharma", "isHf": false, "isMod": false, "isPro": true, "name": "DrishtiSharma", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.11573
paper_authors: [ { "_id": "67b6f629d9da6999328e38f5", "hidden": false, "name": "Congkai Xie", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:12:49.025Z", "user": { "_id": "6719f1ad725123d503b5ef3c", "avatarUrl": "/avatars/08e1be1f4afa1b6e1501a15cdb786a47.svg", "fullname":...
paper_publishedAt: 2025-02-17T09:07:32
paper_title: InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
paper_summary: Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have made significant advancements in reasoning capabilities. However, they still face challenges such as high computational demands and privacy concerns. This paper focuses on developing efficient Small Language Models (SLMs) and Multimodal Smal...
paper_upvotes: 8
paper_discussionId: 67b6f62ad9da6999328e3955
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T03:56:54.121000
title: ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation
thumbnail: https://cdn-thumbnails.h…s/2502.13581.png
numComments: 3
submittedBy: { "_id": "64a62c2f500beb50968e5c9c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wfL3ojJmXqyzGUmCblPf4.jpeg", "followerCount": 5, "fullname": "Yupeng Hou", "isHf": false, "isMod": false, "isPro": false, "name": "hyp1231", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13581
paper_authors: [ { "_id": "67b6ee04412c9eccae5151f5", "hidden": false, "name": "Yupeng Hou", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:14.498Z", "user": { "_id": "64a62c2f500beb50968e5c9c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/...
paper_publishedAt: 2025-02-19T09:45:29
paper_title: ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation
paper_summary: Generative recommendation (GR) is an emerging paradigm where user actions are tokenized into discrete token patterns and autoregressively generated as predictions. However, existing GR models tokenize each action independently, assigning the same fixed tokens to identical actions across all sequences without considerin...
paper_upvotes: 5
paper_discussionId: 67b6ee04412c9eccae515223
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T02:40:09.567000
title: MoM: Linear Sequence Modeling with Mixture-of-Memories
thumbnail: https://cdn-thumbnails.h…s/2502.13685.png
numComments: 2
submittedBy: { "_id": "6246bb33da617c00b48e4d92", "avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg", "followerCount": 3, "fullname": "Weigao Sun", "isHf": false, "isMod": false, "isPro": false, "name": "weigao266", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13685
paper_authors: [ { "_id": "67b6dc1ba7567156c6547880", "hidden": false, "name": "Jusen Du", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:08:01.601Z", "user": { "_id": "65003e857804f04a163328d9", "avatarUrl": "/avatars/fe32150aabfde8d283b38ccebcf6982e.svg", "fullname": "J...
paper_publishedAt: 2025-02-19T12:53:55
paper_title: MoM: Linear Sequence Modeling with Mixture-of-Memories
paper_summary: Linear sequence modeling methods, such as linear attention, state space modeling, and linear RNNs, offer significant efficiency improvements by reducing the complexity of training and inference. However, these methods typically compress the entire input sequence into a single fixed-size memory state, which leads to sub...
paper_upvotes: 33
paper_discussionId: 67b6dc1ca7567156c65478b8
paper_projectPage: null
paper_githubRepo: https://github.com/OpenSparseLLMs/MoM
publishedAt: 2025-02-20T01:20:46.431000
title: Presumed Cultural Identity: How Names Shape LLM Responses
thumbnail: https://cdn-thumbnails.h…s/2502.11995.png
numComments: 2
submittedBy: { "_id": "60c50f18754747f54fa37114", "avatarUrl": "/avatars/648ae58b81806dbd93a68546666047e3.svg", "followerCount": 1, "fullname": "Siddhesh", "isHf": false, "isMod": false, "isPro": false, "name": "sidicity", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.11995
paper_authors: [ { "_id": "67b65bbe0d878eff1a6b111d", "hidden": false, "name": "Siddhesh Pawar", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:11:39.727Z", "user": { "_id": "661e2ac200798c2e33cc49a5", "avatarUrl": "/avatars/8e5e1672b36f86bb4ad7a7e22e8d4f4d.svg", "fullnam...
paper_publishedAt: 2025-02-17T16:35:15
paper_title: Presumed Cultural Identity: How Names Shape LLM Responses
paper_summary: Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important point of information for perso...
paper_upvotes: 10
paper_discussionId: 67b65bbf0d878eff1a6b1174
paper_projectPage: null
paper_githubRepo: null
publishedAt: 2025-02-20T01:07:44.785000
title: SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation
thumbnail: https://cdn-thumbnails.h…s/2502.13128.png
numComments: 2
submittedBy: { "_id": "64b4eec4faa3181a5eab9c46", "avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg", "followerCount": 16, "fullname": "Jiaqi Wang", "isHf": false, "isMod": false, "isPro": true, "name": "myownskyW7", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.13128
paper_authors: [ { "_id": "67b6c696e9b901edeaf320d5", "hidden": false, "name": "Zihan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:07:49.211Z", "user": { "_id": "65f33b1c9f7970ccc0234cbf", "avatarUrl": "/avatars/99fbab303912e3674663251c04279907.svg", "fullname": "...
paper_publishedAt: 2025-02-18T18:52:21
paper_title: SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation
paper_summary: Text-to-song generation, the task of creating vocals and accompaniment from textual inputs, poses significant challenges due to domain complexity and data scarcity. Existing approaches often employ multi-stage generation procedures, resulting in cumbersome training and inference pipelines. In this paper, we propose Son...
paper_upvotes: 37
paper_discussionId: 67b6c698e9b901edeaf321a7
paper_projectPage: null
paper_githubRepo: null
2025-02-19T23:54:57.669000
Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region
https://cdn-thumbnails.h…s/2502.13946.png
2
{ "_id": "631326d6289cf15634c52369", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631326d6289cf15634c52369/lmPWGHLsQ36H556cqcXjT.jpeg", "followerCount": 7, "fullname": "Cooper Leong", "isHf": false, "isMod": false, "isPro": false, "name": "cooperleong00", "type": "user" }
true
null
2502.13946
[ { "_id": "67b6b416b4ad845374143c31", "hidden": false, "name": "Chak Tou Leong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:25:12.631Z", "user": { "_id": "631326d6289cf15634c52369", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631...
2025-02-19T18:42:45
Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region
The safety alignment of large language models (LLMs) remains vulnerable, as their initial behavior can be easily jailbroken by even relatively simple attacks. Since infilling a fixed template between the input instruction and initial model output is a common practice for existing LLMs, we hypothesize that this template...
9
67b6b416b4ad845374143c5b
null
null
2025-02-19T23:35:06.194000
Qwen2.5-VL Technical Report
https://cdn-thumbnails.h…s/2502.13923.png
7
{ "_id": "63451cf0a05b51f7ded25505", "avatarUrl": "/avatars/dec4bbee4a82b773fc58dfc2dce9dbeb.svg", "followerCount": 14, "fullname": "shuai bai", "isHf": false, "isMod": false, "isPro": false, "name": "bluelike", "type": "user" }
true
null
2502.13923
[ { "_id": "67b6b0688b56622e70b9e83e", "hidden": false, "name": "Shuai Bai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T15:54:00.062Z", "user": { "_id": "63451cf0a05b51f7ded25505", "avatarUrl": "/avatars/dec4bbee4a82b773fc58dfc2dce9dbeb.svg", "fullname": "...
2025-02-19T18:00:14
Qwen2.5-VL Technical Report
We introduce Qwen2.5-VL, the latest flagship model of the Qwen vision-language series, which demonstrates significant advancements in both foundational capabilities and innovative functionalities. Qwen2.5-VL achieves a major leap forward in understanding and interacting with the world through enhanced visual recognition, p...
154
67b6b0688b56622e70b9e875
null
null
2025-02-19T23:34:43.424000
Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering
https://cdn-thumbnails.h…s/2502.13962.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2502.13962
[ { "_id": "67b691751f861500916ecd5d", "hidden": false, "name": "William Jurayj", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:09.674Z", "user": { "_id": "6372bc95c4267fd7cd77f4d0", "avatarUrl": "/avatars/17a24af68f45487e601687d777b352b6.svg", "fulln...
2025-02-19T18:58:31
Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering
Scaling the test-time compute of large language models has demonstrated impressive performance on reasoning benchmarks. However, existing evaluations of test-time scaling make the strong assumption that a reasoning system should always give an answer to any question provided. This overlooks concerns about whether a mod...
28
67b691761f861500916ecd8e
null
null
2025-02-19T23:31:36.410000
Thinking Preference Optimization
https://cdn-thumbnails.h…s/2502.13173.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.13173
[ { "_id": "67b6b014f7e569081326494f", "hidden": false, "name": "Wang Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b014f7e5690813264950", "hidden": false, "name": "Hongye Jin", "status": null, "statusLastChangedAt": null, "user":...
2025-02-17T19:56:21
Thinking Preference Optimization
Supervised Fine-Tuning (SFT) has been a go-to and effective method for enhancing long chain-of-thought (CoT) reasoning in relatively small LLMs by fine-tuning them with long CoT responses from larger LLMs. To continually improve reasoning abilities, we can either collect new high-quality long CoT reasoning SFT data or ...
17
67b6b015f7e56908132649a0
null
null
2025-02-19T23:18:32.647000
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
https://cdn-thumbnails.h…s/2502.12638.png
2
{ "_id": "6310a3cd531cc21f9e06de6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6310a3cd531cc21f9e06de6a/aTGMx3O41lUARK9s3dAik.jpeg", "followerCount": 3, "fullname": "Zhiyuan Liu", "isHf": false, "isMod": false, "isPro": false, "name": "acharkq", "type": "user" }
true
null
2502.12638
[ { "_id": "67b6acdb3a3df2f965e7af0b", "hidden": false, "name": "Zhiyuan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:43:04.070Z", "user": { "_id": "6310a3cd531cc21f9e06de6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6310a3cd...
2025-02-18T08:40:13
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
3D molecule generation is crucial for drug discovery and material design. While prior efforts focus on 3D diffusion models for their benefits in modeling continuous 3D conformers, they overlook the advantages of 1D SELFIES-based Language Models (LMs), which can generate 100% valid molecules and leverage the billion-sca...
8
67b6acdd3a3df2f965e7af85
null
null
2025-02-19T23:07:01.367000
AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence
https://cdn-thumbnails.h…s/2502.13943.png
2
{ "_id": "6529f79e802e3d1a4f8ec662", "avatarUrl": "/avatars/d05320c370a6497d8792ef5acb563dd5.svg", "followerCount": 2, "fullname": "Yuliang Liu", "isHf": false, "isMod": false, "isPro": false, "name": "yuliang03181", "type": "user" }
true
null
2502.13943
[ { "_id": "67b6a9a7c721bee91cac2888", "hidden": false, "name": "Yuliang Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:11:40.282Z", "user": { "_id": "6529f79e802e3d1a4f8ec662", "avatarUrl": "/avatars/d05320c370a6497d8792ef5acb563dd5.svg", "fullname":...
2025-02-19T18:35:55
AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence
Current approaches for training Process Reward Models (PRMs) often involve breaking down responses into multiple reasoning steps using rule-based techniques, such as using predefined placeholder tokens or setting the reasoning step's length to a fixed size. These approaches overlook the fact that specific words do no...
7
67b6a9a8c721bee91cac28e7
null
null
2025-02-19T22:57:23.298000
Craw4LLM: Efficient Web Crawling for LLM Pretraining
https://cdn-thumbnails.h…s/2502.13347.png
2
{ "_id": "6135eeeb5bc6ecdf86b60f0d", "avatarUrl": "/avatars/43cedcf20ab6b0801a662787400e1384.svg", "followerCount": 7, "fullname": "Shi Yu", "isHf": false, "isMod": false, "isPro": false, "name": "yushi", "type": "user" }
true
null
2502.13347
[ { "_id": "67b6a7e83ef3656c48f149b9", "hidden": false, "name": "Shi Yu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:47.487Z", "user": { "_id": "6135eeeb5bc6ecdf86b60f0d", "avatarUrl": "/avatars/43cedcf20ab6b0801a662787400e1384.svg", "fullname": "S...
2025-02-19T00:31:43
Craw4LLM: Efficient Web Crawling for LLM Pretraining
Web crawling is a main source of pretraining data for large language models (LLMs), but the majority of crawled web pages are discarded during pretraining due to low data quality. This paper presents Crawl4LLM, an efficient web crawling method that explores the web graph based on the preferences of LLM pretraining. Specifically, ...
27
67b6a7e93ef3656c48f149f1
null
null
2025-02-19T22:42:06.502000
Autellix: An Efficient Serving Engine for LLM Agents as General Programs
https://cdn-thumbnails.h…s/2502.13965.png
2
{ "_id": "654037be97949fd2304aab7f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654037be97949fd2304aab7f/2cSME81gcwYa2OTeVlq5Q.jpeg", "followerCount": 3, "fullname": "Michael Luo", "isHf": false, "isMod": false, "isPro": false, "name": "michaelzhiluo", "type": "user" }
true
null
2502.13965
[ { "_id": "67b6a3fa09841367596a1db5", "hidden": false, "name": "Michael Luo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:25:24.729Z", "user": { "_id": "654037be97949fd2304aab7f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654037...
2025-02-19T18:59:30
Autellix: An Efficient Serving Engine for LLM Agents as General Programs
Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing si...
18
67b6a3fb09841367596a1e06
null
null
2025-02-19T22:27:22.403000
SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering?
https://cdn-thumbnails.h…s/2502.13233.png
2
{ "_id": "64beb6b6140491ca9f803ebf", "avatarUrl": "/avatars/0daa2e813a13668b8b708cd8c12763d9.svg", "followerCount": null, "fullname": "Yucheng SHi", "isHf": false, "isMod": false, "isPro": false, "name": "YuchengShi", "type": "user" }
true
null
2502.13233
[ { "_id": "67b689aeba514d2c2c969289", "hidden": false, "name": "Yucheng Shi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:18.925Z", "user": { "_id": "64beb6b6140491ca9f803ebf", "avatarUrl": "/avatars/0daa2e813a13668b8b708cd8c12763d9.svg", "fullname...
2025-02-18T19:12:15
SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering?
Large Language Models (LLMs) have shown remarkable capabilities in general domains but often struggle with tasks requiring specialized knowledge. Conventional Retrieval-Augmented Generation (RAG) techniques typically retrieve external information from static knowledge bases, which can be outdated or incomplete, missing...
13
67b689aeba514d2c2c9692b9
null
null
2025-02-19T22:13:49.764000
RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning
https://cdn-thumbnails.h…s/2502.13144.png
2
{ "_id": "6536187bd34e9f02b9df1c3b", "avatarUrl": "/avatars/0b34d62868b93053b0a05062a018b5bd.svg", "followerCount": 1, "fullname": "Hao Gao", "isHf": false, "isMod": false, "isPro": false, "name": "Hao605", "type": "user" }
true
null
2502.13144
[ { "_id": "67b55c7fba22c1ddbb8d5746", "hidden": false, "name": "Hao Gao", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:48.944Z", "user": { "_id": "6536187bd34e9f02b9df1c3b", "avatarUrl": "/avatars/0b34d62868b93053b0a05062a018b5bd.svg", "fullname": "...
2025-02-18T18:59:21
RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning
Existing end-to-end autonomous driving (AD) algorithms typically follow the Imitation Learning (IL) paradigm, which faces challenges such as causal confusion and the open-loop gap. In this work, we establish a 3DGS-based closed-loop Reinforcement Learning (RL) training paradigm. By leveraging 3DGS techniques, we constr...
36
67b55c80ba22c1ddbb8d579c
null
null
2025-02-19T21:38:13.468000
Small Models Struggle to Learn from Strong Reasoners
https://cdn-thumbnails.h…s/2502.12143.png
6
{ "_id": "653df1323479e9ebbe3eb6cc", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653df1323479e9ebbe3eb6cc/K_g-r1iMRNKj99LXPuYF3.jpeg", "followerCount": 11, "fullname": "Zhangchen Xu", "isHf": false, "isMod": false, "isPro": true, "name": "flydust", "type": "user" }
true
null
2502.12143
[ { "_id": "67b4d05a9f8a8ab661450397", "hidden": false, "name": "Yuetai Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4d05a9f8a8ab661450398", "hidden": false, "name": "Xiang Yue", "status": null, "statusLastChangedAt": null, "user": ...
2025-02-17T18:56:15
Small Models Struggle to Learn from Strong Reasoners
Large language models (LLMs) excel in complex reasoning tasks, and distilling their reasoning capabilities into smaller models has shown promise. However, we uncover an interesting phenomenon, which we term the Small Model Learnability Gap: small models (≤3B parameters) do not consistently benefit from long chain-of-...
28
67b4d05b9f8a8ab6614503cb
null
null
2025-02-19T21:35:20.931000
LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
https://cdn-thumbnails.h…s/2502.13922.png
2
{ "_id": "645475e2548f22be59847604", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/645475e2548f22be59847604/EhSurrZ25u31qQ2TVXQXt.jpeg", "followerCount": 1, "fullname": "Chen", "isHf": false, "isMod": false, "isPro": false, "name": "Guanzheng", "type": "user" }
true
null
2502.13922
[ { "_id": "67b6948dbef24bad725b5d4b", "hidden": false, "name": "Guanzheng Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:08:30.816Z", "user": { "_id": "645475e2548f22be59847604", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64547...
2025-02-19T17:59:03
LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
Large Language Models (LLMs) have demonstrated remarkable capabilities through pretraining and alignment. However, superior short-context LLMs may underperform in long-context scenarios due to insufficient long-context alignment. This alignment process remains challenging due to the impracticality of human annotation f...
25
67b6948ebef24bad725b5d84
null
null
2025-02-19T20:37:51.607000
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
https://cdn-thumbnails.h…s/2502.12659.png
2
{ "_id": "64679a226192d39142245e5e", "avatarUrl": "/avatars/05abee0b6317f100923936ca2099e9eb.svg", "followerCount": 4, "fullname": "Xin Eric Wang", "isHf": false, "isMod": false, "isPro": false, "name": "xw-eric", "type": "user" }
false
null
2502.12659
[ { "_id": "67b68700ce3055c9c0fc2987", "hidden": false, "name": "Kaiwen Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc2988", "hidden": false, "name": "Chengzhi Liu", "status": null, "statusLastChangedAt": null, "us...
2025-02-18T09:06:07
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
The rapid development of large reasoning models, such as OpenAI-o3 and DeepSeek-R1, has led to significant improvements in complex reasoning over non-reasoning large language models (LLMs). However, their enhanced capabilities, combined with the open-source access of models like DeepSeek-R1, raise serious safety concer...
6
67b68701ce3055c9c0fc29e4
null
null
2025-02-19T18:20:05.946000
Scaling Autonomous Agents via Automatic Reward Modeling And Planning
https://cdn-thumbnails.h…s/2502.12130.png
2
{ "_id": "654e024de113b04ba5c71e2f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654e024de113b04ba5c71e2f/WH6S_gpQU6OXqDaiPpheK.jpeg", "followerCount": 1, "fullname": "Rui Sun", "isHf": false, "isMod": false, "isPro": false, "name": "ThreeSR", "type": "user" }
true
null
2502.12130
[ { "_id": "67b657d6a267b1a747a7fed6", "hidden": false, "name": "Zhenfang Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b657d6a267b1a747a7fed7", "hidden": false, "name": "Delin Chen", "status": null, "statusLastChangedAt": null, "us...
2025-02-17T18:49:25
Scaling Autonomous Agents via Automatic Reward Modeling And Planning
Large language models (LLMs) have demonstrated remarkable capabilities across a range of text-generation tasks. However, LLMs still struggle with problems requiring multi-step decision-making and environmental feedback, such as online shopping, scientific reasoning, and mathematical problem-solving. Unlike pure text da...
2
67b657d7a267b1a747a7ff1a
null
null
2025-02-19T13:39:32.672000
YOLOv12: Attention-Centric Real-Time Object Detectors
https://cdn-thumbnails.h…s/2502.12524.png
2
{ "_id": "5f1158120c833276f61f1a84", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg", "followerCount": 777, "fullname": "Niels Rogge", "isHf": true, "isMod": false, "isPro": false, "name": "nielsr", "type": "user" }
false
null
2502.12524
[ { "_id": "67b608ca13df25808fbc22ae", "hidden": false, "name": "Yunjie Tian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b608ca13df25808fbc22af", "hidden": false, "name": "Qixiang Ye", "status": null, "statusLastChangedAt": null, "user...
2025-02-18T04:20:14
YOLOv12: Attention-Centric Real-Time Object Detectors
Enhancing the network architecture of the YOLO framework has been crucial for a long time, but has focused on CNN-based improvements despite the proven superiority of attention mechanisms in modeling capabilities. This is because attention-based models cannot match the speed of CNN-based models. This paper proposes an ...
10
67b608cb13df25808fbc2308
null
null
2025-02-19T10:33:08.946000
Harnessing Vision Models for Time Series Analysis: A Survey
https://cdn-thumbnails.h…s/2502.08869.png
2
{ "_id": "67b5efbe38c175486e2869b9", "avatarUrl": "/avatars/64a698259033bb8ac324e57c557a9aa9.svg", "followerCount": null, "fullname": "Jingchao Ni", "isHf": false, "isMod": false, "isPro": false, "name": "nijingchao", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/67b5efbe38c175486e2869b9/iBIxlNXQX2KDabTTeqWL0.png", "https://cdn-uploads.huggingface.co/production/uploads/67b5efbe38c175486e2869b9/cGTQawzFrVI21iLfRjpFt.png", "https://cdn-uploads.huggingface.co/production/uploads/67b5efbe38c175486e2869b9/j-lNPZ3OqCUHj6vh...
2502.08869
[ { "_id": "67b5f3e30e7fed1190f29f80", "hidden": false, "name": "Jingchao Ni", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T15:14:31.563Z", "user": { "_id": "67b5efbe38c175486e2869b9", "avatarUrl": "/avatars/64a698259033bb8ac324e57c557a9aa9.svg", "fullname...
2025-02-13T00:42:11
Harnessing Vision Models for Time Series Analysis: A Survey
Time series analysis has witnessed the inspiring development from traditional autoregressive models, deep learning models, to recent Transformers and Large Language Models (LLMs). Efforts in leveraging vision models for time series analysis have also been made along the way but are less visible to the community due to ...
2
67b5f3e30e7fed1190f29fb7
null
null
2025-02-19T08:03:59.885000
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
https://cdn-thumbnails.h…s/2502.12929.png
2
{ "_id": "643837ef581e6bf0fa9c72f8", "avatarUrl": "/avatars/5b95d2509d1c7640d77a3405ebd53eaf.svg", "followerCount": null, "fullname": "Lakshmi Nair", "isHf": false, "isMod": false, "isPro": false, "name": "lnair", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/643837ef581e6bf0fa9c72f8/HhevzVLx8wGDy7sD0zSAj.png" ]
2502.12929
[ { "_id": "67b546dc2b2ec6908f00c771", "hidden": false, "name": "Lakshmi Nair", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:39.033Z", "user": { "_id": "643837ef581e6bf0fa9c72f8", "avatarUrl": "/avatars/5b95d2509d1c7640d77a3405ebd53eaf.svg", "fullnam...
2025-02-18T15:11:46
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic system for autonomously solving Machine Learni...
7
67b546dd2b2ec6908f00c7f6
null
null
2025-02-19T07:53:04.918000
Text2World: Benchmarking Large Language Models for Symbolic World Model Generation
https://cdn-thumbnails.h…s/2502.13092.png
2
{ "_id": "6237df4a5ab9df625fb70c1a", "avatarUrl": "/avatars/c5d1a52895cb6515f28019a8e7e3e855.svg", "followerCount": 1, "fullname": "Mengkang Hu", "isHf": false, "isMod": false, "isPro": false, "name": "MengkangHu", "type": "user" }
true
null
2502.13092
[ { "_id": "67b5473109afe1f3029835cb", "hidden": false, "name": "Mengkang Hu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:15.592Z", "user": { "_id": "6237df4a5ab9df625fb70c1a", "avatarUrl": "/avatars/c5d1a52895cb6515f28019a8e7e3e855.svg", "fullname...
2025-02-18T17:59:48
Text2World: Benchmarking Large Language Models for Symbolic World Model Generation
Recently, there has been growing interest in leveraging large language models (LLMs) to generate symbolic world models from textual descriptions. Although LLMs have been extensively explored in the context of world modeling, prior studies encountered several challenges, including evaluation randomness, dependence on in...
12
67b5473209afe1f302983600
null
null
2025-02-19T06:51:04.672000
Atom of Thoughts for Markov LLM Test-Time Scaling
https://cdn-thumbnails.h…s/2502.12018.png
3
{ "_id": "6402e8fb06c715b93407442d", "avatarUrl": "/avatars/12b67f0632be5a53b56d8a68586a7f98.svg", "followerCount": 2, "fullname": "Fengwei Teng", "isHf": false, "isMod": false, "isPro": false, "name": "leavendough", "type": "user" }
true
null
2502.12018
[ { "_id": "67b5c4ed85107d20148ae710", "hidden": false, "name": "Fengwei Teng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T11:49:34.612Z", "user": { "_id": "6402e8fb06c715b93407442d", "avatarUrl": "/avatars/12b67f0632be5a53b56d8a68586a7f98.svg", "fullnam...
2025-02-17T16:52:42
Atom of Thoughts for Markov LLM Test-Time Scaling
Large Language Models (LLMs) achieve superior performance through training-time scaling, and test-time scaling further enhances their capabilities by conducting effective reasoning during inference. However, as the scale of reasoning increases, existing test-time scaling methods suffer from accumulated historical infor...
15
67b5c4ee85107d20148ae73d
null
null
2025-02-19T06:13:51.101000
Eager Updates For Overlapped Communication and Computation in DiLoCo
https://cdn-thumbnails.h…s/2502.12996.png
2
{ "_id": "622792366303bf1dc304f49f", "avatarUrl": "/avatars/975c1cc3eb2f97cf8e848162056d5bea.svg", "followerCount": 4, "fullname": "Arthur Douillard", "isHf": false, "isMod": false, "isPro": false, "name": "ArthurDouillard", "type": "user" }
true
null
2502.12996
[ { "_id": "67b5bcd091132877cf330179", "hidden": false, "name": "Satyen Kale", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5bcd091132877cf33017a", "hidden": false, "name": "Arthur Douillard", "status": "admin_assigned", "statusLastChangedAt...
2025-02-18T16:16:14
Eager Updates For Overlapped Communication and Computation in DiLoCo
Distributed optimization methods such as DiLoCo have been shown to be effective in training very large models across multiple distributed workers, such as datacenters. These methods split updates into two parts: an inner optimization phase, where the workers independently execute multiple optimization steps on their ow...
7
67b5bcd191132877cf3301aa
null
null
2025-02-19T04:54:27.788000
FinMTEB: Finance Massive Text Embedding Benchmark
https://cdn-thumbnails.h…s/2502.10990.png
2
{ "_id": "647d834618274bce03013cc2", "avatarUrl": "/avatars/a95c7df96dc4fb6a96193f6dd5068227.svg", "followerCount": 2, "fullname": "yixuan", "isHf": false, "isMod": false, "isPro": true, "name": "yixuantt", "type": "user" }
true
null
2502.10990
[ { "_id": "67b3ee6c1e80a69e79c3155a", "hidden": false, "name": "Yixuan Tang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:50.969Z", "user": { "_id": "647d834618274bce03013cc2", "avatarUrl": "/avatars/a95c7df96dc4fb6a96193f6dd5068227.svg", "fullname...
2025-02-16T04:23:52
FinMTEB: Finance Massive Text Embedding Benchmark
Embedding models play a crucial role in representing and retrieving information across various NLP applications. Recent advances in large language models (LLMs) have further enhanced the performance of embedding models. While these models are often benchmarked on general-purpose datasets, real-world applications demand...
3
67b3ee6d1e80a69e79c3158f
null
null
2025-02-19T04:43:42.973000
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity
https://cdn-thumbnails.h…s/2502.13063.png
4
{ "_id": "639c6e978a34ed9a404c6a7b", "avatarUrl": "/avatars/c98ca8c9f9ed8509c2f1bb6aa994fd57.svg", "followerCount": 7, "fullname": "MIKHAIL BURTSEV", "isHf": false, "isMod": false, "isPro": false, "name": "mbur", "type": "user" }
true
null
2502.13063
[ { "_id": "67b5a7896f72266cb765e744", "hidden": false, "name": "Yuri Kuratov", "status": "extracted_pending", "statusLastChangedAt": "2025-02-19T09:42:34.422Z", "user": { "_id": "618b9540682ec1c38327e586", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/618b...
2025-02-18T17:08:45
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity
A range of recent works addresses the problem of compressing a sequence of tokens into a shorter sequence of real-valued vectors to be used as inputs instead of token embeddings or a key-value cache. These approaches reduce the amount of compute in existing language models. Despite relying on powerful models as...
64
67b5a78a6f72266cb765e779
null
null
2025-02-19T03:03:51.930000
You Do Not Fully Utilize Transformer's Representation Capacity
https://cdn-thumbnails.h…s/2502.09245.png
3
{ "_id": "63ed5676684767daecac6f8a", "avatarUrl": "/avatars/d0e4a715f9c3fb6d74c183bab751ec35.svg", "followerCount": 4, "fullname": "Yaroslav Aksenov", "isHf": false, "isMod": false, "isPro": false, "name": "yaraksen", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63ed5676684767daecac6f8a/tZDsnW0gjHoYCpbZ-wwJi.png" ]
2502.09245
[ { "_id": "67b57a993d4f319f1fa9424b", "hidden": false, "name": "Gleb Gerasimov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T10:10:30.547Z", "user": { "_id": "65db0871ab2f64915ce05e73", "avatarUrl": "/avatars/77e03f493196c5413cd2a02270e93660.svg", "fulln...
2025-02-13T12:00:50
You Do Not Fully Utilize Transformer's Representation Capacity
In contrast to RNNs, which compress previous tokens into a single hidden state, Transformers can attend to all previous tokens directly. However, standard Transformers only use representations from the immediately preceding layer. In this paper, we show that this design choice causes representation collapse and leads t...
34
67b57a9a3d4f319f1fa94274
null
null
2025-02-19T02:56:09.510000
Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey
https://cdn-thumbnails.h…s/2502.10708.png
2
{ "_id": "65407ba7a38390065750233f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg", "followerCount": 1, "fullname": "Zirui Song", "isHf": false, "isMod": false, "isPro": false, "name": "Ziruibest", "type": "user" }
true
null
2502.10708
[ { "_id": "67b58e32e972a2806a9a0451", "hidden": false, "name": "Zirui Song", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:38.943Z", "user": { "_id": "65407ba7a38390065750233f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba...
2025-02-15T07:43:43
Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey
Large Language Models (LLMs) have demonstrated remarkable success in various tasks such as natural language understanding, text summarization, and machine translation. However, their general-purpose nature often limits their effectiveness in domain-specific applications that require specialized knowledge, such as healt...
4
67b58e33e972a2806a9a04b8
null
null
2025-02-19T02:47:33.654000
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research
https://cdn-thumbnails.h…s/2502.12669.png
2
{ "_id": "63024676056ec3a2a8714b24", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg", "followerCount": 5, "fullname": "Xiang Liu", "isHf": false, "isMod": false, "isPro": false, "name": "Dominic789654", "type": "user" }
true
null
2502.12669
[ { "_id": "67b58c806e53744c2a373351", "hidden": false, "name": "Xiang Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:34:03.429Z", "user": { "_id": "63024676056ec3a2a8714b24", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436...
2025-02-18T09:19:24
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research
The rapid advancement of perovskite solar cells (PSCs) has led to an exponential growth in research publications, creating an urgent need for efficient knowledge management and reasoning systems in this domain. We present a comprehensive knowledge-enhanced system for PSCs that integrates three key components. First, we...
2
67b58c826e53744c2a3733c2
null
null
2025-02-19T02:27:36.940000
OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning
https://cdn-thumbnails.h…s/2502.11271.png
3
{ "_id": "60f5f68fa7fd83d025749234", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f5f68fa7fd83d025749234/gCeJAZfzaANAcEvI6v5-P.jpeg", "followerCount": 8, "fullname": "Pan Lu", "isHf": false, "isMod": false, "isPro": false, "name": "lupantech", "type": "user" }
true
null
2502.11271
[ { "_id": "67b4322c217ec18a40587bec", "hidden": false, "name": "Pan Lu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:43.677Z", "user": { "_id": "60f5f68fa7fd83d025749234", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f5f68fa7f...
2025-02-16T21:18:47
OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning
Solving complex reasoning tasks may involve visual understanding, domain knowledge retrieval, numerical calculation, and multi-step reasoning. Existing methods augment large language models (LLMs) with external tools but are restricted to specialized domains, limited tool types, or require additional training data. In ...
16
67b4322d217ec18a40587c27
null
null
2025-02-19T01:24:26.365000
Pre-training Auto-regressive Robotic Models with 4D Representations
https://cdn-thumbnails.h…s/2502.13142.png
2
{ "_id": "667c5764186b27ef806636d3", "avatarUrl": "/avatars/5c08f0109bc0e350624112c0aff544f6.svg", "followerCount": null, "fullname": "Roei Herzig", "isHf": false, "isMod": false, "isPro": false, "name": "roeiherz", "type": "user" }
true
null
2502.13142
[ { "_id": "67b5790132be608036ee94e5", "hidden": false, "name": "Dantong Niu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:12:28.457Z", "user": { "_id": "65c3fdf79d062be813813e45", "avatarUrl": "/avatars/52528a61abe5bbbef4a4a431944973cd.svg", "fullname":...
2025-02-18T18:59:01
Pre-training Auto-regressive Robotic Models with 4D Representations
Foundation models pre-trained on massive unlabeled datasets have revolutionized natural language and computer vision, exhibiting remarkable generalization capabilities, thus highlighting the importance of pre-training. Yet, efforts in robotics have struggled to achieve similar success, limited by either the need for co...
4
67b5790832be608036ee9638
null
null
2025-02-19T01:21:54.836000
PAFT: Prompt-Agnostic Fine-Tuning
https://cdn-thumbnails.h…s/2502.12859.png
8
{ "_id": "65ed3051492a7f35db21fea2", "avatarUrl": "/avatars/4fc0ccc21aa88e4e8ff74a6f850570b8.svg", "followerCount": null, "fullname": "Chenxing Wei", "isHf": false, "isMod": false, "isPro": false, "name": "kittttttt", "type": "user" }
true
null
2502.12859
[ { "_id": "67b576aa489d68b981e086ad", "hidden": false, "name": "Chenxing Wei", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:23:00.016Z", "user": { "_id": "65ed3051492a7f35db21fea2", "avatarUrl": "/avatars/4fc0ccc21aa88e4e8ff74a6f850570b8.svg", "fullname"...
2025-02-18T13:46:47
PAFT: Prompt-Agnostic Fine-Tuning
While Large Language Models (LLMs) adapt well to downstream tasks after fine-tuning, this adaptability often compromises prompt robustness, as even minor prompt variations can significantly degrade performance. To address this, we propose Prompt-Agnostic Fine-Tuning(PAFT), a simple yet effective approach that dynamical...
15
67b576aa489d68b981e08708
null
null
2025-02-19T00:22:36.628000
Soundwave: Less is More for Speech-Text Alignment in LLMs
https://cdn-thumbnails.h…s/2502.12900.png
2
{ "_id": "66975b9f8031bf92b428e138", "avatarUrl": "/avatars/3254281a7bac1c8ddde1d6bc7e518b2f.svg", "followerCount": null, "fullname": "Yuhao Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Yoohao", "type": "user" }
true
null
2502.12900
[ { "_id": "67b54851b986e35c41e063da", "hidden": false, "name": "Yuhao Zhang", "status": "extracted_pending", "statusLastChangedAt": "2025-02-19T02:56:18.848Z", "user": { "_id": "66975b9f8031bf92b428e138", "avatarUrl": "/avatars/3254281a7bac1c8ddde1d6bc7e518b2f.svg", "fullnam...
2025-02-18T14:36:39
Soundwave: Less is More for Speech-Text Alignment in LLMs
Existing end-to-end speech large language models (LLMs) usually rely on large-scale annotated data for training, while data-efficient training has not been discussed in depth. We focus on two fundamental problems between speech and text: the representation space gap and sequence length inconsistency. We propose Soundwa...
76
67b54852b986e35c41e06426
null
null
2025-02-18T23:51:36.910000
Magma: A Foundation Model for Multimodal AI Agents
https://cdn-thumbnails.h…s/2502.13130.png
6
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2502.13130
[ { "_id": "67b5625fb27eb6046b2ceec5", "hidden": false, "name": "Jianwei Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5625fb27eb6046b2ceec6", "hidden": false, "name": "Reuben Tan", "status": "admin_assigned", "statusLastChangedAt": "2...
2025-02-18T18:55:21
Magma: A Foundation Model for Multimodal AI Agents
We present Magma, a foundation model that serves multimodal AI agentic tasks in both the digital and physical worlds. Magma is a significant extension of vision-language (VL) models in that it not only retains the VL understanding ability (verbal intelligence) of the latter, but is also equipped with the ability to pla...
54
67b56265b27eb6046b2cf08f
null
null
2025-02-18T23:37:46.756000
Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities?
https://cdn-thumbnails.h…s/2502.12215.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.12215
[ { "_id": "67b56007fa141a55e51d9d78", "hidden": false, "name": "Zhiyuan Zeng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b56007fa141a55e51d9d79", "hidden": false, "name": "Qinyuan Cheng", "status": null, "statusLastChangedAt": null, "...
2025-02-17T07:21:11
Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities?
The advent of test-time scaling in large language models (LLMs), exemplified by OpenAI's o1 series, has advanced reasoning capabilities by scaling computational resource allocation during inference. While successors like QwQ, Deepseek-R1 (R1) and LIMO replicate these advancements, whether these models truly possess tes...
16
67b56007fa141a55e51d9da7
null
null
2025-02-18T23:23:34.214000
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models
https://cdn-thumbnails.h…s/2502.12464.png
2
{ "_id": "64ad5f59b7e4b2c1ce47eb43", "avatarUrl": "/avatars/1f13ebe21a90d8c99920aa2c8cd9ac45.svg", "followerCount": 4, "fullname": "Seanie Lee", "isHf": false, "isMod": false, "isPro": false, "name": "Seanie-lee", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/64ad5f59b7e4b2c1ce47eb43/ZEq_vSLjsXuPX3O-TWIpE.png" ]
2502.12464
[ { "_id": "67b55b2cc92c4aa82c13562d", "hidden": false, "name": "Seanie Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:53.341Z", "user": { "_id": "64ad5f59b7e4b2c1ce47eb43", "avatarUrl": "/avatars/1f13ebe21a90d8c99920aa2c8cd9ac45.svg", "fullname"...
2025-02-18T02:51:17
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models
Deploying large language models (LLMs) in real-world applications requires robust safety guard models to detect and block harmful user prompts. While large safety guard models achieve strong performance, their computational cost is substantial. To mitigate this, smaller distilled models are used, but they often underpe...
27
67b55b2dc92c4aa82c13568b
null
null
2025-02-18T22:59:16.530000
MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
https://cdn-thumbnails.h…s/2502.12170.png
2
{ "_id": "62d77440bad37ef354028365", "avatarUrl": "/avatars/df0dea879e06fa814867e9aad03d1e68.svg", "followerCount": null, "fullname": "Da Xiao", "isHf": false, "isMod": false, "isPro": false, "name": "xiaoda99", "type": "user" }
false
null
2502.12170
[ { "_id": "67b5434f2b2ec6908fffe75e", "hidden": false, "name": "Da Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5434f2b2ec6908fffe75f", "hidden": false, "name": "Qingye Meng", "status": "admin_assigned", "statusLastChangedAt": "2025-...
2025-02-13T10:26:27
MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective method to address the limitations of residual connections and enhance cross-layer information flow in Transformers. Unlike existing dense connection approaches with static and shared connection weights, MUDD generates connection weights dynami...
12
67b543502b2ec6908fffe788
null
null
2025-02-18T22:46:16.586000
Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages
https://cdn-thumbnails.h…s/2502.10852.png
2
{ "_id": "6430bdd8cd31d174a9f900fb", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Y9SPnRfpKSbYc7MhNdP-H.jpeg", "followerCount": 2, "fullname": "Ziyin Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Geralt-Targaryen", "type": "user" }
true
null
2502.10852
[ { "_id": "67b55321f703732d151de666", "hidden": false, "name": "Zeli Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55321f703732d151de667", "hidden": false, "name": "Ziyin Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-...
2025-02-15T16:53:10
Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages
While multilingual language models like XLM-R have advanced multilingualism in NLP, they still perform poorly in extremely low-resource languages. This situation is exacerbated by the fact that modern LLMs such as LLaMA and Qwen support far fewer languages than XLM-R, making text generation models non-existent for many...
2
67b55322f703732d151de69d
null
null
2025-02-18T22:43:02.567000
Continuous Diffusion Model for Language Modeling
https://cdn-thumbnails.h…s/2502.11564.png
4
{ "_id": "65e5bd4568234ef5d6decadc", "avatarUrl": "/avatars/c41095a946c0176b949c0b3566136c05.svg", "followerCount": 4, "fullname": "Jaehyeong Jo", "isHf": false, "isMod": false, "isPro": false, "name": "harryjo97", "type": "user" }
true
null
2502.11564
[ { "_id": "67b40f93aba9e111862052ab", "hidden": false, "name": "Jaehyeong Jo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:27.544Z", "user": { "_id": "65e5bd4568234ef5d6decadc", "avatarUrl": "/avatars/c41095a946c0176b949c0b3566136c05.svg", "fullnam...
2025-02-17T08:54:29
Continuous Diffusion Model for Language Modeling
Diffusion models have emerged as a promising alternative to autoregressive models in modeling discrete categorical data. Yet diffusion models that directly work on discrete data space do not fully exploit the power of iterative refinement, as the signals are lost during the transition between discrete states. Existing ...
50
67b40f94aba9e111862052d5
null
null
2025-02-18T22:35:23.066000
HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation
https://cdn-thumbnails.h…s/2502.09838.png
2
{ "_id": "65fc18edfb66882aba4d548e", "avatarUrl": "/avatars/f70d47fe4aba98b5a5cd64f7e002dfd2.svg", "followerCount": null, "fullname": "wenqiao", "isHf": false, "isMod": false, "isPro": false, "name": "wannature", "type": "user" }
true
null
2502.09838
[ { "_id": "67b55078a64445f58c771d84", "hidden": true, "name": "Tianwei Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d85", "hidden": false, "name": "Wenqiao Zhang", "status": "admin_assigned", "statusLastChangedAt": "...
2025-02-14T00:42:36
HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation
We present HealthGPT, a powerful Medical Large Vision-Language Model (Med-LVLM) that integrates medical visual comprehension and generation capabilities within a unified autoregressive paradigm. Our bootstrapping philosophy is to progressively adapt heterogeneous comprehension and generation knowledge to pre-trained la...
10
67b5507aa64445f58c771df9
null
null
2025-02-18T22:08:27.750000
Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation
https://cdn-thumbnails.h…s/2502.13145.png
2
{ "_id": "6577073fc2bf55b1f6bafb49", "avatarUrl": "/avatars/58803398b1a918b7570db17893e65122.svg", "followerCount": 4, "fullname": "liao", "isHf": false, "isMod": false, "isPro": false, "name": "LegendBC", "type": "user" }
true
null
2502.13145
[ { "_id": "67b54b04bd51b4e46e39d287", "hidden": false, "name": "Bencheng Liao", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:00.934Z", "user": { "_id": "6577073fc2bf55b1f6bafb49", "avatarUrl": "/avatars/58803398b1a918b7570db17893e65122.svg", "fullna...
2025-02-18T18:59:57
Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation
Recent Multimodal Large Language Models (MLLMs) have achieved remarkable performance but face deployment challenges due to their quadratic computational complexity, growing Key-Value cache requirements, and reliance on separate vision encoders. We propose mmMamba, a framework for developing linear-complexity native mul...
36
67b54b05bd51b4e46e39d2bb
null
null
2025-02-18T22:06:19.200000
FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading
https://cdn-thumbnails.h…s/2502.11433.png
2
{ "_id": "63b58ed5889aa6707f0bb0f4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg", "followerCount": 15, "fullname": "Jimin Huang", "isHf": false, "isMod": false, "isPro": true, "name": "jiminHuang", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63b58ed5889aa6707f0bb0f4/2C9mhT-1Qz14hik7sxjf2.png" ]
2502.11433
[ { "_id": "67b54a644508bd0617598c21", "hidden": false, "name": "Guojun Xiong", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:14:19.641Z", "user": { "_id": "67b54cbcd9f66be7f6f3f7de", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth...
2025-02-17T04:45:53
FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to i...
31
67b54a654508bd0617598c7e
null
null
2025-02-18T21:59:45.466000
Rethinking Diverse Human Preference Learning through Principal Component Analysis
https://cdn-thumbnails.h…s/2502.13131.png
3
{ "_id": "64d45451c34a346181b130dd", "avatarUrl": "/avatars/9bb8205b889337df5d321539c9b5d69d.svg", "followerCount": 6, "fullname": "Rui Yang", "isHf": false, "isMod": false, "isPro": false, "name": "Ray2333", "type": "user" }
true
null
2502.13131
[ { "_id": "67b5461d29cc269e5a4eb823", "hidden": false, "name": "Feng Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5461d29cc269e5a4eb824", "hidden": true, "name": "Rui Yang", "status": "claimed_verified", "statusLastChangedAt": "2025-0...
2025-02-18T18:55:26
Rethinking Diverse Human Preference Learning through Principal Component Analysis
Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensive...
35
67b5461f29cc269e5a4eb8bc
null
null
2025-02-18T21:57:00.289000
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
https://cdn-thumbnails.h…s/2502.12574.png
2
{ "_id": "64cb48f7667f4f808535107e", "avatarUrl": "/avatars/8f77f378ad665b246e1ea3aaba2153ae.svg", "followerCount": 1, "fullname": "chengluo", "isHf": false, "isMod": false, "isPro": false, "name": "wdlctc", "type": "user" }
true
null
2502.12574
[ { "_id": "67b547f555d0424a31b9c384", "hidden": false, "name": "Cheng Luo", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:40:25.130Z", "user": { "_id": "64cb48f7667f4f808535107e", "avatarUrl": "/avatars/8f77f378ad665b246e1ea3aaba2153ae.svg", "fullname": "...
2025-02-18T06:26:05
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to C...
11
67b547f755d0424a31b9c3e5
null
null
2025-02-18T21:56:39.407000
Phantom: Subject-consistent video generation via cross-modal alignment
https://cdn-thumbnails.h…s/2502.11079.png
2
{ "_id": "63a950ac3453852ef5394178", "avatarUrl": "/avatars/48a5e537b10e2247a17e63502e3201a6.svg", "followerCount": 1, "fullname": "Lijie Liu", "isHf": false, "isMod": false, "isPro": false, "name": "liulj13", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63a950ac3453852ef5394178/HuVZ5d9xTlI4R1onRv_F5.mp4" ]
2502.11079
[ { "_id": "67b40141ad717fe02e188c1a", "hidden": false, "name": "Lijie Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:42.570Z", "user": { "_id": "63a950ac3453852ef5394178", "avatarUrl": "/avatars/48a5e537b10e2247a17e63502e3201a6.svg", "fullname":...
2025-02-16T11:02:50
Phantom: Subject-consistent video generation via cross-modal alignment
The continuous development of foundational models for video generation is evolving into various applications, with subject-consistent video generation still in the exploratory stage. We refer to this as Subject-to-Video, which extracts subject elements from reference images and generates subject-consistent video throug...
52
67b40144ad717fe02e188cb2
null
null
2025-02-18T21:55:26.822000
Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge
https://cdn-thumbnails.h…s/2502.12501.png
2
{ "_id": "62a42f22c683d02f5b63320c", "avatarUrl": "/avatars/bc611abe9c4ef8d378123cb8ac9fdbf2.svg", "followerCount": null, "fullname": "Qiyuan Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "DonJoey", "type": "user" }
true
null
2502.12501
[ { "_id": "67b547ffc9071a3e97139532", "hidden": false, "name": "Qiyuan Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:10.215Z", "user": { "_id": "62a42f22c683d02f5b63320c", "avatarUrl": "/avatars/bc611abe9c4ef8d378123cb8ac9fdbf2.svg", "fullnam...
2025-02-18T03:31:06
Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge
LLM-as-a-Judge, which generates chain-of-thought (CoT) judgments, has become a widely adopted auto-evaluation method. However, its reliability is compromised by the CoT reasoning's inability to capture comprehensive and deeper details, often leading to incomplete outcomes. Existing methods mainly rely on majority votin...
6
67b54800c9071a3e9713956c
null
null
2025-02-18T21:52:22.326000
RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
https://cdn-thumbnails.h…s/2502.12513.png
2
{ "_id": "63e202f352b7578dba448ab5", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e202f352b7578dba448ab5/8itVBLcv14m7OVsoF8h1o.jpeg", "followerCount": 4, "fullname": "Yang", "isHf": false, "isMod": false, "isPro": false, "name": "Kaichengalex", "type": "user" }
true
null
2502.12513
[ { "_id": "67b545fd88527668fa8bcc14", "hidden": false, "name": "Tiancheng Gu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:19:15.243Z", "user": { "_id": "6508712e7ee07e274b0f4c94", "avatarUrl": "/avatars/23fe5593b0bce36c2167c3142e57e0e9.svg", "fullname"...
2025-02-18T03:58:38
RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
After pre-training on extensive image-text pairs, Contrastive Language-Image Pre-training (CLIP) demonstrates promising performance on a wide variety of benchmarks. However, a substantial volume of non-paired data, such as multimodal interleaved documents, remains underutilized for vision-language representation learni...
15
67b545fe88527668fa8bcc65
null
null
2025-02-18T21:51:33.957000
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
https://cdn-thumbnails.h…s/2502.13143.png
2
{ "_id": "63c3e8abc7d7f4c63a515a02", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c3e8abc7d7f4c63a515a02/npMHnVP2hHLbvoUGe7C4O.jpeg", "followerCount": 2, "fullname": "Zekun Qi", "isHf": false, "isMod": false, "isPro": false, "name": "qizekun", "type": "user" }
true
null
2502.13143
[ { "_id": "67b546c0d8a1eac02c605f6a", "hidden": false, "name": "Zekun Qi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:21.001Z", "user": { "_id": "63c3e8abc7d7f4c63a515a02", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c3e8abc...
2025-02-18T18:59:02
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
Spatial intelligence is a critical component of embodied AI, promoting robots to understand and interact with their environments. While recent advances have enhanced the ability of VLMs to perceive object locations and positional relationships, they still lack the capability to precisely understand object orientations-...
29
67b546c5d8a1eac02c606090
null
null
2025-02-18T21:18:22.741000
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
https://cdn-thumbnails.h…s/2502.12982.png
4
{ "_id": "6214e4ee1e35c843d42d1f88", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6214e4ee1e35c843d42d1f88/fj-9wuIdPhvogh3BrcXTB.jpeg", "followerCount": 15, "fullname": "Longxu Dou", "isHf": false, "isMod": false, "isPro": true, "name": "dreamerdeo", "type": "user" }
true
null
2502.12982
[ { "_id": "67b53f572b2ec6908ffef365", "hidden": false, "name": "Longxu Dou", "status": "extracted_pending", "statusLastChangedAt": "2025-02-19T02:17:59.980Z", "user": { "_id": "6214e4ee1e35c843d42d1f88", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6214e4...
2025-02-18T16:04:57
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages whi...
14
67b53f572b2ec6908ffef3c9
null
null
2025-02-18T20:05:09.186000
ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability
https://cdn-thumbnails.h…s/2502.11336.png
2
{ "_id": "6538e649f940c8a0358aa8b8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6538e649f940c8a0358aa8b8/veNw6QJuZu8anWCXtOXxu.jpeg", "followerCount": null, "fullname": "Ryuto Koike", "isHf": false, "isMod": false, "isPro": false, "name": "ryuryukke", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/6538e649f940c8a0358aa8b8/LTS6uI3uy5AxEeoD9-oMX.png" ]
2502.11336
[ { "_id": "67b52de36007d463b988b202", "hidden": false, "name": "Ryuto Koike", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:03:41.013Z", "user": { "_id": "6538e649f940c8a0358aa8b8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6538e6...
2025-02-17T01:15:07
ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability
Detecting texts generated by Large Language Models (LLMs) could cause grave mistakes due to incorrect decisions, such as undermining student's academic dignity. LLM text detection thus needs to ensure the interpretability of the decision, which can help users judge how reliably correct its prediction is. When humans ve...
0
67b52de46007d463b988b279
null
null
2025-02-18T18:58:34.838000
Diffusion Models without Classifier-free Guidance
https://cdn-thumbnails.h…s/2502.12154.png
2
{ "_id": "6372f265112fb535baf254c4", "avatarUrl": "/avatars/9b821bc533175c7dded48cdb3a3e1a12.svg", "followerCount": 2, "fullname": "tzco", "isHf": false, "isMod": false, "isPro": false, "name": "tzco", "type": "user" }
true
null
2502.12154
[ { "_id": "67b400719ff3ff79dae14701", "hidden": false, "name": "Zhicong Tang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:45.361Z", "user": { "_id": "6372f265112fb535baf254c4", "avatarUrl": "/avatars/9b821bc533175c7dded48cdb3a3e1a12.svg", "fullnam...
2025-02-17T18:59:50
Diffusion Models without Classifier-free Guidance
This paper presents Model-guidance (MG), a novel objective for training diffusion model that addresses and removes of the commonly used Classifier-free guidance (CFG). Our innovative approach transcends the standard modeling of solely data distribution to incorporating the posterior probability of conditions. The propo...
4
67b400789ff3ff79dae147ee
null
null
2025-02-18T14:56:45.613000
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
https://cdn-thumbnails.h…s/2502.09509.png
2
{ "_id": "661ba524bd9243bf7e598355", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/661ba524bd9243bf7e598355/i77yD4XgJn2vUbn_mIsT8.jpeg", "followerCount": 2, "fullname": "Ioannis Kakogeorgiou", "isHf": false, "isMod": false, "isPro": false, "name": "gkakogeorgiou", "type": "use...
true
[ "https://cdn-uploads.huggingface.co/production/uploads/661ba524bd9243bf7e598355/9XkVow22TY84dDgXm-Duc.gif" ]
2502.09509
[ { "_id": "67b4e4259beded220ad14729", "hidden": false, "name": "Theodoros Kouzelis", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T16:15:22.578Z", "user": { "_id": "6383aa17834d3558a3955186", "avatarUrl": "/avatars/1f6aed0a762379df334bc6a734d42f86.svg", "f...
2025-02-13T17:21:51
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
Latent generative models have emerged as a leading approach for high-quality image synthesis. These models rely on an autoencoder to compress images into a latent space, followed by a generative model to learn the latent distribution. We identify that existing autoencoders lack equivariance to semantic-preserving trans...
7
67b4e4289beded220ad147c7
null
null
2025-02-18T13:59:31.380000
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation
https://cdn-thumbnails.h…s/2502.08826.png
2
{ "_id": "64ba58d377dd483716aba098", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ba58d377dd483716aba098/6VASAUkFpDC-PR01yUJWj.png", "followerCount": 3, "fullname": "Mahdi Abootorabi", "isHf": false, "isMod": false, "isPro": false, "name": "aboots", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/N0fZ0I60EfZjITEnf6gPc.png", "https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/CtLxMqUEhWr6d9ztU1YZq.jpeg", "https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/HczPPOjzArOwgdw...
2502.08826
[ { "_id": "67b303f18bd6e9a5cad8bc4d", "hidden": false, "name": "Mohammad Mahdi Abootorabi", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-17T09:40:27.588Z", "user": { "_id": "64ba58d377dd483716aba098", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/producti...
2025-02-12T22:33:41
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation
Large Language Models (LLMs) struggle with hallucinations and outdated knowledge due to their reliance on static training data. Retrieval-Augmented Generation (RAG) mitigates these issues by integrating external dynamic information enhancing factual and updated grounding. Recent advances in multimodal learning have led...
17
67b303f28bd6e9a5cad8bc85
null
null
2025-02-18T13:21:05.722000
IHEval: Evaluating Language Models on Following the Instruction Hierarchy
https://cdn-thumbnails.h…s/2502.08745.png
2
{ "_id": "63bf9695da08ed054400205e", "avatarUrl": "/avatars/b6fca49559a61cf66628088c60d26c10.svg", "followerCount": 1, "fullname": "Zhihan Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "zhihz0535", "type": "user" }
true
null
2502.08745
[ { "_id": "67b4cf1994ec5e365fb7995d", "hidden": false, "name": "Zhihan Zhang", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-18T18:19:31.455Z", "user": { "_id": "63bf9695da08ed054400205e", "avatarUrl": "/avatars/b6fca49559a61cf66628088c60d26c10.svg", "full...
2025-02-12T19:35:28
IHEval: Evaluating Language Models on Following the Instruction Hierarchy
The instruction hierarchy, which establishes a priority order from system messages to user messages, conversation history, and tool outputs, is essential for ensuring consistent and safe behavior in language models (LMs). Despite its importance, this topic receives limited attention, and there is a lack of comprehensiv...
18
67b4cf1a94ec5e365fb799c1
null
null
2025-02-18T13:04:04.423000
Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning
https://cdn-thumbnails.h…s/2502.09969.png
2
{ "_id": "6391e4e984afa726d66180b9", "avatarUrl": "/avatars/e437e2820745b522a868b8da27d9a11f.svg", "followerCount": 0, "fullname": "Ishika Agarwal", "isHf": false, "isMod": false, "isPro": false, "name": "ishikaa", "type": "user" }
true
null
2502.09969
[ { "_id": "67b4cb6c777b7676c8b3c43d", "hidden": false, "name": "Ishika Agarwal", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-18T18:06:42.786Z", "user": { "_id": "6391e4e984afa726d66180b9", "avatarUrl": "/avatars/e437e2820745b522a868b8da27d9a11f.svg", "fu...
2025-02-14T07:55:47
Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning
Influence functions provide crucial insights into model training, but existing methods suffer from large computational costs and limited generalization. Particularly, recent works have proposed various metrics and algorithms to calculate the influence of data using language models, which do not scale well with large mo...
1
67b4cb6d777b7676c8b3c45c
null
null
2025-02-18T11:57:43.538000
Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents
https://cdn-thumbnails.h…s/2502.11357.png
2
{ "_id": "6556717676fe5cfa6a115405", "avatarUrl": "/avatars/570dd8f4eb6baaff12d7ebe11dde6348.svg", "followerCount": 1, "fullname": "Vardaan Pahuja", "isHf": false, "isMod": false, "isPro": false, "name": "vardaan123", "type": "user" }
true
null
2502.11357
[ { "_id": "67b3f1f1f5bd60d66133e1f3", "hidden": false, "name": "Vardaan Pahuja", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:47.969Z", "user": { "_id": "6556717676fe5cfa6a115405", "avatarUrl": "/avatars/570dd8f4eb6baaff12d7ebe11dde6348.svg", "fulln...
2025-02-17T02:13:48
Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents
Recent success in large multimodal models (LMMs) has sparked promising applications of agents capable of autonomously completing complex web tasks. While open-source LMM agents have made significant advances in offline evaluation benchmarks, their performance still falls substantially short of human-level capabilities ...
9
67b3f1f1f5bd60d66133e24b
null
null
2025-02-18T11:42:58.976000
ILIAS: Instance-Level Image retrieval At Scale
https://cdn-thumbnails.h…s/2502.11748.png
2
{ "_id": "66a3ae59f33ff23e1c027ccd", "avatarUrl": "/avatars/216717d547bf785a2b1696171e5f4b11.svg", "followerCount": 1, "fullname": "Vladan Stojnic", "isHf": false, "isMod": false, "isPro": false, "name": "stojnvla", "type": "user" }
true
null
2502.11748
[ { "_id": "67b465600e5142133055d7c1", "hidden": false, "name": "Giorgos Kordopatis-Zilos", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:38.791Z", "user": { "_id": "673c934bdf13003bd11746fd", "avatarUrl": "/avatars/1aec1157549be85963b39eb54845b695.svg", ...
2025-02-17T12:42:38
ILIAS: Instance-Level Image retrieval At Scale
This work introduces ILIAS, a new test dataset for Instance-Level Image retrieval At Scale. It is designed to evaluate the ability of current and future foundation models and retrieval techniques to recognize particular objects. The key benefits over existing datasets include large scale, domain diversity, accurate gro...
4
67b465680e5142133055d97d
null
null
2025-02-18T08:59:34.204000
Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model
https://cdn-thumbnails.h…s/2502.08820.png
2
{ "_id": "63888d3fd68e37abd599f428", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63888d3fd68e37abd599f428/YaNyxG_oM6IgrHTkFZ6Eq.jpeg", "followerCount": 12, "fullname": "emre can", "isHf": false, "isMod": false, "isPro": true, "name": "emrecanacikgoz", "type": "user" }
true
null
2502.08820
[ { "_id": "67aece59f2e8a2ee35b5affd", "hidden": false, "name": "Emre Can Acikgoz", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:34:01.421Z", "user": { "_id": "63888d3fd68e37abd599f428", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6...
2025-02-12T22:18:34
Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model
Large Language Models (LLMs) with API-calling capabilities enabled building effective Language Agents (LA), while also revolutionizing the conventional task-oriented dialogue (TOD) paradigm. However, current approaches face a critical dilemma: TOD systems are often trained on a limited set of target APIs, requiring new...
4
67aece5af2e8a2ee35b5b03e
null
null
2025-02-18T07:33:17.294000
The Mirage of Model Editing: Revisiting Evaluation in the Wild
https://cdn-thumbnails.h…s/2502.11177.png
2
{ "_id": "64e4090f222b232f03fe5f63", "avatarUrl": "/avatars/1e97328de374d726f64bf16528d36ca4.svg", "followerCount": null, "fullname": "Wanli Yang", "isHf": false, "isMod": false, "isPro": false, "name": "WenDingY", "type": "user" }
false
null
2502.11177
[ { "_id": "67b47dd2e638b35196b8e014", "hidden": false, "name": "Wanli Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e015", "hidden": false, "name": "Fei Sun", "status": null, "statusLastChangedAt": null, "user": n...
2025-02-16T15:57:55
The Mirage of Model Editing: Revisiting Evaluation in the Wild
Despite near-perfect results in artificial evaluations, the effectiveness of model editing in real-world applications remains unexplored. To bridge this gap, we propose to study model editing in question answering (QA) by establishing a rigorous evaluation practice to assess the effectiveness of editing methods in corr...
10
67b47dd2e638b35196b8e03a
null
null
2025-02-18T07:16:07.632000
Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
https://cdn-thumbnails.h…s/2502.10550.png
2
{ "_id": "6668687caee0993c95b0eb81", "avatarUrl": "/avatars/301fe1f395e0a129b1c9785868fa9858.svg", "followerCount": 2, "fullname": "Egor Cherepanov", "isHf": false, "isMod": false, "isPro": false, "name": "avanturist", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6668687caee0993c95b0eb81/zl6FgeOWq-7PC7PRLEyzW.qt" ]
2502.10550
[ { "_id": "67b478517fa6ecaa21d1498d", "hidden": false, "name": "Egor Cherepanov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T16:39:34.993Z", "user": { "_id": "6668687caee0993c95b0eb81", "avatarUrl": "/avatars/301fe1f395e0a129b1c9785868fa9858.svg", "full...
2025-02-14T20:46:19
Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
Memory is crucial for enabling agents to tackle complex tasks with temporal and spatial dependencies. While many reinforcement learning (RL) algorithms incorporate memory, the field lacks a universal benchmark to assess an agent's memory capabilities across diverse scenarios. This gap is particularly evident in tableto...
5
67b478557fa6ecaa21d14a24
null
null
2025-02-18T06:33:31.888000
Dyve: Thinking Fast and Slow for Dynamic Process Verification
https://cdn-thumbnails.h…s/2502.11157.png
2
{ "_id": "6608fa4f5baec84322ec85ea", "avatarUrl": "/avatars/13bdaff931676b065fa1efef06fef922.svg", "followerCount": 1, "fullname": "Zhong", "isHf": false, "isMod": false, "isPro": false, "name": "Jianyuan1", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6608fa4f5baec84322ec85ea/iiYwe_FlXRwT1RjPvzF-b.png" ]
2502.11157
[ { "_id": "67b44baa5fd91177ed7760a2", "hidden": false, "name": "Jianyuan Zhong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:45.385Z", "user": { "_id": "6608fa4f5baec84322ec85ea", "avatarUrl": "/avatars/13bdaff931676b065fa1efef06fef922.svg", "fulln...
2025-02-16T15:11:19
Dyve: Thinking Fast and Slow for Dynamic Process Verification
We present Dyve, a dynamic process verifier that enhances reasoning error detection in large language models by integrating fast and slow thinking, inspired by Kahneman's Systems Theory. Dyve adaptively applies immediate token-level confirmation System 1 for straightforward steps and comprehensive analysis System 2 for...
6
67b44bab5fd91177ed7760ca
null
null
2025-02-18T06:07:36.212000
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
https://cdn-thumbnails.h…s/2502.11089.png
9
{ "_id": "645e054ff7a55f0d780a8ff7", "avatarUrl": "/avatars/9614510443bee3bd5d6266efd1c39fc1.svg", "followerCount": 5, "fullname": "Chunjiang Ge", "isHf": false, "isMod": false, "isPro": false, "name": "HelloJiang", "type": "user" }
true
null
2502.11089
[ { "_id": "67b43211d3c5f50aa9c03a2d", "hidden": false, "name": "Jingyang Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a2e", "hidden": false, "name": "Huazuo Gao", "status": "admin_assigned", "statusLastChangedAt": "...
2025-02-16T11:53:44
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant computational challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively train...
139
67b43212d3c5f50aa9c03a5c
null
null
2025-02-18T05:28:54.029000
Better Embeddings with Coupled Adam
https://cdn-thumbnails.h…s/2502.08441.png
3
{ "_id": "66867e1675f10ce7ef96180e", "avatarUrl": "/avatars/ac85c00ba9d4dc48887b8864a0626743.svg", "followerCount": null, "fullname": "Felix Stollenwerk", "isHf": false, "isMod": false, "isPro": false, "name": "flxst", "type": "user" }
true
null
2502.08441
[ { "_id": "67b30311a2b3622dd42a51ff", "hidden": false, "name": "Felix Stollenwerk", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:32:36.770Z", "user": { "_id": "66867e1675f10ce7ef96180e", "avatarUrl": "/avatars/ac85c00ba9d4dc48887b8864a0626743.svg", "fu...
2025-02-12T14:32:17
Better Embeddings with Coupled Adam
Despite their remarkable capabilities, LLMs learn word representations that exhibit the undesirable yet poorly understood feature of anisotropy. In this paper, we argue that the second moment in Adam is a cause of anisotropic embeddings, and suggest a modified optimizer called Coupled Adam to mitigate the problem. Our ...
1
67b30312a2b3622dd42a522d
null
null
2025-02-18T04:37:21.573000
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
https://cdn-thumbnails.h…s/2502.09083.png
2
{ "_id": "6698cffdb2ebada9f4a7e7d7", "avatarUrl": "/avatars/e66d946c14595d3b008185f2be8d2f57.svg", "followerCount": 2, "fullname": "Greta Warren", "isHf": false, "isMod": false, "isPro": false, "name": "gretawarren", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6698cffdb2ebada9f4a7e7d7/55xAEeg9Xsk87DXHTH9gM.png" ]
2502.09083
[ { "_id": "67b30726d4665a0448e6436d", "hidden": false, "name": "Greta Warren", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:32:34.585Z", "user": { "_id": "6698cffdb2ebada9f4a7e7d7", "avatarUrl": "/avatars/e66d946c14595d3b008185f2be8d2f57.svg", "fullnam...
2025-02-13T08:56:25
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provi...
4
67b30727d4665a0448e6438d
null
null
2025-02-18T04:34:15.786000
MagicArticulate: Make Your 3D Models Articulation-Ready
https://cdn-thumbnails.h…s/2502.12135.png
2
{ "_id": "64fb31a34c8924c4fe7498bc", "avatarUrl": "/avatars/6c8e4a66e1b8b3c786a4000210089392.svg", "followerCount": 4, "fullname": "Chaoyue Song", "isHf": false, "isMod": false, "isPro": false, "name": "chaoyue7", "type": "user" }
true
null
2502.12135
[ { "_id": "67b4028237db78705fb256e1", "hidden": false, "name": "Chaoyue Song", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:40.771Z", "user": { "_id": "64fb31a34c8924c4fe7498bc", "avatarUrl": "/avatars/6c8e4a66e1b8b3c786a4000210089392.svg", "fullnam...
2025-02-17T18:53:27
MagicArticulate: Make Your 3D Models Articulation-Ready
With the explosive growth of 3D content creation, there is an increasing demand for automatically converting static 3D models into articulation-ready versions that support realistic animation. Traditional approaches rely heavily on manual annotation, which is both time-consuming and labor-intensive. Moreover, the lack ...
8
67b4028437db78705fb25726
null
null
2025-02-18T04:33:41.120000
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models
https://cdn-thumbnails.h…s/2502.10458.png
3
{ "_id": "6354bda206d707b33249c4c2", "avatarUrl": "/avatars/bbd9f76274ac52214df92084d50bc7b5.svg", "followerCount": 1, "fullname": "Zhenxing Mi", "isHf": false, "isMod": false, "isPro": false, "name": "Mifucius", "type": "user" }
true
null
2502.10458
[ { "_id": "67b3ea0f4dd7ea0538ce589d", "hidden": false, "name": "Zhenxing Mi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:52.837Z", "user": { "_id": "6354bda206d707b33249c4c2", "avatarUrl": "/avatars/bbd9f76274ac52214df92084d50bc7b5.svg", "fullname...
2025-02-12T05:30:08
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models
This paper presents ThinkDiff, a novel alignment paradigm that empowers text-to-image diffusion models with multimodal in-context understanding and reasoning capabilities by integrating the strengths of vision-language models (VLMs). Existing multimodal diffusion finetuning methods largely focus on pixel-level reconstr...
30
67b3ea124dd7ea0538ce592d
https://mizhenxing.github.io/ThinkDiff
https://github.com/MiZhenxing/ThinkDiff
2025-02-18T04:20:25.916000
Intuitive physics understanding emerges from self-supervised pretraining on natural videos
https://cdn-thumbnails.h…s/2502.11831.png
2
{ "_id": "5f1158120c833276f61f1a84", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg", "followerCount": 777, "fullname": "Niels Rogge", "isHf": true, "isMod": false, "isPro": false, "name": "nielsr", "type": "user" }
false
null
2502.11831
[ { "_id": "67b450cf315f7b69956df3d6", "hidden": false, "name": "Quentin Garrido", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:28:09.217Z", "user": { "_id": "63049022412a1b9d381b9dcb", "avatarUrl": "/avatars/7382c0a0e3f5609b754ec09a309d33f6.svg", "fullna...
2025-02-17T14:27:14
Intuitive physics understanding emerges from self-supervised pretraining on natural videos
We investigate the emergence of intuitive physics understanding in general-purpose deep neural network models trained to predict masked regions in natural videos. Leveraging the violation-of-expectation framework, we find that video prediction models trained to predict outcomes in a learned representation space demonst...
18
67b450d0315f7b69956df3f9
null
https://github.com/facebookresearch/jepa-intuitive-physics
2025-02-18T04:16:28.219000
Towards Data-Efficient Pretraining for Atomic Property Prediction
https://cdn-thumbnails.h…s/2502.11085.png
3
{ "_id": "642b51385bf2355d02a23d15", "avatarUrl": "/avatars/87985347643b2647555f2453fa4d94fb.svg", "followerCount": 4, "fullname": "Hasan Abed Al Kader Hammoud", "isHf": false, "isMod": false, "isPro": true, "name": "hammh0a", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/642b51385bf2355d02a23d15/bLvTbh56AkUmcmRst8mT3.png" ]
2502.11085
[ { "_id": "67b44f44620ae0bad17d6699", "hidden": false, "name": "Yasir Ghunaim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44f44620ae0bad17d669a", "hidden": false, "name": "Hasan Abed Al Kader Hammoud", "status": "claimed_verified", "stat...
2025-02-16T11:46:23
Towards Data-Efficient Pretraining for Atomic Property Prediction
This paper challenges the recent paradigm in atomic property prediction that links progress to growing dataset sizes and computational resources. We show that pretraining on a carefully selected, task-relevant dataset can match or even surpass large-scale pretraining, while using as little as 1/24th of the computationa...
3
67b44f45620ae0bad17d66b0
null
null
2025-02-18T03:53:47.570000
PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
https://cdn-thumbnails.h…s/2502.12054.png
2
{ "_id": "6602548a68d519ed324b47c5", "avatarUrl": "/avatars/5ab411f87440cc2a98c7a1c6a3ed5548.svg", "followerCount": 4, "fullname": "ChengyouJia", "isHf": false, "isMod": false, "isPro": false, "name": "ChengyouJia", "type": "user" }
true
null
2502.12054
[ { "_id": "67b44a6888813676da9f8239", "hidden": false, "name": "Xinyu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f823a", "hidden": false, "name": "Yuxuan Dong", "status": null, "statusLastChangedAt": null, "use...
2025-02-17T17:24:14
PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
Large language models demonstrate remarkable capabilities across various domains, especially mathematics and logic reasoning. However, current evaluations overlook physics-based reasoning - a complex task requiring physics theorems and constraints. We present PhysReason, a 1,200-problem benchmark comprising knowledge-b...
5
67b44a6988813676da9f82d0
null
null
2025-02-18T02:26:18.856000
Large Language Models and Mathematical Reasoning Failures
https://cdn-thumbnails.h…s/2502.11574.png
3
{ "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1614079701740-6033e34a9aa44495c80dd043.jpeg", "followerCount": 39, "fullname": "Birger Moell", "isHf": false, "isMod": false, "isPro": false, "name": "birgermoell", "type": "user" }
true
null
2502.11574
[ { "_id": "67b435c29e5685b308a8edac", "hidden": false, "name": "Johan Boye", "status": "extracted_pending", "statusLastChangedAt": "2025-02-18T07:24:50.956Z", "user": { "_id": "65bcbc01d6d0ffbceb8b2e6e", "avatarUrl": "/avatars/73edb2d6b7b11208439ac88b365079e8.svg", "fullname...
2025-02-17T09:07:32
Large Language Models and Mathematical Reasoning Failures
This paper investigates the mathematical reasoning capabilities of large language models (LLMs) using 50 newly constructed high-school-level word problems. Unlike prior studies that focus solely on answer correctness, we rigorously analyze both final answers and solution steps to identify reasoning failures. Evaluating...
3
67b435c29e5685b308a8edf1
null
null
2025-02-18T02:23:29.869000
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance
https://cdn-thumbnails.h…s/2502.11578.png
2
{ "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1614079701740-6033e34a9aa44495c80dd043.jpeg", "followerCount": 39, "fullname": "Birger Moell", "isHf": false, "isMod": false, "isPro": false, "name": "birgermoell", "type": "user" }
true
null
2502.11578
[ { "_id": "67b435475bff5f34c1ebee1b", "hidden": false, "name": "Birger Moell", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:52.639Z", "user": { "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/16140...
2025-02-17T09:09:58
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance
Large Language Models (LLMs) have made significant strides in natural language generation but often face challenges in tasks requiring precise calculations and structural analysis. This paper investigates the performance of state-of-the-art LLMs on language complexity measurement tasks, through the computation of the L...
0
67b435485bff5f34c1ebee52
null
null
2025-02-18T01:45:36.359000
System Message Generation for User Preferences using Open-Source Models
https://cdn-thumbnails.h…s/2502.11330.png
2
{ "_id": "64587be872b60ae7a3817858", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64587be872b60ae7a3817858/BbdOOxOCEzWTvEpkWp8MM.png", "followerCount": 3, "fullname": "Minbyul Jeong", "isHf": false, "isMod": false, "isPro": false, "name": "Minbyul", "type": "user" }
true
null
2502.11330
[ { "_id": "67b42c5632929e97a92dee90", "hidden": false, "name": "Minbyul Jeong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:45.723Z", "user": { "_id": "64587be872b60ae7a3817858", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6458...
2025-02-17T01:05:31
System Message Generation for User Preferences using Open-Source Models
System messages play a crucial role in interactions with large language models (LLMs), often serving as prompts to initiate conversations. Through system messages, users can assign specific roles, perform intended tasks, incorporate background information, specify various output formats and communication styles. Despit...
15
67b42c5732929e97a92deed7
null
null
2025-02-18T01:02:25.236000
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
https://cdn-thumbnails.h…s/2502.11196.png
6
{ "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "followerCount": 19, "fullname": "Ningyu Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Ningyu", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/_LGnwvwslWc3YDIirfOKS.png" ]
2502.11196
[ { "_id": "67b42223c2fe54b8d43efed6", "hidden": false, "name": "Yixin Ou", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:22:40.840Z", "user": { "_id": "6241749cf80bd930bd99f3dd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/16692102433...
2025-02-16T16:55:43
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
Despite exceptional capabilities in knowledge-intensive tasks, Large Language Models (LLMs) face a critical gap in understanding how they internalize new knowledge, particularly how to structurally embed acquired knowledge in their neural computations. We address this issue through the lens of knowledge circuit evoluti...
22
67b42225c2fe54b8d43eff9b
null
null
2025-02-18T01:01:24.331000
SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
https://cdn-thumbnails.h…s/2502.11167.png
2
{ "_id": "650267e7e751d03da933a24a", "avatarUrl": "/avatars/f047a047d1de304cd97027463541bdf3.svg", "followerCount": 1, "fullname": "Bohan22", "isHf": false, "isMod": false, "isPro": false, "name": "Bohan22", "type": "user" }
true
null
2502.11167
[ { "_id": "67b4221bbc387d2eda6f8637", "hidden": false, "name": "Bohan Lyu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:06.388Z", "user": { "_id": "650267e7e751d03da933a24a", "avatarUrl": "/avatars/f047a047d1de304cd97027463541bdf3.svg", "fullname":...
2025-02-16T15:38:19
SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
Large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks, such as code understanding and code generation. However, an equally important yet underexplored question is whether LLMs can serve as general-purpose surrogate code executors, to predict the output and behavior of a program wi...
10
67b4221ebc387d2eda6f8717
null
null
2025-02-18T00:58:24.094000
ReLearn: Unlearning via Learning for Large Language Models
https://cdn-thumbnails.h…s/2502.11190.png
2
{ "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "followerCount": 19, "fullname": "Ningyu Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Ningyu", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/A4YB7t6hDVty6QrvLN0a7.png" ]
2502.11190
[ { "_id": "67b420dfb2528c023491f455", "hidden": false, "name": "Haoming Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b420dfb2528c023491f456", "hidden": true, "name": "Ningyuan Zhao", "status": "admin_assigned", "statusLastChangedAt": "2...
2025-02-16T16:31:00
ReLearn: Unlearning via Learning for Large Language Models
Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities. However, this paradigm disrupts the subsequent tokens prediction, degrading model performance and linguistic coherence. Moreover, existing evaluation metrics overemphasize contextual forgettin...
29
67b420e2b2528c023491f506
null
null
2025-02-18T00:49:53.124000
Learning Getting-Up Policies for Real-World Humanoid Robots
https://cdn-thumbnails.h…s/2502.12152.png
3
{ "_id": "6201fc5d91d53938a6432fbf", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6201fc5d91d53938a6432fbf/VLs8ZYaZrop4KBpZn53fH.jpeg", "followerCount": 3, "fullname": "Runpei Dong", "isHf": false, "isMod": false, "isPro": false, "name": "RunpeiDong", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6201fc5d91d53938a6432fbf/x35BuXOhc6ubukxLfiVzt.mp4" ]
2502.12152
[ { "_id": "67b41ed52867282b4eb37ce4", "hidden": false, "name": "Xialin He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b41ed52867282b4eb37ce5", "hidden": false, "name": "Runpei Dong", "status": "claimed_verified", "statusLastChangedAt": "2...
2025-02-17T18:59:06
Learning Getting-Up Policies for Real-World Humanoid Robots
Automatic fall recovery is a crucial prerequisite before humanoid robots can be reliably deployed. Hand-designing controllers for getting up is difficult because of the varied configurations a humanoid can end up in after a fall and the challenging terrains humanoid robots are expected to operate on. This paper develop...
36
67b41edb2867282b4eb37ddf
null
null
2025-02-18T00:28:31.293000
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
https://cdn-thumbnails.h…s/2502.12115.png
5
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.12115
[ { "_id": "67b41a72a38d04cc6148d80e", "hidden": false, "name": "Samuel Miserendino", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b41a72a38d04cc6148d80f", "hidden": false, "name": "Michele Wang", "status": null, "statusLastChangedAt": null, ...
2025-02-17T18:41:16
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at \$1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks--ranging from \$50 bug fixes to \$32,000 feature implementations--and managerial tasks, where models choose b...
42
67b41a74a38d04cc6148d84b
null
null