Column schema (name: dtype, observed min to max):

publishedAt: timestamp[ns] (date), 2023-02-13 12:55:54 to 2026-03-31 12:24:03
title: string, lengths 6 to 206
thumbnail: string, lengths 77 to 77
numComments: int64, 0 to 143
submittedBy: dict
isAuthorParticipating: bool, 2 classes
mediaUrls: list, lengths 0 to 15
paper_id: string, lengths 10 to 10
paper_authors: list, lengths 1 to 3.3k
paper_publishedAt: timestamp[ns] (date), 2023-02-13 17:55:54 to 2026-03-31 16:24:03
paper_title: string, lengths 6 to 206
paper_summary: string, lengths 165 to 1.92k
paper_upvotes: int64, 0 to 673
paper_discussionId: string, lengths 24 to 24
paper_projectPage: string, lengths 15 to 247
paper_githubRepo: string, lengths 25 to 132
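The rows below follow this schema in field order. As a minimal sketch of how one such row could be loaded into a typed record (the class name `PaperRow` and the `parse_row` helper are hypothetical, and the long values are abridged exactly as they appear truncated in the dump):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PaperRow:
    # Field names and types mirror the column schema above.
    publishedAt: datetime
    title: str
    thumbnail: str
    numComments: int
    submittedBy: dict
    isAuthorParticipating: bool
    mediaUrls: Optional[list]
    paper_id: str
    paper_authors: list
    paper_publishedAt: datetime
    paper_title: str
    paper_summary: str
    paper_upvotes: int
    paper_discussionId: str
    paper_projectPage: Optional[str]
    paper_githubRepo: Optional[str]

def parse_row(raw: dict) -> PaperRow:
    """Coerce the two timestamp columns; leave other fields as-is."""
    raw = dict(raw)  # avoid mutating the caller's dict
    for key in ("publishedAt", "paper_publishedAt"):
        raw[key] = datetime.fromisoformat(raw[key])
    return PaperRow(**raw)

# First row of the dump (nested and long values abridged, as in the dump):
row = parse_row({
    "publishedAt": "2025-03-04T12:05:25.041000",
    "title": "Efficient Test-Time Scaling via Self-Calibration",
    "thumbnail": "https://cdn-thumbnails.h…s/2503.00031.png",
    "numComments": 1,
    "submittedBy": {"name": "ChengsongHuang", "fullname": "Chengsong Huang"},
    "isAuthorParticipating": True,
    "mediaUrls": None,
    "paper_id": "2503.00031",
    "paper_authors": [{"name": "Chengsong Huang"}],
    "paper_publishedAt": "2025-02-25T00:21:14",
    "paper_title": "Efficient Test-Time Scaling via Self-Calibration",
    "paper_summary": "Increasing test-time computation is a straightforward approach...",
    "paper_upvotes": 8,
    "paper_discussionId": "67c732c34aaf26f75cea0df7",
    "paper_projectPage": None,
    "paper_githubRepo": None,
})
print(row.paper_id, row.paper_upvotes)  # → 2503.00031 8
```

`datetime.fromisoformat` handles both timestamp shapes seen in the rows (with and without fractional seconds), so no extra format string is needed.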
publishedAt: 2025-03-04T12:05:25.041000
title: Efficient Test-Time Scaling via Self-Calibration
thumbnail: https://cdn-thumbnails.h…s/2503.00031.png
numComments: 1
submittedBy: { "_id": "62ea79dd01ed9b0e8f61ccd3", "avatarUrl": "/avatars/70af83e0e267be39fcd5f23b85e2dafa.svg", "followerCount": 2, "fullname": "Chengsong Huang", "isHf": false, "isMod": false, "isPro": false, "name": "ChengsongHuang", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.00031
paper_authors: [ { "_id": "67c732c14aaf26f75cea0d82", "hidden": false, "name": "Chengsong Huang", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T21:15:36.013Z", "user": { "_id": "62ea79dd01ed9b0e8f61ccd3", "avatarUrl": "/avatars/70af83e0e267be39fcd5f23b85e2dafa.svg", "full...
paper_publishedAt: 2025-02-25T00:21:14
paper_title: Efficient Test-Time Scaling via Self-Calibration
paper_summary: Increasing test-time computation is a straightforward approach to enhancing the quality of responses in Large Language Models (LLMs). While Best-of-N sampling and Self-Consistency with majority voting are simple and effective, they require a fixed number of sampling responses for each query, regardless of its complexit...
paper_upvotes: 8
paper_discussionId: 67c732c34aaf26f75cea0df7
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T10:47:26.717000
title: Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis
thumbnail: https://cdn-thumbnails.h…s/2502.20383.png
numComments: 1
submittedBy: { "_id": "63e0b1925ba41def87930c47", "avatarUrl": "/avatars/4d55fdbe979ddf72a21430d66518d24f.svg", "followerCount": 1, "fullname": "Jeffrey Yang Fan Chiang", "isHf": false, "isMod": false, "isPro": false, "name": "RandomHakkaDude", "type": "user" }
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/63e0b1925ba41def87930c47/OQIn8hn8i8nP9HMjOk5cR.mp4" ]
paper_id: 2502.20383
paper_authors: [ { "_id": "67c284e76e9f0735ea1c436d", "hidden": false, "name": "Jeffrey Yang Fan Chiang", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:51:34.456Z", "user": { "_id": "63e0b1925ba41def87930c47", "avatarUrl": "/avatars/4d55fdbe979ddf72a21430d66518d24f.svg", ...
paper_publishedAt: 2025-02-27T18:56:26
paper_title: Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis
paper_summary: Recent advancements in Web AI agents have demonstrated remarkable capabilities in addressing complex web navigation tasks. However, emerging research shows that these agents exhibit greater vulnerability compared to standalone Large Language Models (LLMs), despite both being built upon the same safety-aligned models. T...
paper_upvotes: 1
paper_discussionId: 67c284e96e9f0735ea1c43dd
paper_projectPage: https://vulnerable-ai-agents.github.io/
paper_githubRepo: null

publishedAt: 2025-03-04T08:19:57.557000
title: General Reasoning Requires Learning to Reason from the Get-go
thumbnail: https://cdn-thumbnails.h…s/2502.19402.png
numComments: 1
submittedBy: { "_id": "6520d6db2a16045c092b3b36", "avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg", "followerCount": null, "fullname": "Seungwook Han", "isHf": false, "isMod": false, "isPro": false, "name": "hanseungwook", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.19402
paper_authors: [ { "_id": "67c66a6321d722b4247e5959", "hidden": false, "name": "Seungwook Han", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T16:08:58.266Z", "user": { "_id": "6520d6db2a16045c092b3b36", "avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg", "fullname...
paper_publishedAt: 2025-02-26T18:51:12
paper_title: General Reasoning Requires Learning to Reason from the Get-go
paper_summary: Large Language Models (LLMs) have demonstrated impressive real-world utility, exemplifying artificial useful intelligence (AUI). However, their ability to reason adaptively and robustly -- the hallmarks of artificial general intelligence (AGI) -- remains fragile. While LLMs seemingly succeed in commonsense reasoning, p...
paper_upvotes: 4
paper_discussionId: 67c66a6521d722b4247e59c8
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T08:11:33.371000
title: PodAgent: A Comprehensive Framework for Podcast Generation
thumbnail: https://cdn-thumbnails.h…s/2503.00455.png
numComments: 1
submittedBy: { "_id": "674836767b7151c3ff30f865", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jcwK5NW-efhCt8s2TE6vK.png", "followerCount": null, "fullname": "Yujia Xiao", "isHf": false, "isMod": false, "isPro": false, "name": "Yogurt928", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.00455
paper_authors: [ { "_id": "67c6facdd8af5b36fd4b59cf", "hidden": false, "name": "Yujia Xiao", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T16:08:12.490Z", "user": { "_id": "674836767b7151c3ff30f865", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth...
paper_publishedAt: 2025-03-01T11:35:17
paper_title: PodAgent: A Comprehensive Framework for Podcast Generation
paper_summary: Existing automatic audio generation methods struggle to generate podcast-like audio programs effectively. The key challenges lie in in-depth content generation, appropriate and expressive voice production. This paper proposed PodAgent, a comprehensive framework for creating audio programs. PodAgent 1) generate...
paper_upvotes: 5
paper_discussionId: 67c6facfd8af5b36fd4b5a45
paper_projectPage: https://podcast-agent.github.io/demo/
paper_githubRepo: https://github.com/yujxx/PodAgent

publishedAt: 2025-03-04T06:41:49.997000
title: When an LLM is apprehensive about its answers -- and when its uncertainty is justified
thumbnail: https://cdn-thumbnails.h…s/2503.01688.png
numComments: 1
submittedBy: { "_id": "675708985b91dea24c3ef642", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/675708985b91dea24c3ef642/8KmerI1LwJEBHM2vrC54d.jpeg", "followerCount": null, "fullname": "Andrey Goncharov", "isHf": false, "isMod": false, "isPro": false, "name": "aigoncharov", "type": "user" ...
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/675708985b91dea24c3ef642/9wCzAalApYA8hPN94CaEu.png" ]
paper_id: 2503.01688
paper_authors: [ { "_id": "67c6e6735aea9d8918635ac2", "hidden": false, "name": "Petr Sychev", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T12:01:33.230Z", "user": { "_id": "6728224623d75cbd1cdbe568", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth...
paper_publishedAt: 2025-03-03T16:03:46
paper_title: When an LLM is apprehensive about its answers -- and when its uncertainty is justified
paper_summary: Uncertainty estimation is crucial for evaluating Large Language Models (LLMs), particularly in high-stakes domains where incorrect answers result in significant consequences. Numerous approaches consider this problem, while focusing on a specific type of uncertainty, ignoring others. We investigate what estimates, spec...
paper_upvotes: 16
paper_discussionId: 67c6e6755aea9d8918635b20
paper_projectPage: null
paper_githubRepo: https://github.com/LabARSS/question-complextiy-estimation

publishedAt: 2025-03-04T05:28:10.012000
title: SampleMix: A Sample-wise Pre-training Data Mixing Strategey by Coordinating Data Quality and Diversity
thumbnail: https://cdn-thumbnails.h…s/2503.01506.png
numComments: 1
submittedBy: { "_id": "65a0aade5fafc248c2156e95", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65a0aade5fafc248c2156e95/S9YjJMTuKc-U1cFizqUMA.jpeg", "followerCount": 1, "fullname": "DeyangKong", "isHf": false, "isMod": false, "isPro": false, "name": "DeyangKong", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01506
paper_authors: [ { "_id": "67c67cf5c8d296910ca74711", "hidden": false, "name": "Xiangyu Xi", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T12:01:25.632Z", "user": { "_id": "63edb098679c2cc40abc6c2e", "avatarUrl": "/avatars/288c7229937c2c3f29fda6d17c7df2eb.svg", "fullname"...
paper_publishedAt: 2025-03-03T13:22:11
paper_title: SampleMix: A Sample-wise Pre-training Data Mixing Strategey by Coordinating Data Quality and Diversity
paper_summary: Existing pretraining data mixing methods for large language models (LLMs) typically follow a domain-wise methodology, a top-down process that first determines domain weights and then performs uniform data sampling across each domain. However, these approaches neglect significant inter-domain overlaps and commonalities,...
paper_upvotes: 7
paper_discussionId: 67c67d03c8d296910ca7494f
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T05:13:44.578000
title: Word Form Matters: LLMs' Semantic Reconstruction under Typoglycemia
thumbnail: https://cdn-thumbnails.h…s/2503.01714.png
numComments: 1
submittedBy: { "_id": "65407ba7a38390065750233f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg", "followerCount": 1, "fullname": "Zirui Song", "isHf": false, "isMod": false, "isPro": false, "name": "Ziruibest", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01714
paper_authors: [ { "_id": "67c6d22d983375492193aab0", "hidden": false, "name": "Chenxi Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T11:16:44.551Z", "user": { "_id": "679bc0ec7f3c28bf968321c8", "avatarUrl": "/avatars/9d5ab9c6af32878e28987518c0210c1a.svg", "fullname...
paper_publishedAt: 2025-03-03T16:31:45
paper_title: Word Form Matters: LLMs' Semantic Reconstruction under Typoglycemia
paper_summary: Human readers can efficiently comprehend scrambled words, a phenomenon known as Typoglycemia, primarily by relying on word form; if word form alone is insufficient, they further utilize contextual cues for interpretation. While advanced large language models (LLMs) exhibit similar abilities, the underlying mechanisms r...
paper_upvotes: 5
paper_discussionId: 67c6d22e983375492193ab13
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T05:12:10.849000
title: Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator
thumbnail: https://cdn-thumbnails.h…s/2503.01103.png
numComments: 1
submittedBy: { "_id": "652bf7edc3cba555d5673c6e", "avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg", "followerCount": null, "fullname": "Kaiwen Zheng", "isHf": false, "isMod": false, "isPro": false, "name": "worstcoder", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01103
paper_authors: [ { "_id": "67c6d1c35e896ed915374027", "hidden": false, "name": "Kaiwen Zheng", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T10:17:24.142Z", "user": { "_id": "652bf7edc3cba555d5673c6e", "avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg", "fullnam...
paper_publishedAt: 2025-03-03T02:06:22
paper_title: Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator
paper_summary: While likelihood-based generative models, particularly diffusion and autoregressive models, have achieved remarkable fidelity in visual generation, the maximum likelihood estimation (MLE) objective inherently suffers from a mode-covering tendency that limits the generation quality under limited model capacity. In this ...
paper_upvotes: 2
paper_discussionId: 67c6d1c65e896ed9153740e4
paper_projectPage: https://research.nvidia.com/labs/dir/ddo/
paper_githubRepo: null

publishedAt: 2025-03-04T04:56:33.061000
title: From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens
thumbnail: https://cdn-thumbnails.h…s/2502.18890.png
numComments: 1
submittedBy: { "_id": "63a95a6a7930fa8c7dd63d4e", "avatarUrl": "/avatars/d9d0420f7ddfe2f3a7e029fb05f1c89f.svg", "followerCount": 3, "fullname": "Zilong Zheng", "isHf": false, "isMod": false, "isPro": false, "name": "zlzheng", "type": "user" }
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/63a95a6a7930fa8c7dd63d4e/3WZ10b-Ku3GcY1fc1MWx8.mp4" ]
paper_id: 2502.18890
paper_authors: [ { "_id": "67c6cbd6e52534aa6ada2e26", "hidden": false, "name": "Tong Wu", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T10:58:45.670Z", "user": { "_id": "668f7fee5156d55f72af4f21", "avatarUrl": "/avatars/02edf8d7d5f288d80dc665b18dda4d0a.svg", "fullname": "To...
paper_publishedAt: 2025-02-26T07:10:08
paper_title: From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens
paper_summary: Generating ultra-long sequences with large language models (LLMs) has become increasingly crucial but remains a highly time-intensive task, particularly for sequences up to 100K tokens. While traditional speculative decoding methods exist, simply extending their generation limits fails to accelerate the process and can...
paper_upvotes: 7
paper_discussionId: 67c6cbd7e52534aa6ada2e79
paper_projectPage: null
paper_githubRepo: https://github.com/bigai-nlco/TokenSwift

publishedAt: 2025-03-04T04:54:04.054000
title: DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion
thumbnail: https://cdn-thumbnails.h…s/2503.01183.png
numComments: 1
submittedBy: { "_id": "624bebf604abc7ebb01789af", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649143001781-624bebf604abc7ebb01789af.jpeg", "followerCount": 3863, "fullname": "Apolinário from multimodal AI art", "isHf": true, "isMod": false, "isPro": true, "name": "multimodalart", "type"...
isAuthorParticipating: false
mediaUrls: null
paper_id: 2503.01183
paper_authors: [ { "_id": "67c6a15e21d722b4248bd9c2", "hidden": false, "name": "Ziqian Ning", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c6a15e21d722b4248bd9c3", "hidden": false, "name": "Huakang Chen", "status": null, "statusLastChangedAt": null, "us...
paper_publishedAt: 2025-03-03T05:15:34
paper_title: DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion
paper_summary: Recent advancements in music generation have garnered significant attention, yet existing approaches face critical limitations. Some current generative models can only synthesize either the vocal track or the accompaniment track. While some models can generate combined vocal and accompaniment, they typically rely on me...
paper_upvotes: 18
paper_discussionId: 67c6a16021d722b4248bda37
paper_projectPage: https://aslp-lab.github.io/DiffRhythm.github.io/
paper_githubRepo: https://github.com/ASLP-lab/DiffRhythm

publishedAt: 2025-03-04T04:17:23.806000
title: Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model
thumbnail: https://cdn-thumbnails.h…s/2502.16779.png
numComments: 1
submittedBy: { "_id": "642bdfc65edcc5760cb1ea12", "avatarUrl": "/avatars/599b0bbb379b43cd39097c204c946075.svg", "followerCount": null, "fullname": "huang", "isHf": false, "isMod": false, "isPro": false, "name": "yxuan", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.16779
paper_authors: [ { "_id": "67c65c06e116e361574405e9", "hidden": false, "name": "Yaxuan Huang", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:51:27.582Z", "user": { "_id": "642bdfc65edcc5760cb1ea12", "avatarUrl": "/avatars/599b0bbb379b43cd39097c204c946075.svg", "fullnam...
paper_publishedAt: 2025-02-24T02:14:19
paper_title: Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model
paper_summary: Room layout estimation from multiple-perspective images is poorly investigated due to the complexities that emerge from multi-view geometry, which requires multi-step solutions such as camera intrinsic and extrinsic estimation, image matching, and triangulation. However, in 3D reconstruction, the advancement of recent 3...
paper_upvotes: 2
paper_discussionId: 67c65c0be116e36157440751
paper_projectPage: null
paper_githubRepo: https://github.com/justacar/Plane-DUSt3R

publishedAt: 2025-03-04T03:56:04.503000
title: OneRec: Unifying Retrieve and Rank with Generative Recommender and Iterative Preference Alignment
thumbnail: https://cdn-thumbnails.h…s/2502.18965.png
numComments: 1
submittedBy: { "_id": "668f5875b5b3081d776e4094", "avatarUrl": "/avatars/8c763393f25afbe5fb8b132f775e746a.svg", "followerCount": 1, "fullname": "Xiaohuan Zhou", "isHf": false, "isMod": false, "isPro": false, "name": "XiaohuanZhou", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.18965
paper_authors: [ { "_id": "67c6bfdf96b9f5fa18c517db", "hidden": false, "name": "Jiaxin Deng", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T10:16:32.410Z", "user": { "_id": "625f6ebee1994410eef16a42", "avatarUrl": "/avatars/eaa353afe91e849adcd35656477a6462.svg", "fullname":...
paper_publishedAt: 2025-02-26T09:25:10
paper_title: OneRec: Unifying Retrieve and Rank with Generative Recommender and Iterative Preference Alignment
paper_summary: Recently, generative retrieval-based recommendation systems have emerged as a promising paradigm. However, most modern recommender systems adopt a retrieve-and-rank strategy, where the generative model functions only as a selector during the retrieval stage. In this paper, we propose OneRec, which replaces the cascaded...
paper_upvotes: 18
paper_discussionId: 67c6bfe396b9f5fa18c518e5
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T03:20:03.380000
title: AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond Human Understanding
thumbnail: https://cdn-thumbnails.h…s/2503.01063.png
numComments: 1
submittedBy: { "_id": "63136a82e29fb2e86d5e5bdd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png", "followerCount": null, "fullname": "David Noever", "isHf": false, "isMod": false, "isPro": false, "name": "dnoever", "type": "user" }
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/63136a82e29fb2e86d5e5bdd/mgIPjnhtUaGLR2Iv4ViL6.jpeg" ]
paper_id: 2503.01063
paper_authors: [ { "_id": "67c6b72b7aad9a016ae60797", "hidden": false, "name": "David Noever", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T11:17:50.200Z", "user": { "_id": "63136a82e29fb2e86d5e5bdd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a8...
paper_publishedAt: 2025-03-02T23:59:52
paper_title: AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond Human Understanding
paper_summary: This paper investigates the potential for large language models (LLMs) to develop private tonal languages for machine-to-machine (M2M) communication. Inspired by cryptophasia in human twins (affecting up to 50% of twin births) and natural tonal languages like Mandarin and Vietnamese, we implement a precise character-to...
paper_upvotes: 1
paper_discussionId: 67c6b72c7aad9a016ae607bb
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T02:48:58.261000
title: Liger: Linearizing Large Language Models to Gated Recurrent Structures
thumbnail: https://cdn-thumbnails.h…s/2503.01496.png
numComments: 1
submittedBy: { "_id": "6246bb33da617c00b48e4d92", "avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg", "followerCount": 3, "fullname": "Weigao Sun", "isHf": false, "isMod": false, "isPro": false, "name": "weigao266", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01496
paper_authors: [ { "_id": "67c6b05f35198d0f397adc98", "hidden": false, "name": "Disen Lan", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:34:46.117Z", "user": { "_id": "66ea643899af9ac3463639b1", "avatarUrl": "/avatars/252d470e761a57834dee3dbc60dfefed.svg", "fullname":...
paper_publishedAt: 2025-03-03T13:08:00
paper_title: Liger: Linearizing Large Language Models to Gated Recurrent Structures
paper_summary: Transformers with linear recurrent modeling offer linear-time training and constant-memory inference. Despite their demonstrated efficiency and performance, pretraining such non-standard architectures from scratch remains costly and risky. The linearization of large language models (LLMs) transforms pretrained standard...
paper_upvotes: 13
paper_discussionId: 67c6b06035198d0f397adcc4
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T02:27:17.351000
title: CLEA: Closed-Loop Embodied Agent for Enhancing Task Execution in Dynamic Environments
thumbnail: https://cdn-thumbnails.h…s/2503.00729.png
numComments: 1
submittedBy: { "_id": "6628c6107751d297d7025a71", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg", "followerCount": 1, "fullname": "Lei Mingcong", "isHf": false, "isMod": false, "isPro": false, "name": "SP4595", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.00729
paper_authors: [ { "_id": "67c6ab3ec0b62d612c54ddf5", "hidden": false, "name": "Mingcong Lei", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:34:48.061Z", "user": { "_id": "6628c6107751d297d7025a71", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c...
paper_publishedAt: 2025-03-02T04:50:59
paper_title: CLEA: Closed-Loop Embodied Agent for Enhancing Task Execution in Dynamic Environments
paper_summary: Large Language Models (LLMs) exhibit remarkable capabilities in the hierarchical decomposition of complex tasks through semantic reasoning. However, their application in embodied systems faces challenges in ensuring reliable execution of subtask sequences and achieving one-shot success in long-term task completion. To ...
paper_upvotes: 2
paper_discussionId: 67c6ab42c0b62d612c54df71
paper_projectPage: https://sp4595.github.io/CLEA/
paper_githubRepo: https://github.com/SP4595/CLEA-Closed-Loop-Embodied-Agent

publishedAt: 2025-03-04T02:21:00.460000
title: Speculative Ad-hoc Querying
thumbnail: https://cdn-thumbnails.h…s/2503.00714.png
numComments: 1
submittedBy: { "_id": "6577437552f02732a463d97d", "avatarUrl": "/avatars/8eb271ec249fa9b0d97dfe0eace6da88.svg", "followerCount": null, "fullname": "Haoyu Li", "isHf": false, "isMod": false, "isPro": false, "name": "Haoyu0529", "type": "user" }
isAuthorParticipating: true
mediaUrls: [ "https://cdn-uploads.huggingface.co/production/uploads/6577437552f02732a463d97d/fEkQ4BZ8Yx_CzsjvHBWFq.qt" ]
paper_id: 2503.00714
paper_authors: [ { "_id": "67c6a803025b72f14ccb0939", "hidden": false, "name": "Haoyu Li", "status": "extracted_pending", "statusLastChangedAt": "2025-03-04T07:13:08.306Z", "user": { "_id": "6577437552f02732a463d97d", "avatarUrl": "/avatars/8eb271ec249fa9b0d97dfe0eace6da88.svg", "fullname":...
paper_publishedAt: 2025-03-02T03:44:31
paper_title: Speculative Ad-hoc Querying
paper_summary: Analyzing large datasets requires responsive query execution, but executing SQL queries on massive datasets can be slow. This paper explores whether query execution can begin even before the user has finished typing, allowing results to appear almost instantly. We propose SpeQL, a system that leverages Large Language M...
paper_upvotes: 8
paper_discussionId: 67c6a804025b72f14ccb0994
paper_projectPage: https://github.com/lihy0529/SpeQL
paper_githubRepo: https://github.com/lihy0529/SpeQL

publishedAt: 2025-03-04T02:16:25.633000
title: CodeArena: A Collective Evaluation Platform for LLM Code Generation
thumbnail: https://cdn-thumbnails.h…s/2503.01295.png
numComments: 1
submittedBy: { "_id": "61711f02e0b1ddb56eb9b526", "avatarUrl": "/avatars/3e2fdf774f5bc1f73b450486d6da42d4.svg", "followerCount": 3, "fullname": "Mingzhe Du", "isHf": false, "isMod": false, "isPro": false, "name": "Elfsong", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01295
paper_authors: [ { "_id": "67c6a8b534aeb86063e94010", "hidden": false, "name": "Mingzhe Du", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:34:49.954Z", "user": { "_id": "61711f02e0b1ddb56eb9b526", "avatarUrl": "/avatars/3e2fdf774f5bc1f73b450486d6da42d4.svg", "fullname"...
paper_publishedAt: 2025-03-03T08:31:16
paper_title: CodeArena: A Collective Evaluation Platform for LLM Code Generation
paper_summary: Large Language Models (LLMs) have reshaped code generation by synergizing their exceptional comprehension of natural language and programming syntax, thereby substantially boosting developer productivity. These advancements have prompted numerous efforts to quantitatively evaluate their coding capabilities. However, pe...
paper_upvotes: 5
paper_discussionId: 67c6a8b634aeb86063e9406a
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T01:56:03.632000
title: Qilin: A Multimodal Information Retrieval Dataset with APP-level User Sessions
thumbnail: https://cdn-thumbnails.h…s/2503.00501.png
numComments: 1
submittedBy: { "_id": "60c0ed29d8bc072769d78f48", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg", "followerCount": 2, "fullname": "Qian Dong", "isHf": false, "isMod": false, "isPro": false, "name": "qian", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.00501
paper_authors: [ { "_id": "67c6a343ad6b7c2fa29d5e7e", "hidden": false, "name": "Jia Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T16:08:10.744Z", "user": { "_id": "67c03221aed8409476d39da8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67c03221a...
paper_publishedAt: 2025-03-01T14:15:00
paper_title: Qilin: A Multimodal Information Retrieval Dataset with APP-level User Sessions
paper_summary: User-generated content (UGC) communities, especially those featuring multimodal content, improve user experiences by integrating visual and textual information into results (or items). The challenge of improving user experiences in complex systems with search and recommendation (S&R) services has drawn significant att...
paper_upvotes: 11
paper_discussionId: 67c6a346ad6b7c2fa29d5f88
paper_projectPage: https://huggingface.co/datasets/THUIR/Qilin
paper_githubRepo: https://github.com/RED-Search/Qilin/

publishedAt: 2025-03-04T01:19:45.715000
title: Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation
thumbnail: https://cdn-thumbnails.h…s/2503.01370.png
numComments: 1
submittedBy: { "_id": "6332e2689bf698ce68a22e8c", "avatarUrl": "/avatars/c1922acfda2e6d2fe7b03194a404eb10.svg", "followerCount": 2, "fullname": "JIANTAO LIN", "isHf": false, "isMod": false, "isPro": true, "name": "LTT", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01370
paper_authors: [ { "_id": "67c691673ff65c55829685a0", "hidden": false, "name": "Jiantao Lin", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T10:52:36.682Z", "user": { "_id": "6332e2689bf698ce68a22e8c", "avatarUrl": "/avatars/c1922acfda2e6d2fe7b03194a404eb10.svg", "fullname":...
paper_publishedAt: 2025-03-03T10:07:19
paper_title: Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation
paper_summary: Diffusion models have achieved great success in generating 2D images. However, the quality and generalizability of 3D content generation remain limited. State-of-the-art methods often require large-scale 3D assets for training, which are challenging to collect. In this work, we introduce Kiss3DGen (Keep It Simple and S...
paper_upvotes: 7
paper_discussionId: 67c6916b3ff65c5582968702
paper_projectPage: https://ltt-o.github.io/Kiss3dgen.github.io/
paper_githubRepo: https://github.com/EnVision-Research/Kiss3DGen

publishedAt: 2025-03-04T00:52:22.204000
title: Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models
thumbnail: https://cdn-thumbnails.h…s/2503.01774.png
numComments: 1
submittedBy: { "_id": "633aaf695df91da9cea92960", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/633aaf695df91da9cea92960/9T4y1ru5wt5iKUUqf9_Tt.png", "followerCount": 12, "fullname": "Jay Wu", "isHf": false, "isMod": false, "isPro": false, "name": "jayw", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01774
paper_authors: [ { "_id": "67c694febdab31ec59fea175", "hidden": false, "name": "Jay Zhangjie Wu", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:34:53.874Z", "user": { "_id": "633aaf695df91da9cea92960", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63...
paper_publishedAt: 2025-03-03T17:58:33
paper_title: Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models
paper_summary: Neural Radiance Fields and 3D Gaussian Splatting have revolutionized 3D reconstruction and novel-view synthesis task. However, achieving photorealistic rendering from extreme novel viewpoints remains challenging, as artifacts persist across representations. In this work, we introduce Difix3D+, a novel pipeline designed...
paper_upvotes: 29
paper_discussionId: 67c69500bdab31ec59fea24d
paper_projectPage: https://research.nvidia.com/labs/toronto-ai/difix3d
paper_githubRepo: null

publishedAt: 2025-03-04T00:29:56.570000
title: VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation
thumbnail: https://cdn-thumbnails.h…s/2503.01739.png
numComments: 1
submittedBy: { "_id": "62b32a4429a410b7f6b06710", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b32a4429a410b7f6b06710/VzgvmnlYZWuifZTkIkCxy.jpeg", "followerCount": 14, "fullname": "Wenhao Wang", "isHf": false, "isMod": false, "isPro": false, "name": "WenhaoWang", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01739
paper_authors: [ { "_id": "67c68f7828a037872c5ce5bb", "hidden": false, "name": "Wenhao Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T11:14:37.907Z", "user": { "_id": "62b32a4429a410b7f6b06710", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b32a44...
paper_publishedAt: 2025-03-03T17:00:36
paper_title: VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation
paper_summary: Text-to-video generative models convert textual prompts into dynamic visual content, offering wide-ranging applications in film production, gaming, and education. However, their real-world performance often falls short of user expectations. One key reason is that these models have not been trained on videos related to ...
paper_upvotes: 3
paper_discussionId: 67c68f7a28a037872c5ce60d
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-04T00:09:04.418000
title: Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs
thumbnail: https://cdn-thumbnails.h…s/2503.01307.png
numComments: 1
submittedBy: { "_id": "63e6a880f2e9a8f22c5a1630", "avatarUrl": "/avatars/53b57690fe052ce6882bbfc87b11567c.svg", "followerCount": null, "fullname": "Kanishk Gandhi", "isHf": false, "isMod": false, "isPro": false, "name": "obiwan96", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01307
paper_authors: [ { "_id": "67c68adc0457c9f809c22df8", "hidden": false, "name": "Kanishk Gandhi", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:35:01.161Z", "user": { "_id": "63e6a880f2e9a8f22c5a1630", "avatarUrl": "/avatars/53b57690fe052ce6882bbfc87b11567c.svg", "fulln...
paper_publishedAt: 2025-03-03T08:46:22
paper_title: Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs
paper_summary: Test-time inference has emerged as a powerful paradigm for enabling language models to "think" longer and more carefully about complex challenges, much like skilled human experts. While reinforcement learning (RL) can drive self-improvement in language models on verifiable tasks, some models exhibit substantial gains...
paper_upvotes: 13
paper_discussionId: 67c68add0457c9f809c22e31
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-03T23:44:06.105000
title: Large-Scale Data Selection for Instruction Tuning
thumbnail: https://cdn-thumbnails.h…s/2503.01807.png
numComments: 1
submittedBy: { "_id": "62608fc2ffe8827cb1d89f9f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png", "followerCount": 11, "fullname": "Hamish Ivison", "isHf": false, "isMod": false, "isPro": false, "name": "hamishivi", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01807
paper_authors: [ { "_id": "67c67ff6dec55d10cb10fc9e", "hidden": false, "name": "Hamish Ivison", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:40:13.649Z", "user": { "_id": "62608fc2ffe8827cb1d89f9f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654...
paper_publishedAt: 2025-03-03T18:37:26
paper_title: Large-Scale Data Selection for Instruction Tuning
paper_summary: Selecting high-quality training data from a larger pool is a crucial step when instruction-tuning language models, as carefully curated datasets often produce models that outperform those trained on much larger, noisier datasets. Automated data selection approaches for instruction-tuning are typically tested by selecti...
paper_upvotes: 5
paper_discussionId: 67c67ff9dec55d10cb10fcef
paper_projectPage: null
paper_githubRepo: null

publishedAt: 2025-03-03T23:29:27.952000
title: Visual-RFT: Visual Reinforcement Fine-Tuning
thumbnail: https://cdn-thumbnails.h…s/2503.01785.png
numComments: 1
submittedBy: { "_id": "63fda3fced9eead590ff6918", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677566802735-noauth.jpeg", "followerCount": 16, "fullname": "Zeyi Sun", "isHf": false, "isMod": false, "isPro": false, "name": "Zery", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01785
paper_authors: [ { "_id": "67c6816614a1bf9855188b8b", "hidden": false, "name": "Ziyu Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T09:12:57.481Z", "user": { "_id": "66fe1334ff3ee1f7569fab6d", "avatarUrl": "/avatars/6868b1a545028a9b8bbded52490dc093.svg", "fullname": "z...
paper_publishedAt: 2025-03-03T18:16:32
paper_title: Visual-RFT: Visual Reinforcement Fine-Tuning
paper_summary: Reinforcement Fine-Tuning (RFT) in Large Reasoning Models like OpenAI o1 learns from feedback on its answers, which is especially useful in applications when fine-tuning data is scarce. Recent open-source work like DeepSeek-R1 demonstrates that reinforcement learning with verifiable reward is one key direction in repro...
paper_upvotes: 43
paper_discussionId: 67c6816c14a1bf9855188d8c
paper_projectPage: https://github.com/Liuziyu77/Visual-RFT
paper_githubRepo: https://github.com/Liuziyu77/Visual-RFT

publishedAt: 2025-03-03T23:15:05.187000
title: Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs
thumbnail: https://cdn-thumbnails.h…s/2503.01743.png
numComments: 3
submittedBy: { "_id": "63f5173bb51da4d61da6c038", "avatarUrl": "/avatars/0ee530cf80476aa3985c4d591cd384a1.svg", "followerCount": 6, "fullname": "Young Jin Kim", "isHf": false, "isMod": false, "isPro": false, "name": "ykim362", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.01743
paper_authors: [ { "_id": "67c67d0dfe135a5f482599bb", "hidden": false, "name": "Abdelrahman Abouelenin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c67d0dfe135a5f482599bc", "hidden": false, "name": "Atabak Ashfaq", "status": "admin_assigned", "statusLastC...
paper_publishedAt: 2025-03-03T17:05:52
paper_title: Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs
paper_summary: We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language model trained on high-quality web and synthetic data, significantly outperforming recent open-source models of similar size and matching the performance of models twice...
paper_upvotes: 42
paper_discussionId: 67c67d0efe135a5f48259a38
paper_projectPage: https://huggingface.co/microsoft/Phi-4-multimodal-instruct
paper_githubRepo: null

publishedAt: 2025-03-03T22:35:45.299000
title: DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting
thumbnail: https://cdn-thumbnails.h…s/2503.00784.png
numComments: 1
submittedBy: { "_id": "6485d5b300c9cfe5c2470c81", "avatarUrl": "/avatars/c29aa81d2add795e8448b99274a04b83.svg", "followerCount": 3, "fullname": "Kai", "isHf": false, "isMod": false, "isPro": false, "name": "KaiLv", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2503.00784
paper_authors: [ { "_id": "67c673bcf47209364f0cec96", "hidden": false, "name": "Kai Lv", "status": "admin_assigned", "statusLastChangedAt": "2025-03-04T10:14:11.523Z", "user": { "_id": "6485d5b300c9cfe5c2470c81", "avatarUrl": "/avatars/c29aa81d2add795e8448b99274a04b83.svg", "fullname": "Kai...
paper_publishedAt: 2025-03-02T08:27:48
paper_title: DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting
paper_summary: Large language models (LLMs) exhibit exceptional performance across a wide range of tasks; however, their token-by-token autoregressive generation process significantly hinders inference speed. Speculative decoding presents a promising draft-then-verify framework that reduces generation latency while maintaining output...
paper_upvotes: 8
paper_discussionId: 67c673bdf47209364f0cecb7
paper_projectPage: null
paper_githubRepo: https://github.com/KaiLv69/DuoDecoding

2025-03-03T21:22:16.512000
Predictive Data Selection: The Data That Predicts Is the Data That Teaches
https://cdn-thumbnails.h…s/2503.00808.png
1
{ "_id": "641c9662043963b1c0a1df52", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641c9662043963b1c0a1df52/L1o85EHztv_xP9r6ppljf.jpeg", "followerCount": 2, "fullname": "KaShun SHUM", "isHf": false, "isMod": false, "isPro": false, "name": "ksshumab", "type": "user" }
true
null
2503.00808
[ { "_id": "67c66382e5394bda7cbd03f9", "hidden": false, "name": "Kashun Shum", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:51:25.484Z", "user": { "_id": "641c9662043963b1c0a1df52", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641c96...
2025-03-02T09:21:28
Predictive Data Selection: The Data That Predicts Is the Data That Teaches
Language model pretraining involves training on extensive corpora, where data quality plays a pivotal role. In this work, we aim to directly estimate the contribution of data during pretraining and select pretraining data in an efficient manner. Specifically, we draw inspiration from recent findings showing that compre...
45
67c66383e5394bda7cbd0428
null
https://github.com/hkust-nlp/PreSelect
2025-03-03T11:25:57.425000
Multi-Turn Code Generation Through Single-Step Rewards
https://cdn-thumbnails.h…s/2502.20380.png
2
{ "_id": "6421d2972143035270db37b9", "avatarUrl": "/avatars/4fadeafc273d32cf72fe2f12d444c5e8.svg", "followerCount": 2, "fullname": "Gonzalo Gonzalez", "isHf": false, "isMod": false, "isPro": false, "name": "chalo2000", "type": "user" }
true
null
2502.20380
[ { "_id": "67c34e3beae05d8f94f800b4", "hidden": false, "name": "Arnav Kumar Jain", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c34e3beae05d8f94f800b5", "hidden": false, "name": "Gonzalo Gonzalez-Pumariega", "status": "claimed_verified", "st...
2025-02-27T18:55:05
Multi-Turn Code Generation Through Single-Step Rewards
We address the problem of code generation from multi-turn execution feedback. Existing methods either generate code without feedback or use complex, hierarchical reinforcement learning to optimize multi-turn rewards. We propose a simple yet scalable approach, muCode, that solves multi-turn code generation using only si...
24
67c34e3ceae05d8f94f8010e
https://portal-cornell.github.io/muCode/
https://github.com/portal-cornell/muCode
2025-03-03T10:56:33.810000
Preference Learning Unlocks LLMs' Psycho-Counseling Skills
https://cdn-thumbnails.h…s/2502.19731.png
2
{ "_id": "650857fef3060ea840ffbbfe", "avatarUrl": "/avatars/3a339936021c040f19a21838ae1382c4.svg", "followerCount": 1, "fullname": "Mian Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "billmianz", "type": "user" }
true
null
2502.19731
[ { "_id": "67c36b35e12b50f698e7db1d", "hidden": false, "name": "Mian Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:51:31.238Z", "user": { "_id": "650857fef3060ea840ffbbfe", "avatarUrl": "/avatars/3a339936021c040f19a21838ae1382c4.svg", "fullname"...
2025-02-27T03:50:25
Preference Learning Unlocks LLMs' Psycho-Counseling Skills
Applying large language models (LLMs) to assist in psycho-counseling is an emerging and meaningful approach, driven by the significant gap between patient needs and the availability of mental health support. However, current LLMs struggle to consistently provide effective responses to client speeches, largely due to th...
6
67c36b36e12b50f698e7db51
null
null
2025-03-03T10:26:31.746000
EgoNormia: Benchmarking Physical Social Norm Understanding
https://cdn-thumbnails.h…s/2502.20490.png
2
{ "_id": "61aa376688c20eebf1e8deb3", "avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg", "followerCount": 7, "fullname": "Hao Zhu", "isHf": false, "isMod": false, "isPro": false, "name": "ProKil", "type": "user" }
true
null
2502.20490
[ { "_id": "67c5c853e7c5cfb1d2b52858", "hidden": false, "name": "MohammadHossein Rezaei", "status": "extracted_confirmed", "statusLastChangedAt": "2025-03-03T16:56:51.354Z", "user": { "_id": "63f6ba02a67b8acfa50407bb", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/...
2025-02-27T19:54:16
EgoNormia: Benchmarking Physical Social Norm Understanding
Human activity is moderated by norms. When performing actions in the real world, humans not only follow norms, but also consider the trade-off between different norms. However, machines are often trained without explicit supervision on norm understanding and reasoning, especially when the norms are grounded in a physica...
4
67c5c857e7c5cfb1d2b52994
https://egonormia.org
https://github.com/open-social-world/egonormia
2025-03-03T09:49:10.381000
How far can we go with ImageNet for Text-to-Image generation?
https://cdn-thumbnails.h…s/2502.21318.png
2
{ "_id": "630652803aed65d34e98eee3", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630652803aed65d34e98eee3/XG_PuVFA6ziGQZd3UUZSF.jpeg", "followerCount": 3, "fullname": "Nicolas Dufour", "isHf": false, "isMod": false, "isPro": false, "name": "nicolas-dufour", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/630652803aed65d34e98eee3/8GIi2e6959v5dl4XUVqkc.png" ]
2502.21318
[ { "_id": "67c5c13ca10c7059c3d3d4c9", "hidden": false, "name": "L. Degeorge", "status": "claimed_verified", "statusLastChangedAt": "2025-03-03T16:07:10.195Z", "user": { "_id": "63bb08b07fd5e883e13efd32", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63bb08...
2025-02-28T18:59:42
How far can we go with ImageNet for Text-to-Image generation?
Recent text-to-image (T2I) generation models have achieved remarkable results by training on billion-scale datasets, following a `bigger is better' paradigm that prioritizes data quantity over quality. We challenge this established paradigm by demonstrating that strategic data augmentation of small, well-curated datase...
22
67c5c145a10c7059c3d3d693
https://lucasdegeorge.github.io/projects/t2i_imagenet/
https://github.com/lucasdegeorge/T2I-ImageNet
2025-03-03T09:44:46.734000
DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping
https://cdn-thumbnails.h…s/2502.20900.png
2
{ "_id": "655d9f43b5da99edaf3f2f81", "avatarUrl": "/avatars/c7225b3ed54d099a4fd87682427fb5bf.svg", "followerCount": 2, "fullname": "Yifan Zhong", "isHf": false, "isMod": false, "isPro": false, "name": "Yifan-Zhong", "type": "user" }
false
null
2502.20900
[ { "_id": "67c5beea1b2c18e03a3d5218", "hidden": false, "name": "Yifan Zhong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c5beea1b2c18e03a3d5219", "hidden": false, "name": "Xuchuan Huang", "status": null, "statusLastChangedAt": null, "u...
2025-02-28T09:57:20
DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping
Dexterous grasping remains a fundamental yet challenging problem in robotics. A general-purpose robot must be capable of grasping diverse objects in arbitrary scenarios. However, existing research typically relies on specific assumptions, such as single-object settings or limited environments, leading to constrained ge...
6
67c5beed1b2c18e03a3d52c0
null
null
2025-03-03T09:33:49.658000
TeleRAG: Efficient Retrieval-Augmented Generation Inference with Lookahead Retrieval
https://cdn-thumbnails.h…s/2502.20969.png
2
{ "_id": "6304ac1a412a1b9d381ca378", "avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg", "followerCount": null, "fullname": "Keisuke Kamahori", "isHf": false, "isMod": false, "isPro": false, "name": "kamahori", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6304ac1a412a1b9d381ca378/BYM8EdFZVDrDbfX8LKVC2.png" ]
2502.20969
[ { "_id": "67c5bc8babe08983d98a4248", "hidden": false, "name": "Chien-Yu Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c5bc8babe08983d98a4249", "hidden": false, "name": "Keisuke Kamahori", "status": "claimed_verified", "statusLastChange...
2025-02-28T11:32:22
TeleRAG: Efficient Retrieval-Augmented Generation Inference with Lookahead Retrieval
Retrieval-augmented generation (RAG) extends large language models (LLMs) with external data sources to enhance factual correctness and domain coverage. Modern RAG pipelines rely on large datastores, leading to system challenges in latency-sensitive deployments, especially when limited GPU memory is available. To addre...
7
67c5bc8cabe08983d98a426c
null
null
2025-03-03T08:13:06.912000
MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing
https://cdn-thumbnails.h…s/2502.21291.png
2
{ "_id": "63468720dd6d90d82ccf3450", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg", "followerCount": 32, "fullname": "YSH", "isHf": false, "isMod": false, "isPro": false, "name": "BestWishYsh", "type": "user" }
false
null
2502.21291
[ { "_id": "67c5aad632a7208c9ae1d020", "hidden": false, "name": "Xueyun Tian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c5aad632a7208c9ae1d021", "hidden": false, "name": "Wei Li", "status": "extracted_pending", "statusLastChangedAt": "202...
2025-02-28T18:21:08
MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing
Despite significant progress in diffusion-based image generation, subject-driven generation and instruction-based editing remain challenging. Existing methods typically treat them separately, struggling with limited high-quality data and poor generalization. However, both tasks require capturing complex visual variatio...
4
67c5aad932a7208c9ae1d19a
null
https://github.com/Eureka-Maggie/MIGE
2025-03-03T07:33:14.717000
LettuceDetect: A Hallucination Detection Framework for RAG Applications
https://cdn-thumbnails.h…s/2502.17125.png
2
{ "_id": "646264832538819c729e32ba", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646264832538819c729e32ba/syc-UpPQyR3Nbf-gYndc4.jpeg", "followerCount": 1, "fullname": "Adam Kovacs", "isHf": false, "isMod": false, "isPro": true, "name": "adaamko", "type": "user" }
true
null
2502.17125
[ { "_id": "67c0536530abbab5c723f2e0", "hidden": false, "name": "Ádám Kovács", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:18:13.294Z", "user": { "_id": "646264832538819c729e32ba", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646264...
2025-02-24T13:11:47
LettuceDetect: A Hallucination Detection Framework for RAG Applications
Retrieval Augmented Generation (RAG) systems remain vulnerable to hallucinated answers despite incorporating external knowledge sources. We present LettuceDetect a framework that addresses two critical limitations in existing hallucination detection methods: (1) the context window constraints of traditional encoder-bas...
5
67c0536630abbab5c723f31e
null
https://github.com/KRLabsOrg/LettuceDetect
2025-03-03T07:04:47.515000
Optimal Brain Apoptosis
https://cdn-thumbnails.h…s/2502.17941.png
2
{ "_id": "668e62f6514c46e257387f6b", "avatarUrl": "/avatars/601b111141141cb2ea710b3166e62cd0.svg", "followerCount": null, "fullname": "Mingyuan Sun", "isHf": false, "isMod": false, "isPro": false, "name": "mingyuansun", "type": "user" }
true
null
2502.17941
[ { "_id": "67c59a7e6eb050aa82406452", "hidden": false, "name": "Mingyuan Sun", "status": "claimed_verified", "statusLastChangedAt": "2025-03-03T16:07:21.192Z", "user": { "_id": "668e62f6514c46e257387f6b", "avatarUrl": "/avatars/601b111141141cb2ea710b3166e62cd0.svg", "fullnam...
2025-02-25T08:03:04
Optimal Brain Apoptosis
The increasing complexity and parameter count of Convolutional Neural Networks (CNNs) and Transformers pose challenges in terms of computational efficiency and resource demands. Pruning has been identified as an effective strategy to address these challenges by removing redundant elements such as neurons, channels, or ...
7
67c59a7f6eb050aa824064b9
null
https://github.com/NEU-REAL/OBA
2025-03-03T04:21:42.563000
Tell me why: Visual foundation models as self-explainable classifiers
https://cdn-thumbnails.h…s/2502.19577.png
2
{ "_id": "66588b6fd22637bfab498709", "avatarUrl": "/avatars/9007f0d3b078bd6193912a5359107f24.svg", "followerCount": null, "fullname": "Hugues Turbé", "isHf": false, "isMod": false, "isPro": false, "name": "hturbe", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/66588b6fd22637bfab498709/4VG_eDtZKZ4kj1AdG_P14.png" ]
2502.19577
[ { "_id": "67c42356054ae6d1c760b643", "hidden": false, "name": "Hugues Turbé", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:15:04.391Z", "user": { "_id": "66588b6fd22637bfab498709", "avatarUrl": "/avatars/9007f0d3b078bd6193912a5359107f24.svg", "fullnam...
2025-02-26T21:40:30
Tell me why: Visual foundation models as self-explainable classifiers
Visual foundation models (VFMs) have become increasingly popular due to their state-of-the-art performance. However, interpretability remains crucial for critical applications. In this sense, self-explainable models (SEM) aim to provide interpretable classifiers that decompose predictions into a weighted sum of interpr...
9
67c4235c054ae6d1c760b806
null
null
2025-03-03T02:35:09.967000
Chain of Draft: Thinking Faster by Writing Less
https://cdn-thumbnails.h…s/2502.18600.png
4
{ "_id": "63da3d7ae697e5898cb86854", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675246771355-noauth.jpeg", "followerCount": 86, "fullname": "Talha Rüzgar Akkuş", "isHf": false, "isMod": false, "isPro": false, "name": "Q-bert", "type": "user" }
true
null
2502.18600
[ { "_id": "67c0a8058589d8ecb79d472b", "hidden": false, "name": "Silei Xu", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-27T18:01:14.543Z", "user": { "_id": "6594b1bb57a556fbe162915e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6594b1...
2025-02-25T19:36:06
Chain of Draft: Thinking Faster by Writing Less
Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more efficient strategy: drafting concise intermediate thoughts that cap...
35
67c0a8078589d8ecb79d47ed
null
https://github.com/sileix/chain-of-draft
2025-03-02T22:22:01.895000
ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents
https://cdn-thumbnails.h…s/2502.18017.png
2
{ "_id": "657429d833e5a4bf5b278615", "avatarUrl": "/avatars/ed7e28c1b9a7bed1cad864c992cdcc69.svg", "followerCount": 1, "fullname": "QiuchenWang", "isHf": false, "isMod": false, "isPro": false, "name": "autumncc", "type": "user" }
true
null
2502.18017
[ { "_id": "67bef5a6070ec160042d99f4", "hidden": false, "name": "Qiuchen Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T12:15:57.850Z", "user": { "_id": "657429d833e5a4bf5b278615", "avatarUrl": "/avatars/ed7e28c1b9a7bed1cad864c992cdcc69.svg", "fullnam...
2025-02-25T09:26:12
ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents
Understanding information from visually rich documents remains a significant challenge for traditional Retrieval-Augmented Generation (RAG) methods. Existing benchmarks predominantly focus on image-based question answering (QA), overlooking the fundamental challenges of efficient retrieval, comprehension, and reasoning...
17
67bef5a7070ec160042d9a65
null
https://github.com/Alibaba-NLP/ViDoRAG
2025-03-02T22:08:44.891000
Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids
https://cdn-thumbnails.h…s/2502.20396.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.20396
[ { "_id": "67c51d36c830dcb76bbb5994", "hidden": false, "name": "Toru Lin", "status": "claimed_verified", "statusLastChangedAt": "2025-03-03T16:07:25.709Z", "user": { "_id": "65e8b34632f166badb8d893a", "avatarUrl": "/avatars/a55da1d08dc1104e6c539cd3f1ef1ebe.svg", "fullname": ...
2025-02-27T18:59:52
Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids
Reinforcement learning has delivered promising results in achieving human- or even superhuman-level capabilities across diverse problem domains, but success in dexterous robot manipulation remains limited. This work investigates the key challenges in applying reinforcement learning to solve a collection of contact-rich...
11
67c51d39c830dcb76bbb5a1f
null
null
2025-03-02T22:04:15.087000
HAIC: Improving Human Action Understanding and Generation with Better Captions for Multi-modal Large Language Models
https://cdn-thumbnails.h…s/2502.20811.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.20811
[ { "_id": "67c51c198d02783fa3a6249d", "hidden": false, "name": "Xiao Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c51c198d02783fa3a6249e", "hidden": false, "name": "Jingyun Hua", "status": null, "statusLastChangedAt": null, "user"...
2025-02-28T07:53:40
HAIC: Improving Human Action Understanding and Generation with Better Captions for Multi-modal Large Language Models
Recent Multi-modal Large Language Models (MLLMs) have made great progress in video understanding. However, their performance on videos involving human actions is still limited by the lack of high-quality data. To address this, we introduce a two-stage data annotation pipeline. First, we design strategies to accumulate ...
1
67c51c1b8d02783fa3a62543
null
null
2025-03-02T22:00:31.796000
SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers
https://cdn-thumbnails.h…s/2502.20545.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.20545
[ { "_id": "67c51b459d5807d6674b3d3c", "hidden": false, "name": "Kechen Li", "status": "claimed_verified", "statusLastChangedAt": "2025-03-04T08:51:29.578Z", "user": { "_id": "6742deb4d3ad4510c12da658", "avatarUrl": "/avatars/91407d854560ef9a2facd80fa8fab6ec.svg", "fullname":...
2025-02-27T21:41:43
SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers
Large Language Models (LLMs) have achieved human-level proficiency across diverse tasks, but their ability to perform rigorous mathematical problem solving remains an open challenge. In this work, we investigate a fundamental yet computationally intractable problem: determining whether a given multivariate polynomial i...
17
67c51b469d5807d6674b3d88
null
null
2025-03-02T21:48:46.577000
LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation
https://cdn-thumbnails.h…s/2502.20583.png
2
{ "_id": "6304ac1a412a1b9d381ca378", "avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg", "followerCount": null, "fullname": "Keisuke Kamahori", "isHf": false, "isMod": false, "isPro": false, "name": "kamahori", "type": "user" }
true
null
2502.20583
[ { "_id": "67c516998d02783fa3a52dc8", "hidden": false, "name": "Keisuke Kamahori", "status": "claimed_verified", "statusLastChangedAt": "2025-03-03T08:07:02.986Z", "user": { "_id": "6304ac1a412a1b9d381ca378", "avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg", "ful...
2025-02-27T22:52:21
LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation
Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduc...
9
67c516998d02783fa3a52dfd
null
https://github.com/efeslab/LiteASR
2025-03-02T21:35:24.437000
DeepSolution: Boosting Complex Engineering Solution Design via Tree-based Exploration and Bi-point Thinking
https://cdn-thumbnails.h…s/2502.20730.png
4
{ "_id": "63664c8fa2abcdf2fd6425ed", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63664c8fa2abcdf2fd6425ed/IywpB0DXZ_twkmZmVSCCD.jpeg", "followerCount": 1, "fullname": "Li Zhuoqun", "isHf": false, "isMod": false, "isPro": false, "name": "lzq2021", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/y_kT4GP3xgm-5RdguMNV7.png", "https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/wDAS_USsxsVHbin1I5CEe.png", "https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/4lJgWp9V8pm4vDBU...
2502.20730
[ { "_id": "67c514aba3d873e41624a082", "hidden": false, "name": "Zhuoqun Li", "status": "claimed_verified", "statusLastChangedAt": "2025-03-03T08:07:26.218Z", "user": { "_id": "63664c8fa2abcdf2fd6425ed", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63664c8...
2025-02-28T05:23:10
DeepSolution: Boosting Complex Engineering Solution Design via Tree-based Exploration and Bi-point Thinking
Designing solutions for complex engineering challenges is crucial in human production activities. However, previous research in the retrieval-augmented generation (RAG) field has not sufficiently addressed tasks related to the design of complex engineering solutions. To fill this gap, we introduce a new benchmark, Solu...
30
67c514aca3d873e41624a10b
null
https://github.com/Li-Z-Q/DeepSolution
2025-02-28T16:51:51.551000
PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving
https://cdn-thumbnails.h…s/2502.16111.png
3
{ "_id": "61a00714f5119f1651f7e4be", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1651013366729-61a00714f5119f1651f7e4be.jpeg", "followerCount": 1, "fullname": "Mihir Parmar", "isHf": false, "isMod": false, "isPro": false, "name": "Mihir3009", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/61a00714f5119f1651f7e4be/dZJBpAQlVaJSFYXhuE1Rl.png" ]
2502.16111
[ { "_id": "67be18d2bb66802239ec8095", "hidden": false, "name": "Mihir Parmar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67be18d2bb66802239ec8096", "hidden": false, "name": "Xin Liu", "status": null, "statusLastChangedAt": null, "user":...
2025-02-22T06:21:56
PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving
Recent agent frameworks and inference-time algorithms often struggle with complex planning problems due to limitations in verifying generated plans or reasoning and varying complexity of instances within a single task. Many existing methods for these tasks either perform task-level verification without considering cons...
7
67be18d3bb66802239ec80d1
null
null
2025-02-28T13:21:13.227000
Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation
https://cdn-thumbnails.h…s/2502.20388.png
2
{ "_id": "65317ea1501804124f011950", "avatarUrl": "/avatars/b055c3aba0c65d5377c69472e4576480.svg", "followerCount": 3, "fullname": "Ren", "isHf": false, "isMod": false, "isPro": false, "name": "OliverRen", "type": "user" }
false
null
2502.20388
[ { "_id": "67c1643aa4ccbde471532ba6", "hidden": false, "name": "Sucheng Ren", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c1643aa4ccbde471532ba7", "hidden": false, "name": "Qihang Yu", "status": null, "statusLastChangedAt": null, "user"...
2025-02-27T18:59:08
Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation
Autoregressive (AR) modeling, known for its next-token prediction paradigm, underpins state-of-the-art language and visual generative models. Traditionally, a ``token'' is treated as the smallest prediction unit, often a discrete symbol in language or a quantized patch in vision. However, the optimal token definition f...
13
67c1643ba4ccbde471532c03
null
null
2025-02-28T08:54:03.125000
On Relation-Specific Neurons in Large Language Models
https://cdn-thumbnails.h…s/2502.17355.png
2
{ "_id": "61bf84c8ca59d6d196a1b4e8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg", "followerCount": 44, "fullname": "Amir Hossein Kargaran", "isHf": true, "isMod": false, "isPro": false, "name": "kargaranamir", "type": "use...
true
null
2502.17355
[ { "_id": "67bf1808b91e7e6477d92c1e", "hidden": false, "name": "Yihong Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T15:14:48.351Z", "user": { "_id": "653f7e569e84d1e8b6a66e70", "avatarUrl": "/avatars/24eaa6434508a162c349aebfc51990ff.svg", "fullname"...
2025-02-24T17:33:18
On Relation-Specific Neurons in Large Language Models
In large language models (LLMs), certain neurons can store distinct pieces of knowledge learned during pretraining. While knowledge typically appears as a combination of relations and entities, it remains unclear whether some neurons focus on a relation itself -- independent of any entity. We hypothesize such neurons d...
6
67bf1808b91e7e6477d92c55
null
null
2025-02-28T08:46:19.110000
Guardians of the Agentic System: Preventing Many Shots Jailbreak with Agentic System
https://cdn-thumbnails.h…s/2502.16750.png
2
{ "_id": "653425f4ed74ace63395826c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QJlB0DOEel6U9b-95wasK.png", "followerCount": 3, "fullname": "Saikat Barua", "isHf": false, "isMod": false, "isPro": false, "name": "AlignAI", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/653425f4ed74ace63395826c/czZ9fF4yF6yz3E89YtU6e.jpeg" ]
2502.16750
[ { "_id": "67c1b63744d780e60d7c5274", "hidden": false, "name": "Saikat Barua", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T13:24:57.086Z", "user": { "_id": "653425f4ed74ace63395826c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-au...
2025-02-23T23:35:15
Guardians of the Agentic System: Preventing Many Shots Jailbreak with Agentic System
Autonomous AI agents using large language models can create undeniable value across all spans of society, but they face security threats from adversaries that warrant immediate protective solutions because trust and safety issues arise. Considering the many-shot jailbreaking and deceptive alignment as some of the m...
10
67c1b63a44d780e60d7c5317
null
null
2025-02-28T07:55:48.923000
Training Consistency Models with Variational Noise Coupling
https://cdn-thumbnails.h…s/2502.18197.png
2
{ "_id": "67c07f498589d8ecb7912686", "avatarUrl": "/avatars/84e77389c211a7c4237f73208658c23a.svg", "followerCount": null, "fullname": "Gianluigi Silvestri", "isHf": false, "isMod": false, "isPro": false, "name": "gisilvs", "type": "user" }
true
null
2502.18197
[ { "_id": "67c07fa2a43d7939d6d90d54", "hidden": false, "name": "Gianluigi Silvestri", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T22:09:04.844Z", "user": { "_id": "67c07f498589d8ecb7912686", "avatarUrl": "/avatars/84e77389c211a7c4237f73208658c23a.svg", "...
2025-02-25T13:38:04
Training Consistency Models with Variational Noise Coupling
Consistency Training (CT) has recently emerged as a promising alternative to diffusion models, achieving competitive performance in image generation tasks. However, non-distillation consistency training often suffers from high variance and instability, and analyzing and improving its training dynamics is an active area...
5
67c07fa6a43d7939d6d90e1f
null
null
2025-02-28T07:25:35.166000
Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling
https://cdn-thumbnails.h…s/2502.20378.png
2
{ "_id": "6442882f8443bce4c98a88aa", "avatarUrl": "/avatars/70d5aa651b07b43629554096d76efd4c.svg", "followerCount": 1, "fullname": "Kong", "isHf": false, "isMod": false, "isPro": false, "name": "imsuperkong", "type": "user" }
true
null
2502.20378
[ { "_id": "67c1aa781c3a8036977ed8b1", "hidden": false, "name": "Hanyang Kong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T15:13:50.949Z", "user": { "_id": "6442882f8443bce4c98a88aa", "avatarUrl": "/avatars/70d5aa651b07b43629554096d76efd4c.svg", "fullnam...
2025-02-27T18:53:06
Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling
Rendering dynamic scenes from monocular videos is a crucial yet challenging task. The recent deformable Gaussian Splatting has emerged as a robust solution to represent real-world dynamic scenes. However, it often leads to heavily redundant Gaussians, attempting to fit every training view at various time steps, leading...
4
67c1aa7a1c3a8036977ed977
null
null
2025-02-28T04:47:08.197000
Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting
https://cdn-thumbnails.h…s/2502.19459.png
2
{ "_id": "63c7a33121bd95f80ed74652", "avatarUrl": "/avatars/7dd59afea785a2bff0ec2b757abd474e.svg", "followerCount": 2, "fullname": "Siyuan Huang", "isHf": false, "isMod": false, "isPro": false, "name": "thuhsy", "type": "user" }
true
null
2502.19459
[ { "_id": "67c185f46a31b8fe77434551", "hidden": false, "name": "Yu Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:35:40.098Z", "user": { "_id": "636de85cc4a7a729c164d2b5", "avatarUrl": "/avatars/3e281e547e1697e1c06805e7e63f3918.svg", "fullname": "Yu ...
2025-02-26T10:25:32
Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting
Building articulated objects is a key challenge in computer vision. Existing methods often fail to effectively integrate information across different object states, limiting the accuracy of part-mesh reconstruction and part dynamics modeling, particularly for complex multi-part articulated objects. We introduce ArtGS, ...
8
67c185f66a31b8fe774345d2
https://articulate-gs.github.io
https://github.com/YuLiu-LY/ArtGS
2025-02-28T04:36:05.045000
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning
https://cdn-thumbnails.h…s/2502.19634.png
3
{ "_id": "631b9ff5824f2502e3557c7e", "avatarUrl": "/avatars/076043c9dba07644a570692563ef8114.svg", "followerCount": 5, "fullname": "liu", "isHf": false, "isMod": false, "isPro": false, "name": "che111", "type": "user" }
true
null
2502.19634
[ { "_id": "67c12bf3505a88e4a1866a01", "hidden": false, "name": "Jiazhen Pan", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:18:10.120Z", "user": { "_id": "66588c8a338165aad1516756", "avatarUrl": "/avatars/c6539b4ef65f465f6f762628d6921be6.svg", "fullname...
2025-02-26T23:57:34
MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning
Reasoning is a critical frontier for advancing medical image analysis, where transparency and trustworthiness play a central role in both clinician trust and regulatory approval. Although Medical Visual Language Models (VLMs) show promise for radiological tasks, most existing VLMs merely produce final answers without r...
54
67c12bf4505a88e4a1866a35
null
null
2025-02-28T04:02:19.534000
Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think
https://cdn-thumbnails.h…s/2502.20172.png
3
{ "_id": "63468720dd6d90d82ccf3450", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg", "followerCount": 32, "fullname": "YSH", "isHf": false, "isMod": false, "isPro": false, "name": "BestWishYsh", "type": "user" }
false
null
2502.20172
[ { "_id": "67c17b8f60206395233b7e46", "hidden": false, "name": "Liang Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:34:40.397Z", "user": { "_id": "658c481dd1c8b106727a8b73", "avatarUrl": "/avatars/d34a7a62c3a524e5fdd2d5994348db58.svg", "fullname": ...
2025-02-27T15:08:39
Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think
The field of advanced text-to-image generation is witnessing the emergence of unified frameworks that integrate powerful text encoders, such as CLIP and T5, with Diffusion Transformer backbones. Although there have been efforts to control output images with additional conditions, like canny and depth map, a comprehensi...
24
67c17b9160206395233b7e9c
null
null
2025-02-28T03:27:32.294000
NeoBERT: A Next-Generation BERT
https://cdn-thumbnails.h…s/2502.19587.png
6
{ "_id": "6317233cc92fd6fee317e030", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png", "followerCount": 1617, "fullname": "Tom Aarsen", "isHf": true, "isMod": false, "isPro": false, "name": "tomaarsen", "type": "user" }
false
null
2502.19587
[ { "_id": "67c13aa6a43d7939d60eb02e", "hidden": false, "name": "Lola Le Breton", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:35:52.732Z", "user": { "_id": "6512e961332b85e7cf8c1431", "avatarUrl": "/avatars/d4bdb9670166112dcb36753bc1823b28.svg", "fullnam...
2025-02-26T22:00:22
NeoBERT: A Next-Generation BERT
Recent innovations in architecture, pre-training, and fine-tuning have led to the remarkable in-context learning and reasoning abilities of large auto-regressive language models such as LLaMA and DeepSeek. In contrast, encoders like BERT and RoBERTa have not seen the same level of progress despite being foundational fo...
34
67c13aa7a43d7939d60eb065
null
null
2025-02-28T01:55:41.427000
Lean and Mean: Decoupled Value Policy Optimization with Global Value Guidance
https://cdn-thumbnails.h…s/2502.16944.png
2
{ "_id": "669dcf6200970c3b27aafa5d", "avatarUrl": "/avatars/bb9ed5ff86326fdaeb184c6b0e40f74f.svg", "followerCount": null, "fullname": "kaikai yang", "isHf": false, "isMod": false, "isPro": false, "name": "keanudicap", "type": "user" }
true
null
2502.16944
[ { "_id": "67be807e8a5a805423137ca2", "hidden": false, "name": "Chenghua Huang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:30:31.972Z", "user": { "_id": "664af07a691370727c281031", "avatarUrl": "/avatars/e5ed17342e0ea953bacc7d57e9f3b686.svg", "fullnam...
2025-02-24T08:11:33
Lean and Mean: Decoupled Value Policy Optimization with Global Value Guidance
Proximal Policy Optimization (PPO)-based Reinforcement Learning from Human Feedback (RLHF) is essential for aligning large language models (LLMs) with human preferences. It requires joint training of an actor and critic with a pretrained, fixed reward model for guidance. This approach increases computational complexity...
10
67be807e8a5a805423137cc2
null
null
2025-02-28T01:14:11.268000
FINEREASON: Evaluating and Improving LLMs' Deliberate Reasoning through Reflective Puzzle Solving
https://cdn-thumbnails.h…s/2502.20238.png
2
{ "_id": "64e85b3edb3767299865e0e3", "avatarUrl": "/avatars/fdbe121535dea940edd2766161393485.svg", "followerCount": null, "fullname": "Chen", "isHf": false, "isMod": false, "isPro": false, "name": "Guizhen", "type": "user" }
true
null
2502.20238
[ { "_id": "67c15306333e2f71f01c8e35", "hidden": false, "name": "Guizhen Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:28:41.974Z", "user": { "_id": "64e85b3edb3767299865e0e3", "avatarUrl": "/avatars/fdbe121535dea940edd2766161393485.svg", "fullname"...
2025-02-27T16:23:25
FINEREASON: Evaluating and Improving LLMs' Deliberate Reasoning through Reflective Puzzle Solving
Many challenging reasoning tasks require not just rapid, intuitive responses, but a more deliberate, multi-step approach. Recent progress in large language models (LLMs) highlights an important shift from the "System 1" way of quick reactions to the "System 2" style of reflection-and-correction problem solving. However...
23
67c15307333e2f71f01c8ebc
null
null
2025-02-28T00:14:01.841000
Mobius: Text to Seamless Looping Video Generation via Latent Shift
https://cdn-thumbnails.h…s/2502.20307.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.20307
[ { "_id": "67c1460201cef6d4b9b9ac73", "hidden": false, "name": "Xiuli Bi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c1460201cef6d4b9b9ac74", "hidden": false, "name": "Jianfei Yuan", "status": null, "statusLastChangedAt": null, "user"...
2025-02-27T17:33:51
Mobius: Text to Seamless Looping Video Generation via Latent Shift
We present Mobius, a novel method to generate seamlessly looping videos directly from text descriptions without any user annotations, thereby creating new visual materials for multimedia presentations. Our method repurposes a pre-trained video latent diffusion model to generate looping videos from text prompts...
16
67c1460501cef6d4b9b9addf
null
null
2025-02-28T00:10:30.864000
FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute
https://cdn-thumbnails.h…s/2502.20126.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.20126
[ { "_id": "67c14524af5eaa8dd062a216", "hidden": false, "name": "Sotiris Anagnostidis", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:32:18.844Z", "user": { "_id": "62f8f4ff92e64c61bc6938da", "avatarUrl": "/avatars/d386eb35d2c3d52186b2a8ec957f51bc.svg", "f...
2025-02-27T14:16:56
FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute
Despite their remarkable performance, modern Diffusion Transformers are hindered by substantial resource requirements during inference, stemming from the fixed and large amount of compute needed for each denoising step. In this work, we revisit the conventional static paradigm that allocates a fixed compute budget per ...
18
67c14529af5eaa8dd062a38c
null
null
2025-02-28T00:03:34.893000
R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning Learning
https://cdn-thumbnails.h…s/2502.19735.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.19735
[ { "_id": "67c1438fd7ffcd1cab1fc412", "hidden": false, "name": "Minggui He", "status": "extracted_pending", "statusLastChangedAt": "2025-02-28T05:03:12.675Z", "user": { "_id": "6727998d4fc2e4f7cc0c85d3", "avatarUrl": "/avatars/ac18eaadd606f7fae64996502f393cf2.svg", "fullname...
2025-02-27T03:57:00
R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning Learning
Despite recent breakthroughs in reasoning-enhanced large language models (LLMs) like DeepSeek-R1, incorporating inference-time reasoning into machine translation (MT), where human translators naturally employ structured, multi-layered reasoning chain-of-thoughts (CoTs), remains underexplored. Existing methods either des...
7
67c14390d7ffcd1cab1fc479
null
null
2025-02-27T23:34:45.416000
UniTok: A Unified Tokenizer for Visual Generation and Understanding
https://cdn-thumbnails.h…s/2502.20321.png
2
{ "_id": "6344dcb1cd37e44d9ed46508", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6344dcb1cd37e44d9ed46508/J92UKSxKR3iziD2WJfih4.jpeg", "followerCount": 7, "fullname": "Yi Jiang", "isHf": false, "isMod": false, "isPro": false, "name": "JiangYi", "type": "user" }
true
null
2502.20321
[ { "_id": "67c13c68d8247a49b808fdac", "hidden": false, "name": "Chuofan Ma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:31:29.232Z", "user": { "_id": "62c585eb09baf76938a70de8", "avatarUrl": "/avatars/ae8cca53710b3325bf0dd0f08c2b1bbf.svg", "fullname": ...
2025-02-27T17:47:01
UniTok: A Unified Tokenizer for Visual Generation and Understanding
The representation disparity between visual generation and understanding imposes a critical gap in integrating these capabilities into a single framework. To bridge this gap, we introduce UniTok, a discrete visual tokenizer that encodes fine-grained details for generation while also capturing high-level semantics for u...
25
67c13c6ad8247a49b8090003
null
null
2025-02-27T23:04:14.619000
CODESYNC: Synchronizing Large Language Models with Dynamic Code Evolution at Scale
https://cdn-thumbnails.h…s/2502.16645.png
2
{ "_id": "643be8879f5d314db2d9ed23", "avatarUrl": "/avatars/64e9bb2c4e10fbe03e2b81afedf40865.svg", "followerCount": 4, "fullname": "Chen Dongping", "isHf": false, "isMod": false, "isPro": false, "name": "shuaishuaicdp", "type": "user" }
false
null
2502.16645
[ { "_id": "67c12e60d8247a49b805694f", "hidden": false, "name": "Chenlong Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:29:50.564Z", "user": { "_id": "6441270ead24e9b2cfbc45e0", "avatarUrl": "/avatars/92eab1ae50efaaee070674ae20244fc0.svg", "fullname...
2025-02-23T16:46:18
CODESYNC: Synchronizing Large Language Models with Dynamic Code Evolution at Scale
Large Language Models (LLMs) have exhibited exceptional performance in software engineering yet face challenges in adapting to continually evolving code knowledge, particularly regarding the frequent updates of third-party library APIs. This limitation, stemming from static pre-training datasets, often results in non-e...
19
67c12e61d8247a49b805698f
null
null
2025-02-27T22:38:04.562000
SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning
https://cdn-thumbnails.h…s/2502.20127.png
2
{ "_id": "654da66fb36f85a025bc24b6", "avatarUrl": "/avatars/e5542856ab4bf1845e8f546b5f17cd99.svg", "followerCount": 1, "fullname": "Zexiong Ma", "isHf": false, "isMod": false, "isPro": false, "name": "mizersy", "type": "user" }
true
null
2502.20127
[ { "_id": "67c12de08cd49ca63e230b99", "hidden": false, "name": "Zexiong Ma", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T09:28:35.503Z", "user": { "_id": "654da66fb36f85a025bc24b6", "avatarUrl": "/avatars/e5542856ab4bf1845e8f546b5f17cd99.svg", "fullname"...
2025-02-27T14:19:45
SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning
Mainstream issue-resolving frameworks predominantly rely on commercial models, leading to high costs and privacy concerns. Existing training approaches for issue resolving struggle with poor generalization and fail to fully leverage open-source development resources. We propose Subtask-oriented Reinforced Fine-Tuning (...
9
67c12de08cd49ca63e230bd1
null
null
2025-02-27T22:27:24.486000
R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts
https://cdn-thumbnails.h…s/2502.20395.png
5
{ "_id": "647f5af5b0e96764589f3b2a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/VJ4cDyjp5M3V5WmI5gPIU.jpeg", "followerCount": 12, "fullname": "Tianyi Zhou", "isHf": false, "isMod": false, "isPro": false, "name": "zhoutianyi", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/PaZkWIhqZBRCSfBA-k4OX.png", "https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/FASlyPDiSb9VHZaeWMj9H.png", "https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/kGeIJVMDDAbIassi...
2502.20395
[ { "_id": "67c12b5def9af74902537b98", "hidden": false, "name": "Zhongyang Li", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T12:14:22.809Z", "user": { "_id": "671002fd13203512e7b8f9e3", "avatarUrl": "/avatars/313d8ea313ed300750cfdaaca44fdb6e.svg", "fullnam...
2025-02-27T18:59:32
R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts
In large multimodal models (LMMs), the perception of non-language modalities (e.g., visual representations) is usually not on par with the large language models (LLMs)' powerful reasoning capabilities, deterring LMMs' performance on challenging downstream tasks. This weakness has been recently mitigated by replacing th...
40
67c12b5eef9af74902537c00
null
null
2025-02-27T22:22:53.713000
LongRoPE2: Near-Lossless LLM Context Window Scaling
https://cdn-thumbnails.h…s/2502.20082.png
2
{ "_id": "62b0009c72043b05d29492b2", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0009c72043b05d29492b2/NqRkX2YLhlfOLvYysa7dD.png", "followerCount": 27, "fullname": "Li Lyna Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "lynazhang", "type": "user" }
true
null
2502.20082
[ { "_id": "67c12b6d25c74ee5b6e2ce8e", "hidden": false, "name": "Ning Shang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-28T12:28:26.117Z", "user": { "_id": "632bc663eafe8eca5e9bfdbc", "avatarUrl": "/avatars/787553c73e9a96adc5219e67acd29c00.svg", "fullname": ...
2025-02-27T13:41:07
LongRoPE2: Near-Lossless LLM Context Window Scaling
LongRoPE2 is a novel approach that extends the effective context window of pre-trained large language models (LLMs) to the target length, while preserving the performance on the original shorter context window. This is achieved by three contributions: (1) a hypothesis that insufficient training in higher RoPE dimension...
29
67c12b6e25c74ee5b6e2ceb5
null
null
2025-02-27T22:15:54.222000
Self-rewarding correction for mathematical reasoning
https://cdn-thumbnails.h…s/2502.19613.png
6
{ "_id": "643e59806db6ba8c5ee123f3", "avatarUrl": "/avatars/4052f2a250107f43b3634c3ee3cc30a1.svg", "followerCount": 16, "fullname": "Wei Xiong", "isHf": false, "isMod": false, "isPro": false, "name": "weqweasdas", "type": "user" }
false
null
2502.19613
[ { "_id": "67c12987505a88e4a185e0d7", "hidden": false, "name": "Wei Xiong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c12987505a88e4a185e0d8", "hidden": false, "name": "Hanning Zhang", "status": "admin_assigned", "statusLastChangedAt": "2...
2025-02-26T23:01:16
Self-rewarding correction for mathematical reasoning
We study self-rewarding reasoning large language models (LLMs), which can simultaneously generate step-by-step reasoning and evaluate the correctness of their outputs during inference, without external feedback. This integrated approach allows a single model to independently guide its reasoning process, offerin...
71
67c12989505a88e4a185e115
null
null
2025-02-27T21:19:58.170000
Adapting Automatic Speech Recognition for Accented Air Traffic Control Communications
https://cdn-thumbnails.h…s/2502.20311.png
2
{ "_id": "60a546bdf9b53404e7806278", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621444268349-noauth.png", "followerCount": 2, "fullname": "Prannaya Gupta", "isHf": false, "isMod": false, "isPro": false, "name": "ThePyProgrammer", "type": "user" }
true
null
2502.20311
[ { "_id": "67c11d0bd1f37121ad63acfb", "hidden": false, "name": "Marcus Yu Zhe Wee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c11d0bd1f37121ad63acfc", "hidden": false, "name": "Justin Juin Hng Wong", "status": "claimed_verified", "statusL...
2025-02-27T17:35:59
Adapting Automatic Speech Recognition for Accented Air Traffic Control Communications
Effective communication in Air Traffic Control (ATC) is critical to maintaining aviation safety, yet the challenges posed by accented English remain largely unaddressed in Automatic Speech Recognition (ASR) systems. Existing models struggle with transcription accuracy for Southeast Asian-accented (SEA-accented) speech,...
5
67c11d0cd1f37121ad63ad24
null
null
2025-02-27T21:02:33.864000
MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge
https://cdn-thumbnails.h…s/2502.19870.png
2
{ "_id": "65745569839aa08899ea5d27", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4X8waDwiphbfKZySrYlFy.jpeg", "followerCount": 2, "fullname": "kailinjiang", "isHf": false, "isMod": false, "isPro": false, "name": "kailinjiang", "type": "user" }
false
null
2502.19870
[ { "_id": "67c11908dfcbe8a49cf19952", "hidden": false, "name": "Yuntao Du", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c11908dfcbe8a49cf19953", "hidden": false, "name": "Kailin Jiang", "status": null, "statusLastChangedAt": null, "user...
2025-02-27T08:21:28
MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge
Knowledge editing techniques have emerged as essential tools for updating the factual knowledge of large language models (LLMs) and multimodal models (LMMs), allowing them to correct outdated or inaccurate information without retraining from scratch. However, existing benchmarks for multimodal knowledge editing primari...
3
67c1190cdfcbe8a49cf19aac
null
null
2025-02-27T14:03:36.365000
Towards Optimal Multi-draft Speculative Decoding
https://cdn-thumbnails.h…s/2502.18779.png
2
{ "_id": "6623ea65b642e29cdf90a1b4", "avatarUrl": "/avatars/e32e90574c1162b2be87ed78604e3e4d.svg", "followerCount": 1, "fullname": "TongZheng", "isHf": false, "isMod": false, "isPro": true, "name": "TongZheng1999", "type": "user" }
true
null
2502.18779
[ { "_id": "67c0b4d0cda310c08781e820", "hidden": false, "name": "Zhengmian Hu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c0b4d0cda310c08781e821", "hidden": false, "name": "Tong Zheng", "status": "claimed_verified", "statusLastChangedAt": ...
2025-02-26T03:22:44
Towards Optimal Multi-draft Speculative Decoding
Large Language Models (LLMs) have become an indispensable part of natural language processing tasks. However, autoregressive sampling has become an efficiency bottleneck. Multi-Draft Speculative Decoding (MDSD) is a recent approach where, when generating each token, a small draft model generates multiple drafts, and th...
4
67c0b4d1cda310c08781e864
null
null
2025-02-27T11:09:15.703000
FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in LLMs Elicits Effective Personalization to Real Users
https://cdn-thumbnails.h…s/2502.19312.png
2
{ "_id": "6511ee845b7e52b0251fdee9", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6511ee845b7e52b0251fdee9/hTIwiIYBGOVnIrxtpri83.png", "followerCount": 4, "fullname": "Anikait Singh", "isHf": false, "isMod": false, "isPro": false, "name": "Asap7772", "type": "user" }
true
null
2502.19312
[ { "_id": "67c01972d63ea6742473aa2a", "hidden": false, "name": "Anikait Singh", "status": "extracted_pending", "statusLastChangedAt": "2025-02-27T07:51:17.284Z", "user": { "_id": "6511ee845b7e52b0251fdee9", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/651...
2025-02-26T17:08:46
FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in LLMs Elicits Effective Personalization to Real Users
Effective personalization of LLMs is critical for a broad range of user-interfacing applications such as virtual assistants and content curation. Inspired by the strong in-context learning capabilities of LLMs, we propose Few-Shot Preference Optimization (FSPO), which reframes reward modeling as a meta-learning problem...
5
67c01975d63ea6742473aa52
null
null
2025-02-27T10:12:03.128000
Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization
https://cdn-thumbnails.h…s/2502.19261.png
3
{ "_id": "6308c49c454dc257521bc7f9", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6308c49c454dc257521bc7f9/UWUS6OPa6OpVu1T0gd-wJ.jpeg", "followerCount": 19, "fullname": "Taishi", "isHf": false, "isMod": false, "isPro": false, "name": "Taishi-N324", "type": "user" }
true
null
2502.19261
[ { "_id": "67c07170af68756abc571ab8", "hidden": false, "name": "Taishi Nakamura", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T22:09:06.783Z", "user": { "_id": "6308c49c454dc257521bc7f9", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63...
2025-02-26T16:06:36
Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization
The Mixture of Experts (MoE) architecture reduces the training and inference cost significantly compared to a dense model of equivalent capacity. Upcycling is an approach that initializes and trains an MoE model using a pre-trained dense model. While upcycling leads to initial performance gains, the training progresses...
6
67c07172af68756abc571b53
null
null
2025-02-27T09:41:49.469000
Rank1: Test-Time Compute for Reranking in Information Retrieval
https://cdn-thumbnails.h…s/2502.18418.png
2
{ "_id": "6362d9712691058b19de1ba4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6362d9712691058b19de1ba4/c9QrA2oE6lcs_46ShaTY1.jpeg", "followerCount": 15, "fullname": "Orion Weller", "isHf": false, "isMod": false, "isPro": true, "name": "orionweller", "type": "user" }
false
null
2502.18418
[ { "_id": "67bf17b23f838c1e33ac7c4d", "hidden": false, "name": "Orion Weller", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bf17b23f838c1e33ac7c4e", "hidden": false, "name": "Kathryn Ricci", "status": null, "statusLastChangedAt": null, "...
2025-02-25T18:14:06
Rank1: Test-Time Compute for Reranking in Information Retrieval
We introduce Rank1, the first reranking model trained to take advantage of test-time compute. Rank1 demonstrates the applicability within retrieval of using a reasoning language model (i.e. OpenAI's o1, Deepseek's R1, etc.) for distillation in order to rapidly improve the performance of a smaller model. We gather and o...
23
67bf17b33f838c1e33ac7c8e
null
null
2025-02-27T07:31:45.499000
DOEI: Dual Optimization of Embedding Information for Attention-Enhanced Class Activation Maps
https://cdn-thumbnails.h…s/2502.15885.png
2
{ "_id": "64ec877bb93654d4ca5c92e9", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ec877bb93654d4ca5c92e9/GvHk_KSdE9Rhnk_o-NaZX.jpeg", "followerCount": 1, "fullname": "Zeyu Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "SteveZeyuZhang", "type": "user" }
true
null
2502.15885
[ { "_id": "67c05aeca2a76d8a27d33c8a", "hidden": false, "name": "Hongjie Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c05aeca2a76d8a27d33c8b", "hidden": false, "name": "Zeyu Zhang", "status": "claimed_verified", "statusLastChangedAt": "...
2025-02-21T19:06:01
DOEI: Dual Optimization of Embedding Information for Attention-Enhanced Class Activation Maps
Weakly supervised semantic segmentation (WSSS) typically utilizes limited semantic annotations to obtain initial Class Activation Maps (CAMs). However, due to the inadequate coupling between class activation responses and semantic information in high-dimensional space, the CAM is prone to object co-occurrence or under-...
2
67c05af3a2a76d8a27d33faf
null
null
2025-02-27T04:18:26.724000
Project Alexandria: Towards Freeing Scientific Knowledge from Copyright Burdens via LLMs
https://cdn-thumbnails.h…s/2502.19413.png
2
{ "_id": "6464a0d41683d3c81f51924a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6464a0d41683d3c81f51924a/s7yYVwfUB4WOhVFJS6A6T.jpeg", "followerCount": 5, "fullname": "Ameya Prabhu", "isHf": false, "isMod": false, "isPro": false, "name": "AmeyaPrabhu", "type": "user" }
true
null
2502.19413
[ { "_id": "67c02d6aa15ac71dcf1c754e", "hidden": false, "name": "Christoph Schuhmann", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c02d6aa15ac71dcf1c754f", "hidden": false, "name": "Gollam Rabby", "status": "claimed_verified", "statusLastCha...
2025-02-26T18:56:52
Project Alexandria: Towards Freeing Scientific Knowledge from Copyright Burdens via LLMs
Paywalls, licenses and copyright rules often restrict the broad dissemination and reuse of scientific knowledge. We take the position that it is both legally and technically feasible to extract the scientific knowledge in scholarly texts. Current methods, like text embeddings, fail to reliably preserve factual content,...
19
67c02d6ba15ac71dcf1c7596
null
null
2025-02-27T04:15:43.126000
GHOST 2.0: generative high-fidelity one shot transfer of heads
https://cdn-thumbnails.h…s/2502.18417.png
2
{ "_id": "67aafccd7517c92ba71142f2", "avatarUrl": "/avatars/ef4b5c6867250b8b7af2c995dd7ad740.svg", "followerCount": 2, "fullname": "Anastasiia Iashchenko", "isHf": false, "isMod": false, "isPro": false, "name": "nastasia-y", "type": "user" }
true
null
2502.18417
[ { "_id": "67c02b2eb14cf3cbc800c292", "hidden": false, "name": "Alexander Groshev", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c02b2eb14cf3cbc800c293", "hidden": false, "name": "Anastasiia Iashchenko", "status": "claimed_verified", "status...
2025-02-25T18:13:55
GHOST 2.0: generative high-fidelity one shot transfer of heads
While the task of face swapping has recently gained attention in the research community, a related problem of head swapping remains largely unexplored. In addition to skin color transfer, head swap poses extra challenges, such as the need to preserve structural information of the whole head during synthesis and inpaint...
61
67c02b31b14cf3cbc800c34b
null
null
2025-02-27T02:43:05.341000
BIG-Bench Extra Hard
https://cdn-thumbnails.h…s/2502.19187.png
2
{ "_id": "5f1158120c833276f61f1a84", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg", "followerCount": 777, "fullname": "Niels Rogge", "isHf": true, "isMod": false, "isPro": false, "name": "nielsr", "type": "user" }
false
null
2502.19187
[ { "_id": "67c01747e8c7d56a8e0cbdc3", "hidden": false, "name": "Mehran Kazemi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67c01747e8c7d56a8e0cbdc4", "hidden": false, "name": "Bahare Fatemi", "status": "extracted_pending", "statusLastChanged...
2025-02-26T14:50:50
BIG-Bench Extra Hard
Large language models (LLMs) are increasingly deployed in everyday applications, demanding robust general reasoning capabilities and a diverse reasoning skillset. However, current LLM reasoning benchmarks predominantly focus on mathematical and coding abilities, leaving a gap in evaluating broader reasoning proficiencies...
6
67c01748e8c7d56a8e0cbe0b
null
null
2025-02-27T02:36:29.037000
Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation
https://cdn-thumbnails.h…s/2502.19414.png
2
{ "_id": "6506832221ac448013f94995", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6506832221ac448013f94995/sVUI1JV4Dxan5l-MqNze4.jpeg", "followerCount": 1, "fullname": "Shashwat Goel", "isHf": false, "isMod": false, "isPro": false, "name": "shash42", "type": "user" }
true
null
2502.19414
[ { "_id": "67c01587925b73feaf61ac41", "hidden": false, "name": "Shiven Sinha", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T12:54:41.540Z", "user": { "_id": "66325cc59292069aed610056", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66325...
2025-02-26T18:58:13
Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation
There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks...
17
67c01588925b73feaf61ad2c
null
null
2025-02-27T00:47:02.948000
CritiQ: Mining Data Quality Criteria from Human Preferences
https://cdn-thumbnails.h…s/2502.19279.png
2
{ "_id": "638ef0b0c67af472d31674a6", "avatarUrl": "/avatars/02df97d15a0f46b47f9162221733b121.svg", "followerCount": 1, "fullname": "Honglin Guo", "isHf": false, "isMod": false, "isPro": false, "name": "KYLN24", "type": "user" }
true
null
2502.19279
[ { "_id": "67bffaca3f838c1e33e074e7", "hidden": false, "name": "Honglin Guo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:13:52.094Z", "user": { "_id": "638ef0b0c67af472d31674a6", "avatarUrl": "/avatars/02df97d15a0f46b47f9162221733b121.svg", "fullname...
2025-02-26T16:33:41
CritiQ: Mining Data Quality Criteria from Human Preferences
Language models depend heavily on high-quality data for optimal performance. Existing approaches rely on manually designed heuristics, the perplexity of existing models, training classifiers, or careful prompt engineering, which require significant expert experience and human annotation effort while introducing biases. W...
7
67bffacc3f838c1e33e075a2
null
null
2025-02-27T00:37:24.965000
PosterSum: A Multimodal Benchmark for Scientific Poster Summarization
https://cdn-thumbnails.h…s/2502.17540.png
2
{ "_id": "657ccbf2869d5bb0e53b482f", "avatarUrl": "/avatars/2eae5a10bdc14814a04d9f255f16de6b.svg", "followerCount": 4, "fullname": "Rohit Saxena", "isHf": false, "isMod": false, "isPro": false, "name": "rohitsaxena", "type": "user" }
true
null
2502.17540
[ { "_id": "67bff9608d761fc6a75e24ad", "hidden": false, "name": "Rohit Saxena", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:13:54.284Z", "user": { "_id": "657ccbf2869d5bb0e53b482f", "avatarUrl": "/avatars/2eae5a10bdc14814a04d9f255f16de6b.svg", "fullnam...
2025-02-24T18:35:39
PosterSum: A Multimodal Benchmark for Scientific Poster Summarization
Generating accurate and concise textual summaries from multimodal documents is challenging, especially when dealing with visually complex content like scientific posters. We introduce PosterSum, a novel benchmark to advance the development of vision-language models that can understand and summarize scientific posters i...
2
67bff96d8d761fc6a75e27a0
null
null
2025-02-27T00:17:58.262000
Language Models' Factuality Depends on the Language of Inquiry
https://cdn-thumbnails.h…s/2502.17955.png
2
{ "_id": "65d2f1e0fe21569868393411", "avatarUrl": "/avatars/1401020e76d958bef3f33e7449773694.svg", "followerCount": 1, "fullname": "Tushar Aggarwal", "isHf": false, "isMod": false, "isPro": false, "name": "AggarwalTushar", "type": "user" }
true
null
2502.17955
[ { "_id": "67bff526ca6e3c22b6e89d71", "hidden": false, "name": "Tushar Aggarwal", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-27T18:59:26.826Z", "user": { "_id": "65d2f1e0fe21569868393411", "avatarUrl": "/avatars/1401020e76d958bef3f33e7449773694.svg", "f...
2025-02-25T08:27:18
Language Models' Factuality Depends on the Language of Inquiry
Multilingual language models (LMs) are expected to recall factual knowledge consistently across languages, yet they often fail to transfer knowledge between languages even when they possess the correct information in one of the languages. For example, we find that an LM may correctly identify Rashed Al Shashai as being...
29
67bff528ca6e3c22b6e89ddd
null
null
2025-02-27T00:08:09.082000
Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance
https://cdn-thumbnails.h…s/2502.18772.png
2
{ "_id": "63b58ed5889aa6707f0bb0f4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg", "followerCount": 15, "fullname": "Jimin Huang", "isHf": false, "isMod": false, "isPro": true, "name": "jiminHuang", "type": "user" }
true
null
2502.18772
[ { "_id": "67bfc297ca6e3c22b6d99c78", "hidden": false, "name": "Xueqing Peng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bfc297ca6e3c22b6d99c79", "hidden": false, "name": "Triantafillos Papadopoulos", "status": null, "statusLastChangedAt"...
2025-02-26T03:04:01
Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance
Despite Greece's pivotal role in the global economy, large language models (LLMs) remain underexplored for Greek financial context due to the linguistic complexity of Greek and the scarcity of domain-specific datasets. Previous efforts in multilingual financial natural language processing (NLP) have exposed considerabl...
30
67bfc298ca6e3c22b6d99caa
null
null
2025-02-26T23:05:13.440000
Kanana: Compute-efficient Bilingual Language Models
https://cdn-thumbnails.h…s/2502.18934.png
2
{ "_id": "60436d159e905013ae8715d7", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1623809612769-60436d159e905013ae8715d7.jpeg", "followerCount": 5, "fullname": "Minho Ryu", "isHf": false, "isMod": false, "isPro": false, "name": "bzantium", "type": "user" }
true
null
2502.18934
[ { "_id": "67bfe1bf4426925c82fe5953", "hidden": false, "name": "Kanana LLM Team", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bfe1bf4426925c82fe5954", "hidden": false, "name": "Yunju Bak", "status": "admin_assigned", "statusLastChangedAt": ...
2025-02-26T08:36:20
Kanana: Compute-efficient Bilingual Language Models
We introduce Kanana, a series of bilingual language models that demonstrate exceptional performance in Korean and competitive performance in English. The computational cost of Kanana is significantly lower than that of state-of-the-art models of similar size. The report details the techniques employed during pre-training...
58
67bfe1c04426925c82fe59a1
null
null
2025-02-26T23:04:47.406000
Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning?
https://cdn-thumbnails.h…s/2502.19361.png
2
{ "_id": "65377c30e48353201e6fdda0", "avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg", "followerCount": 7, "fullname": "Jiaheng Liu", "isHf": false, "isMod": false, "isPro": false, "name": "CheeryLJH", "type": "user" }
false
null
2502.19361
[ { "_id": "67bfe435ca6e3c22b6e29442", "hidden": false, "name": "Yancheng He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bfe435ca6e3c22b6e29443", "hidden": false, "name": "Shilong Li", "status": null, "statusLastChangedAt": null, "user...
2025-02-26T17:59:27
Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning?
Recently, o1-like models have drawn significant attention, where these models produce the long Chain-of-Thought (CoT) reasoning steps to improve the reasoning abilities of existing Large Language Models (LLMs). In this paper, to understand the qualities of these long CoTs and measure the critique abilities of existing ...
24
67bfe438ca6e3c22b6e2948e
null
null
2025-02-26T22:29:40.056000
MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra
https://cdn-thumbnails.h…s/2502.16284.png
2
{ "_id": "64e84ec6d41a68b065bf78a7", "avatarUrl": "/avatars/bae3c5e3210b40af6e4f113e85f3e206.svg", "followerCount": null, "fullname": "Liang Wang", "isHf": false, "isMod": false, "isPro": false, "name": "AzureLeon1", "type": "user" }
true
null
2502.16284
[ { "_id": "67bfdbd0302c06f220658e9d", "hidden": false, "name": "Liang Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:14:42.802Z", "user": { "_id": "64e84ec6d41a68b065bf78a7", "avatarUrl": "/avatars/bae3c5e3210b40af6e4f113e85f3e206.svg", "fullname"...
2025-02-22T16:34:32
MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra
Establishing the relationship between 3D structures and the energy states of molecular systems has proven to be a promising approach for learning 3D molecular representations. However, existing methods are limited to modeling the molecular energy states from classical mechanics. This limitation results in a significant...
5
67bfdbd1302c06f220658ece
null
null
2025-02-26T22:18:06.494000
Towards an AI co-scientist
https://cdn-thumbnails.h…s/2502.18864.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.18864
[ { "_id": "67bfd957c2a9b64ab3f97aa7", "hidden": false, "name": "Juraj Gottweis", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bfd957c2a9b64ab3f97aa8", "hidden": false, "name": "Wei-Hung Weng", "status": null, "statusLastChangedAt": null, ...
2025-02-26T06:17:13
Towards an AI co-scientist
Scientific discovery relies on scientists generating novel hypotheses that undergo rigorous experimental validation. To augment this process, we introduce an AI co-scientist, a multi-agent system built on Gemini 2.0. The AI co-scientist is intended to help uncover new, original knowledge and to formulate demonstrably n...
37
67bfd958c2a9b64ab3f97afa
null
null
2025-02-26T22:16:03.582000
AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement
https://cdn-thumbnails.h…s/2502.16776.png
2
{ "_id": "61b58aa0d65058ce70beb98c", "avatarUrl": "/avatars/aefd9271b891abc6dd2ded1a30eebca4.svg", "followerCount": 1, "fullname": "Zhexin Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "nonstopfor", "type": "user" }
false
null
2502.16776
[ { "_id": "67bfd8d546083445aacb4605", "hidden": false, "name": "Zhexin Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bfd8d546083445aacb4606", "hidden": false, "name": "Leqi Lei", "status": null, "statusLastChangedAt": null, "user"...
2025-02-24T02:11:52
AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement
As AI models are increasingly deployed across diverse real-world scenarios, ensuring their safety remains a critical yet underexplored challenge. While substantial efforts have been made to evaluate and enhance AI safety, the lack of a standardized framework and comprehensive toolkit poses significant obstacles to syst...
5
67bfd8d646083445aacb464f
null
null
2025-02-26T22:10:20.646000
Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator
https://cdn-thumbnails.h…s/2502.19204.png
4
{ "_id": "64196320ed725fef64419c2a", "avatarUrl": "/avatars/96feb22fb5e8931d6c9e0ea06148266f.svg", "followerCount": 3, "fullname": "Chi Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "DrChiZhang", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/64196320ed725fef64419c2a/k13rSuJPlDkMtzwdHXCXm.png" ]
2502.19204
[ { "_id": "67bfd735ca6e3c22b6de43c7", "hidden": false, "name": "Xiankang He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bfd735ca6e3c22b6de43c8", "hidden": false, "name": "Dongyan Guo", "status": null, "statusLastChangedAt": null, "use...
2025-02-26T15:10:05
Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator
Monocular depth estimation (MDE) aims to predict scene depth from a single RGB image and plays a crucial role in 3D scene understanding. Recent advances in zero-shot MDE leverage normalized depth representations and distillation-based learning to improve generalization across diverse scenes. However, current depth norm...
11
67bfd736ca6e3c22b6de441e
null
null
2025-02-26T22:07:49.438000
TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding
https://cdn-thumbnails.h…s/2502.19400.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.19400
[ { "_id": "67bfd6f15db054ee3c5a766b", "hidden": false, "name": "Max Ku", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:14:55.238Z", "user": { "_id": "631d760344503b7227837242", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631d7603445...
2025-02-26T18:50:09
TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding
Understanding domain-specific theorems often requires more than just text-based reasoning; effective communication through structured visual explanations is crucial for deeper comprehension. While large language models (LLMs) demonstrate strong performance in text-based theorem reasoning, their ability to generate cohe...
41
67bfd6f25db054ee3c5a7699
https://tiger-ai-lab.github.io/TheoremExplainAgent/
https://github.com/TIGER-AI-Lab/TheoremExplainAgent
2025-02-26T22:05:16.150000
Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems
https://cdn-thumbnails.h…s/2502.19328.png
2
{ "_id": "625a5446f1063e7085d5178a", "avatarUrl": "/avatars/5e78186f13f74b14e01583e06ff6c4dc.svg", "followerCount": 1, "fullname": "Hao Peng", "isHf": false, "isMod": false, "isPro": false, "name": "Wesleythu", "type": "user" }
true
null
2502.19328
[ { "_id": "67bfcb774d22a9379b29334c", "hidden": false, "name": "Hao Peng", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:18:21.224Z", "user": { "_id": "625a5446f1063e7085d5178a", "avatarUrl": "/avatars/5e78186f13f74b14e01583e06ff6c4dc.svg", "fullname": ...
2025-02-26T17:19:12
Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems
Reward models (RMs) are crucial for the training and inference-time scaling up of large language models (LLMs). However, existing reward models primarily focus on human preferences, neglecting verifiable correctness signals which have shown strong potential in training LLMs. In this paper, we propose agentic reward mod...
20
67bfcb784d22a9379b29338f
null
null
2025-02-26T22:02:50.690000
VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model
https://cdn-thumbnails.h…s/2502.18906.png
2
{ "_id": "654dbac9938fbf1e696be8aa", "avatarUrl": "/avatars/b3c4035c48169c1bfb04a439fce3499f.svg", "followerCount": 2, "fullname": "Chaoyun Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "vyokky", "type": "user" }
true
null
2502.18906
[ { "_id": "67bfd5d2381f8fcb67e5ad36", "hidden": false, "name": "Jiani Zheng", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:18:18.436Z", "user": { "_id": "64531f631a57e1179c203e6b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64531f...
2025-02-26T07:52:02
VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model
Training Vision-Language Models (VLMs) for Graphical User Interfaces (GUI) agents via Reinforcement Learning (RL) faces critical challenges: environment-based RL requires costly interactions, while environment-free methods struggle with distribution shift and reward generalization. We propose an environment-free RL fra...
11
67bfd5d7381f8fcb67e5ae3d
null
null
2025-02-26T18:40:15.965000
Scaling LLM Pre-training with Vocabulary Curriculum
https://cdn-thumbnails.h…s/2502.17910.png
2
{ "_id": "64d98ef7a4839890b25eb78b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64d98ef7a4839890b25eb78b/215-CSVLl81z6CAq0ECWU.jpeg", "followerCount": 14, "fullname": "Fangyuan Yu", "isHf": false, "isMod": false, "isPro": true, "name": "Ksgk-fy", "type": "user" }
true
null
2502.17910
[ { "_id": "67be7f96b4ca41e2807a4fb0", "hidden": false, "name": "Fangyuan Yu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T08:26:07.778Z", "user": { "_id": "64d98ef7a4839890b25eb78b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64d98e...
2025-02-25T07:18:29
Scaling LLM Pre-training with Vocabulary Curriculum
Modern language models rely on static vocabularies, fixed before pretraining, in contrast to the adaptive vocabulary acquisition observed in human language learning. To bridge this gap, we introduce vocabulary curriculum learning, an approach that improves pretraining efficiency with log-linear scaling gains relative t...
1
67be7f97b4ca41e2807a4fed
null
null
2025-02-26T16:56:34.818000
LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation
https://cdn-thumbnails.h…s/2502.18302.png
2
{ "_id": "64edcb9b84cc47a8b50bfab7", "avatarUrl": "/avatars/1b4defb79eef3753a540efa76c16462a.svg", "followerCount": 1, "fullname": "Li", "isHf": false, "isMod": false, "isPro": false, "name": "Kinpz", "type": "user" }
true
null
2502.18302
[ { "_id": "67befd09afb202a5b7518572", "hidden": false, "name": "Pengzhi Li", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T15:37:26.179Z", "user": { "_id": "64edcb9b84cc47a8b50bfab7", "avatarUrl": "/avatars/1b4defb79eef3753a540efa76c16462a.svg", "fullname"...
2025-02-25T15:42:34
LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation
In this paper, we introduce LDGen, a novel method for integrating large language models (LLMs) into existing text-to-image diffusion models while minimizing computational demands. Traditional text encoders, such as CLIP and T5, exhibit limitations in multilingual processing, hindering image generation across diverse la...
4
67befd0cafb202a5b751865e
null
null
2025-02-26T14:46:51.721000
MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs
https://cdn-thumbnails.h…s/2502.17422.png
2
{ "_id": "635b99d47a1656011516bff9", "avatarUrl": "/avatars/7243c4171ff127ba90631f105881d9d7.svg", "followerCount": 3, "fullname": "jiarui zhang", "isHf": false, "isMod": false, "isPro": false, "name": "jrzhang", "type": "user" }
true
null
2502.17422
[ { "_id": "67bf6ea633d6740f711cc995", "hidden": false, "name": "Jiarui Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:15:01.691Z", "user": { "_id": "635b99d47a1656011516bff9", "avatarUrl": "/avatars/7243c4171ff127ba90631f105881d9d7.svg", "fullnam...
2025-02-24T18:54:40
MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs
Multimodal Large Language Models (MLLMs) have experienced rapid progress in visual recognition tasks in recent years. Given their potential integration into many critical applications, it is important to understand the limitations of their visual perception. In this work, we study whether MLLMs can perceive small visua...
7
67bf6eaa33d6740f711ccac2
null
null
2025-02-26T12:51:05.089000
Curie: Toward Rigorous and Automated Scientific Experimentation with AI Agents
https://cdn-thumbnails.h…s/2502.16069.png
5
{ "_id": "648fc22019e7511674b31f12", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648fc22019e7511674b31f12/9kRR00GMFYcuj6zR0BVfx.jpeg", "followerCount": 1, "fullname": "Amber", "isHf": false, "isMod": false, "isPro": false, "name": "AmberLJC", "type": "user" }
false
null
2502.16069
[ { "_id": "67bf51f8653c05485b571e71", "hidden": false, "name": "Patrick Tser Jern Kon", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:18:25.807Z", "user": { "_id": "64b7111e17681d64b19cf95e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uplo...
2025-02-22T03:58:19
Curie: Toward Rigorous and Automated Scientific Experimentation with AI Agents
Scientific experimentation, a cornerstone of human progress, demands rigor in reliability, methodical control, and interpretability to yield meaningful results. Despite the growing capabilities of large language models (LLMs) in automating different aspects of the scientific process, automating rigorous experimentation...
17
67bf51fa653c05485b571f00
null
null
2025-02-26T12:34:59.916000
An Overview of Large Language Models for Statisticians
https://cdn-thumbnails.h…s/2502.17814.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.17814
[ { "_id": "67bf50aa9a1df81dba235650", "hidden": false, "name": "Wenlong Ji", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bf50aa9a1df81dba235651", "hidden": false, "name": "Weizhe Yuan", "status": null, "statusLastChangedAt": null, "user...
2025-02-25T03:40:36
An Overview of Large Language Models for Statisticians
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI), exhibiting remarkable capabilities across diverse tasks such as text generation, reasoning, and decision-making. While their success has primarily been driven by advances in computational power and deep learning architect...
4
67bf50ab9a1df81dba2356ba
null
null
2025-02-26T10:53:44.153000
WiCkeD: A Simple Method to Make Multiple Choice Benchmarks More Challenging
https://cdn-thumbnails.h…s/2502.18316.png
2
{ "_id": "6586f687ce38d143c4092ed7", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6586f687ce38d143c4092ed7/uPYZgk-lGGEfxa0kASX0y.jpeg", "followerCount": null, "fullname": "Ahmed Mohamed Elhady", "isHf": false, "isMod": false, "isPro": false, "name": "ahmedselhady", "type": "u...
true
null
2502.18316
[ { "_id": "67bf0ccbb2f5c23eb0a69a7d", "hidden": false, "name": "Ahmed Elhady", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T15:37:14.872Z", "user": { "_id": "6586f687ce38d143c4092ed7", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6586f...
2025-02-25T16:09:38
WiCkeD: A Simple Method to Make Multiple Choice Benchmarks More Challenging
We introduce WiCkeD, a simple method to increase the complexity of existing multiple-choice benchmarks by randomly replacing a choice with "None of the above", a method often used in educational tests. We show that WiCkeD can be automatically applied to any existing benchmark, making it more challenging. We apply WiCke...
2
67bf0ccdb2f5c23eb0a69b25
null
null
2025-02-26T10:43:07.864000
Prompt-to-Leaderboard
https://cdn-thumbnails.h…s/2502.14855.png
3
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.14855
[ { "_id": "67b8e77477a3ed169f302415", "hidden": false, "name": "Evan Frick", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b8e77477a3ed169f302416", "hidden": false, "name": "Connor Chen", "status": null, "statusLastChangedAt": null, "user...
2025-02-20T18:58:07
Prompt-to-Leaderboard
Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance. To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboar...
7
67b8e77577a3ed169f302470
null
null
2025-02-26T09:40:23.169000
Finding the Sweet Spot: Preference Data Construction for Scaling Preference Optimization
https://cdn-thumbnails.h…s/2502.16825.png
2
{ "_id": "6239888e7fef05b7bdd5fcff", "avatarUrl": "/avatars/54fcc756b8c0936b6bb410c6e0e02d75.svg", "followerCount": 1, "fullname": "Hai Ye", "isHf": false, "isMod": false, "isPro": false, "name": "oceanpty", "type": "user" }
false
null
2502.16825
[ { "_id": "67bf243823f222a2cc2858d0", "hidden": false, "name": "Yao Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67bf243823f222a2cc2858d1", "hidden": false, "name": "Hai Ye", "status": null, "statusLastChangedAt": null, "user": null...
2025-02-24T04:22:57
Finding the Sweet Spot: Preference Data Construction for Scaling Preference Optimization
Iterative data generation and model retraining are widely used to align large language models (LLMs). It typically involves a policy model to generate on-policy responses and a reward model to guide training data selection. Direct Preference Optimization (DPO) further enhances this process by constructing preference pa...
6
67bf243923f222a2cc285919
null
null
2025-02-26T07:28:05.618000
LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models
https://cdn-thumbnails.h…s/2502.15612.png
2
{ "_id": "62d19a4b1e36881a57f31c6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d19a4b1e36881a57f31c6a/C-tAc0uXvpIggh0nWB2Dy.jpeg", "followerCount": 1, "fullname": "Hugo Pitorro", "isHf": false, "isMod": false, "isPro": false, "name": "twigs", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/62d19a4b1e36881a57f31c6a/GN78Zj956f5CoUuGof_RC.png" ]
2502.15612
[ { "_id": "67bc8aeb70194f240328e1cf", "hidden": false, "name": "Hugo Pitorro", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T15:46:13.861Z", "user": { "_id": "62d19a4b1e36881a57f31c6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d19...
2025-02-21T17:33:59
LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models
State space models (SSMs), such as Mamba, have emerged as an efficient alternative to transformers for long-context sequence modeling. However, despite their growing adoption, SSMs lack the interpretability tools that have been crucial for understanding and improving attention-based architectures. While recent efforts ...
4
67bc8aed70194f240328e2cc
null
null
2025-02-26T02:37:36.287000
Introducing Visual Perception Token into Multimodal Large Language Model
https://cdn-thumbnails.h…s/2502.17425.png
2
{ "_id": "635364b3c41f548fe39db945", "avatarUrl": "/avatars/ad1916bbfabca0b6651c8eabacc5eba8.svg", "followerCount": 2, "fullname": "Runpeng Yu", "isHf": false, "isMod": false, "isPro": false, "name": "rp-yu", "type": "user" }
true
null
2502.17425
[ { "_id": "67bddd63c7d8b835b82ced9a", "hidden": false, "name": "Runpeng Yu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-26T09:14:07.580Z", "user": { "_id": "635364b3c41f548fe39db945", "avatarUrl": "/avatars/ad1916bbfabca0b6651c8eabacc5eba8.svg", "fullname": ...
2025-02-24T18:56:12
Introducing Visual Perception Token into Multimodal Large Language Model
To utilize visual information, Multimodal Large Language Model (MLLM) relies on the perception process of its vision encoder. The completeness and accuracy of visual perception significantly influence the precision of spatial reasoning, fine-grained understanding, and other tasks. However, MLLM still lacks the autonomo...
14
67bddd64c7d8b835b82cee5a
null
null
2025-02-26T01:04:23.776000
The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?
https://cdn-thumbnails.h…s/2502.17535.png
2
{ "_id": "63024676056ec3a2a8714b24", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg", "followerCount": 5, "fullname": "Xiang Liu", "isHf": false, "isMod": false, "isPro": false, "name": "Dominic789654", "type": "user" }
true
null
2502.17535
[ { "_id": "67beaec94a1d9d7e368a7840", "hidden": false, "name": "Zhenheng Tang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-26T09:15:34.971Z", "user": { "_id": "66a4a319a1711696948b045c", "avatarUrl": "/avatars/1d92d57a949332cb8227697b9a0c2f39.svg", "fullname...
2025-02-24T15:39:35
The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?
Motivated by reducing the computational and storage costs of LLMs, model compression and KV cache compression have attracted much attention from researchers. However, current methods predominantly emphasize maintaining the performance of compressed LLMs, as measured by perplexity or simple accuracy on tasks of common s...
8
67beaeca4a1d9d7e368a7875
null
null