| id | query | target_arxiv_ids | source |
|---|---|---|---|
| librarian_0 | Which paper was first to propose the use of spatiotemporal transformer for BEV generation? | ["2203.17270"] | librarian_test |
| librarian_1 | Which paper was first to describe training autoregressive models on path-star graphs? | ["2403.06963"] | librarian_test |
| librarian_2 | Is there any recent work that used an implicit model to produce a non-parametric distribution over SO(3) that can model objects with large symmetry groups? | ["2106.05965"] | librarian_test |
| librarian_3 | Which paper first introduces the MQAR (multi-query associative recall) task? | ["2312.04927"] | librarian_test |
| librarian_4 | Which paper first introduces a self-speculative decoding method for llms that allows early exit at initial layers? | ["2309.08168"] | librarian_test |
| librarian_5 | Which paper introduces a method where chunking is applied after the transformer model and before pooling? | ["2409.04701"] | librarian_test |
| librarian_6 | Find the paper about uncertainty estimation for LLMs by researchers from skoltech and airi universities from the end of 2023. The authors also introduce open source Python framework | ["2311.07383"] | librarian_test |
| librarian_7 | Provide a complete list of papers from 2024 that explore whether multilingual language models use English as an internal reference language. | ["2402.10588"] | librarian_test |
| librarian_8 | Provide a complete list of papers from 2024 about benchmarking role-playing language models by using other language models to simulate users | ["2409.06820"] | librarian_test |
| librarian_9 | Find the paper that indroduces self-reflexive tokens to call rag adaptively | ["2310.11511"] | librarian_test |
| librarian_10 | Which paper presents a method for accelerating the output of large language models using additional decoding heads for parallel prediction of multiple tokens? | ["2401.10774"] | librarian_test |
| librarian_11 | Which paper introduces the method for adaptive rag using uncertainty scores? | ["2406.19215", "2501.12835", "2305.06983", "2403.10081"] | librarian_test |
| librarian_12 | Find the paper for adaptive rag using prober on internal representaion | ["2410.13339", "2405.18727"] | librarian_test |
| librarian_13 | Which paper proposed a decoding strategy for llms by dynamically selecting layers and contrasting their logits? | ["2309.03883"] | librarian_test |
| librarian_14 | How can LLM agents be evaluated and benchmarked for financial tasks? Note that I am referring to agents. | ["2409.14913"] | librarian_test |
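The rows above are stored one JSON object per line (JSONL). A minimal loading sketch, with field names taken from the table header (the example row mirrors the first preview row; no loader is shipped with this repo):

```python
import json

def load_jsonl(path):
    """Read a JSONL file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Row shape, mirroring the preview table above:
row = {
    "id": "librarian_0",
    "query": "Which paper was first to propose the use of "
             "spatiotemporal transformer for BEV generation?",
    "target_arxiv_ids": ["2203.17270"],
    "source": "librarian_test",
}
```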
# Librarian Eval Benchmark

Benchmark suite for evaluating scientific literature search agents. Stratified samples from three sources for fast iteration (~95 queries total).
## Datasets

| File | Source | Count | What it measures |
|---|---|---|---|
| `litsearch_sample.jsonl` | LitSearch (EMNLP 2024) | 50 | Scientific paper retrieval by research question |
| `infodeepseek_sample.jsonl` | InfoDeepSeek (2025) | 30 | Agentic multi-hop information seeking |
| `librarian_test_sample.jsonl` | Internal Holosophus eval | 15 | Paper retrieval -> ArXiv ID match |
## Sampling

### LitSearch (50 of 597)
Stratified by `query_set` × `specificity` × `quality` (16 strata, proportional allocation, seed=42).

### InfoDeepSeek (30 of 245)
All 21 `science_and_technology` queries, plus 9 hard multi-hop queries from other domains. Overall, 23/30 are multi-hop and 22/30 are hard.

### Librarian Test (15 of 15)
All internal queries included.
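The proportional stratified draw described above can be sketched as follows (field names and the rounding rule are illustrative; the actual sampling script is not part of this repo and may reconcile rounding drift differently):

```python
import random
from collections import defaultdict

def stratified_sample(rows, strata_keys, n_total, seed=42):
    """Proportional stratified sampling: each stratum contributes a
    share of n_total proportional to its size in the full set."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in rows:
        strata[tuple(r[k] for k in strata_keys)].append(r)
    picked = []
    for members in strata.values():
        # Proportional allocation, rounded to the nearest integer.
        k = round(n_total * len(members) / len(rows))
        picked.extend(rng.sample(members, min(k, len(members))))
    return picked
```

With 16 equal strata and a fixed seed, the draw is deterministic and the rounded allocations sum exactly to the requested total; for skewed strata the rounded sum can drift by a few items.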
## Metrics

- **LitSearch:** Recall@K (target `corpus_id` appears in the top-K retrieved results)
- **InfoDeepSeek:** Accuracy (LLM-as-judge)
- **Librarian Test:** Accuracy (ArXiv ID substring match)
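The Librarian Test metric can be sketched as a check of whether any target ArXiv ID occurs verbatim in the agent's answer string (function names and the answer format are illustrative, not the repo's actual scorer):

```python
def arxiv_id_hit(answer: str, target_ids: list[str]) -> bool:
    """True if any target ArXiv ID appears as a substring of the answer."""
    return any(tid in answer for tid in target_ids)

def accuracy(answers: list[str], targets: list[list[str]]) -> float:
    """Fraction of queries whose answer contains at least one target ID."""
    hits = sum(arxiv_id_hit(a, t) for a, t in zip(answers, targets))
    return hits / len(answers)
```

Note that substring matching counts a hit regardless of how the ID is embedded (e.g. inside `arXiv:2310.11511` or a URL), which is the intended leniency of this metric.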