# LDR Community Benchmarks (Leaderboards)
Aggregated leaderboards for Local Deep Research (LDR) community benchmark runs against SimpleQA, BrowseComp, and xbench-DeepSearch.
Submit results, read raw YAMLs, open PRs:
github.com/LearningCircuit/ldr-benchmarks
This Hugging Face dataset hosts only the aggregated CSV leaderboards.
It is regenerated automatically on every merge to main in the GitHub
repo above. Each CSV row represents one benchmark run (one strategy from
one YAML submission).
## Why the split?
- GitHub is the source of truth for raw YAML submissions, PR review, CI validation, and leaderboard regeneration.
- Hugging Face renders the aggregated CSVs in its Dataset Viewer and makes the leaderboards discoverable inside the ML community.
Raw per-run YAMLs – including configuration details, notes, and (where
permitted by the benchmark's sharing policy) per-question examples – live
in the GitHub repo under `results/`.
## Benchmarks covered
- SimpleQA – OpenAI, MIT-licensed. Full per-question examples are allowed in raw YAMLs on GitHub.
- BrowseComp – OpenAI, encrypted dataset with a canary string. Only aggregate metrics are accepted (no per-question examples in raw YAMLs).
- xbench-DeepSearch – xbench team, encrypted dataset. Only aggregate metrics are accepted (no per-question examples in raw YAMLs).
See the GitHub repo's README for the full sharing policy.
## Leaderboard columns
Each CSV row contains:
dataset, model, model_provider, quantization, strategy, search_engine, accuracy_pct, accuracy_raw, correct, total, iterations, questions_per_iteration, avg_time_per_question, total_tokens_used, temperature, context_window, max_tokens, hardware_gpu, hardware_ram, hardware_cpu, evaluator_model, evaluator_provider, ldr_version, date_tested, contributor, notes, source_file
The `source_file` column points at the raw YAML in the GitHub repo.
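Each CSV loads cleanly into ordinary tooling. As a minimal sketch in plain Python, using a handful of rows copied from the current leaderboard (a subset of the columns above, for brevity), you can filter and rank runs like this:

```python
# A few leaderboard rows, reduced to a subset of columns for brevity.
rows = [
    {"dataset": "SimpleQA", "model": "qwen3.5:9b",
     "strategy": "langgraph_agent", "accuracy_pct": 91.2, "ldr_version": "1.5.6"},
    {"dataset": "SimpleQA", "model": "qwen3:4b",
     "strategy": "source_based", "accuracy_pct": 74.0, "ldr_version": "1.3.50"},
    {"dataset": "xbench_deepsearch", "model": "qwen3.5:9b",
     "strategy": "langgraph_agent", "accuracy_pct": 59.0, "ldr_version": "1.5.6"},
]

# Rank SimpleQA runs by accuracy, highest first.
simpleqa = sorted(
    (r for r in rows if r["dataset"] == "SimpleQA"),
    key=lambda r: r["accuracy_pct"],
    reverse=True,
)
for r in simpleqa:
    print(f"{r['model']:<12} {r['strategy']:<18} {r['accuracy_pct']:.1f}%")
```

The same filter-and-sort pattern applies to the full CSVs once loaded (for example with `csv.DictReader` or pandas).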
## Configs
Use the dropdown at the top of the Dataset Viewer to switch between:
- `all` – every run, all benchmarks combined (default)
- `simpleqa` – SimpleQA runs only
- `browsecomp` – BrowseComp runs only
- `xbench-deepsearch` – xbench-DeepSearch runs only
## Considerations for using the data
This is a community-submitted leaderboard, not a controlled experiment. Keep these caveats in mind when interpreting results:
- Self-reported. Runs are submitted by contributors. CI validates schema and flags obvious issues, but the runs themselves are not independently re-executed.
- Evaluator bias. Many submissions use an LLM grader (default is Claude 3.7 Sonnet via OpenRouter). LLM evaluators have non-trivial error rates; a manual audit of ~200 SimpleQA questions commonly surfaces one or two grading mistakes.
- Small sample sizes. Many runs use 50–200 questions. Confidence intervals at that scale are wide (roughly ±5–7 percentage points at n=200). Small differences between rows are usually not significant.
- Timing is environment-dependent. `avg_time_per_question` depends on hardware, network latency, search engine responsiveness, and model server load.
- Contamination risk. SimpleQA is publicly distributed and may appear in some models' training data. BrowseComp and xbench mitigate this with encryption, but older model generations may still be contaminated.
- Strategy semantics drift. LDR strategies evolve between versions. Prefer comparing runs tagged with the same `ldr_version`.
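The sample-size caveat can be made concrete with a normal-approximation confidence interval (the interval method here is an illustration of the caveat, not something the leaderboard itself computes):

```python
import math

def accuracy_ci_halfwidth(correct: int, total: int, z: float = 1.96) -> float:
    """95% CI half-width for an observed accuracy, via the normal approximation."""
    p = correct / total
    return z * math.sqrt(p * (1 - p) / total)

# A 74%-accurate run over 200 questions is roughly 74% +/- 6 points.
hw = accuracy_ci_halfwidth(148, 200)
print(f"+/- {100 * hw:.1f} percentage points")  # prints "+/- 6.1 percentage points"
```

For accuracies between 50% and 90% at n=200 this gives half-widths of roughly ±4–7 points, so two rows a few points apart are statistically indistinguishable.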
## Attribution
- SimpleQA © OpenAI – MIT License
- BrowseComp © OpenAI – see the BrowseComp paper and openai/simple-evals
- xbench © xbench team – see xbench-ai/xbench-evals
Plain-text distribution of BrowseComp and xbench questions or answers is prohibited.
## Contributors
Thanks to everyone who has contributed benchmark runs:
- LearningCircuit – 2 submissions
- kwhyte7 – 1 submission
## Citation
@misc{ldr_community_benchmarks,
title = {LDR Community Benchmarks},
author = {The Local Deep Research community},
year = {2026},
publisher = {Hugging Face / GitHub},
howpublished = {\url{https://huggingface.co/datasets/local-deep-research/ldr-benchmarks}}
}
## License
This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). If you use the data in research, publications, or derivative analyses, please cite it using the BibTeX entry above.
Individual benchmark datasets (SimpleQA, BrowseComp, xbench) retain their own upstream licenses – see the Attribution section.