Improve dataset card with paper link and tags

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +73 -3
---
license: mit
task_categories:
- question-answering
tags:
- hypothesis-generation
- scientific-literature
- truthfulness
- hallucination
---

# Toward Reliable Scientific Hypothesis Generation: Evaluating Truthfulness and Hallucination in Large Language Models

[![Preprint](https://img.shields.io/badge/preprint-available-brightgreen)](https://arxiv.org/abs/2505.14599)
[![Dataset](https://img.shields.io/badge/dataset-available-yellow)](https://huggingface.co/TruthHypo)

[Paper](https://huggingface.co/papers/2505.14599)

## News
- Our paper has been accepted to IJCAI 2025!

## Table of Contents

- [Introduction](#introduction)
- [Usage](#usage)
- [Structure](#structure)
- [Citation](#citation)

## Introduction
TruthHypo is a benchmark for assessing the ability of LLMs to generate truthful scientific hypotheses. This repo also contains the source code of KnowHD, a knowledge-based hallucination detector that evaluates how well hypotheses are grounded in existing knowledge. Our [paper](https://arxiv.org/abs/2505.14599) shows that LLMs struggle to generate truthful hypotheses. By analyzing hallucinations in reasoning steps, we demonstrate that the groundedness scores provided by KnowHD serve as an effective metric for filtering truthful hypotheses from the diverse outputs of LLMs.

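KnowHD's actual scoring is implemented by the LLM verifiers under `src/verifier`. Purely as an illustration of the groundedness idea (fraction of reasoning steps supported by a knowledge base), here is a toy sketch; the function name and the naive substring check are our own assumptions, not the repo's API:

```python
def groundedness_score(reasoning_steps, knowledge_snippets):
    """Toy groundedness metric: the fraction of reasoning steps that can be
    matched to at least one snippet in the supporting knowledge base.
    Illustrative only; KnowHD uses LLM verifiers, not substring matching."""
    if not reasoning_steps:
        return 0.0
    supported = sum(
        any(step.lower() in snippet.lower() for snippet in knowledge_snippets)
        for step in reasoning_steps
    )
    return supported / len(reasoning_steps)

steps = ["aspirin inhibits COX-1", "COX-1 produces thromboxane"]
kb = ["It is known that aspirin inhibits COX-1 irreversibly."]
print(groundedness_score(steps, kb))  # 0.5: one of two steps is supported
```

Hypotheses whose reasoning chains score low on groundedness are the ones KnowHD flags as likely hallucinations and filters out.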
## Usage
The TruthHypo dataset is directly accessible via [HuggingFace](https://huggingface.co/TruthHypo):
```python
from datasets import load_dataset

data = load_dataset("TruthHypo/edges_test")
```

The processed knowledge sources for knowledge-enhanced hypothesis generation can be found at:
- Literature
  - [PubMed Articles](https://huggingface.co/datasets/MedRAG/pubmed)
- Knowledge Graph
  - [PubTator Edges](https://huggingface.co/datasets/TruthHypo/edges_train)
  - [PubTator Nodes](https://huggingface.co/datasets/TruthHypo/nodes)

## Structure

Our repository contains the following contents:
- data: the data of the TruthHypo benchmark
  - edges_test.tsv: the test data used for LLM evaluation
- src: the source code of the agents and verifiers used in our experiments
  - agent: the LLM agents used to generate biomedical hypotheses
    - base.py: the base agent
    - cot.py: the agent using parametric knowledge only
    - kg.py: the agent using both parametric knowledge and information from knowledge graphs
    - rag.py: the agent using both parametric knowledge and information from scientific literature
    - rag_kg.py: the agent using parametric knowledge and information from both knowledge graphs and scientific literature
  - verifier: the LLM verifiers used to measure the groundedness of generated hypotheses
    - rag_verifier.py: the verifier with scientific literature as the supporting knowledge base
    - kg_verifier.py: the verifier with knowledge graphs as the supporting knowledge base
    - rag_kg_verifier.py: the verifier with both scientific literature and knowledge graphs as the supporting knowledge base

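Since `data/edges_test.tsv` is tab-separated, it can be inspected locally with pandas. A minimal sketch (the toy column names `head`, `relation`, `tail` stand in for the file's actual schema, which we do not assume here):

```python
import io

import pandas as pd

# Toy stand-in for data/edges_test.tsv; the real file ships with the repo.
# Column names below are hypothetical, not the benchmark's actual schema.
toy_tsv = "head\trelation\ttail\naspirin\ttreats\theadache\n"

# For the real file, replace the StringIO buffer with the path
# "data/edges_test.tsv" relative to the repo root.
df = pd.read_csv(io.StringIO(toy_tsv), sep="\t")
print(df.shape)  # (1, 3)
```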

## Citation
```
@article{xiong2025toward,
  title={Toward Reliable Scientific Hypothesis Generation: Evaluating Truthfulness and Hallucination in Large Language Models},
  author={Guangzhi Xiong and Eric Xie and Corey Williams and Myles Kim and Amir Hassan Shariatmadari and Sikun Guo and Stefan Bekiranov and Aidong Zhang},
  journal={arXiv preprint arXiv:2505.14599},
  year={2025}
}
```