---
pretty_name: MemFactory
license: cc-by-sa-4.0
language:
- en
task_categories:
- question-answering
tags:
- long-context
- evaluation
- question-answering
- multi-hop
- hotpotqa
- synthetic
configs:
- config_name: eval_50
  data_files:
  - split: train
    path: eval_50.json
- config_name: eval_100
  data_files:
  - split: train
    path: eval_100.json
- config_name: converted_hotpotqa_2000
  data_files:
  - split: train
    path: converted_hotpotqa_2000.json
- config_name: eval_fwe_16384
  data_files:
  - split: train
    path: eval_fwe_16384.json
---

# MemFactory

## Overview

This repository provides a lightweight, derivative release of the data used in **MemFactory**. To evaluate the effectiveness of MemFactory, we reuse and adapt data from the upstream dataset:

- [`BytedTsinghua-SIA/hotpotqa`](https://huggingface.co/datasets/BytedTsinghua-SIA/hotpotqa)

This repository includes four JSON files:

- `eval_50.json`
- `eval_100.json`
- `eval_fwe_16384.json`
- `converted_hotpotqa_2000.json`

### Data Sources

- The three evaluation files are derived directly from the upstream HotpotQA-based release.
- The training file `converted_hotpotqa_2000.json` is a **locally adapted version** of the upstream training data, modified for the MemFactory experiments.

For full dataset context, please refer to the upstream release:

- https://huggingface.co/datasets/BytedTsinghua-SIA/hotpotqa

---

## Limitations

- This is a **derivative redistribution**, not the original dataset.
- The data may inherit annotation noise, biases, and structural limitations from the upstream sources.
- `eval_fwe_16384.json` follows a **different schema** from the QA-style files.
- For full documentation and broader coverage, users should consult the upstream dataset.

---

## License

This repository is released under **CC BY-SA 4.0**.

**Reason:**

- The data is derived from the upstream HotpotQA-based dataset, which uses the same license.
- `converted_hotpotqa_2000.json` is an adapted derivative and must preserve the share-alike terms.
If you use or redistribute this repository:

- Please retain attribution to the upstream source.
- Preserve the same license.

---

## Loading with 🤗 datasets

```python
from datasets import load_dataset

eval_50 = load_dataset("nworats/MemFactory", "eval_50", split="train")
eval_100 = load_dataset("nworats/MemFactory", "eval_100", split="train")
train_converted = load_dataset("nworats/MemFactory", "converted_hotpotqa_2000", split="train")
eval_fwe_16384 = load_dataset("nworats/MemFactory", "eval_fwe_16384", split="train")
```

## Citation

If you use this dataset, please cite:

### MemFactory (this work)

*(Placeholder – replace with your paper when available)*

```bibtex
@article{memfactory2025,
  title={MemFactory: [Your Subtitle Here]},
  author={Your Name et al.},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

### Upstream MemAgent work

```bibtex
@article{yu2025memagent,
  title={MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent},
  author={Yu, Hongli and others},
  journal={arXiv preprint arXiv:2507.02259},
  year={2025}
}
```
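## Loading the raw JSON directly

Besides 🤗 `datasets`, the files can be read with the standard `json` module after cloning the repository or downloading individual files. The sketch below is a minimal, hedged loader: it assumes each file stores a top-level JSON list of records, which may not hold for `eval_fwe_16384.json` given its different schema, so verify the structure before relying on specific fields. The `load_records` helper name is illustrative, not part of any release.

```python
import json


def load_records(path):
    """Read one of the dataset's JSON files as a list of records.

    A sketch, assuming a top-level JSON list; eval_fwe_16384.json uses a
    different schema than the QA-style files, so check before use.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Fail loudly if the file does not hold the expected list shape.
    if not isinstance(data, list):
        raise ValueError(
            f"{path}: expected a top-level JSON list, got {type(data).__name__}"
        )
    return data
```

For example, `load_records("eval_50.json")` would return a list you can iterate over; a file whose top level is an object instead raises a `ValueError` rather than failing later on an unexpected field access.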