---
language:
- zh
license: apache-2.0
size_categories:
- 1K<n<10K
---

### Evaluation

| | Model | Score |
|---|---|---|
| **>1B** | gte-Qwen2-1.5B-instruct | 77.35 |
| | gte-Qwen2-7B-instruct | **86.55** |
| | e5-mistral-7b-instruct | 76.40 |
| | Qwen3-Embedding-8B | 84.61 |
| | | |
| Trained | Out-of-Domain | 87.23 |
| | In-Domain | 91.83 |

The trained models (based on `bge-base-zh-v1.5`) are fine-tuned on queries produced by the data generation strategies described in the paper. The in-domain model can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1l2pvELMQPKjhAasNGaY7d14jMK0iCRhj).

### Citation

```bibtex
@inproceedings{xu-etal-2025-dense,
    title = "Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings",
    author = "Xu, Liyan and
      Su, Zhenlin and
      Yu, Mo and
      Li, Jiangnan and
      Meng, Fandong and
      Zhou, Jie",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics"
}
```
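For readers reproducing the scores above: retrieval benchmarks of this kind are typically scored with nDCG@k over each query's ranked passages. Below is a minimal, self-contained sketch of nDCG@k; the function name and the toy relevance lists are illustrative and not part of this dataset's official evaluation code.

```python
import math

def ndcg_at_k(ranked_rels, all_rels, k=10):
    """nDCG@k: discounted cumulative gain of the system's ranking,
    normalized by the gain of an ideal (relevance-sorted) ranking.

    ranked_rels: relevance labels in the order the retriever ranked them.
    all_rels:    relevance labels of all judged passages for the query.
    """
    def dcg(rels):
        # Standard log2 position discount; rank i (0-based) gets 1/log2(i+2).
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

    ideal = dcg(sorted(all_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0

# Toy example: a perfect ranking scores 1.0; placing the relevant
# passage at rank 2 instead of rank 1 scores lower.
print(ndcg_at_k([1, 0, 0], [1, 0, 0]))  # 1.0
print(ndcg_at_k([0, 1, 0], [1, 0, 0]))
```

Averaging this per-query score over the query set gives a single benchmark number comparable to the table entries (reported there on a 0-100 scale).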