---

**A super-easy task for humans** that **all SOTA LLMs fail**: retrieving the correct answer from context. Failing models include GPT-5, Grok 4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, etc.

- **AAAI 2026 Workshop Oral**: LaMAS (LLM-based Multi-Agent Systems: Towards Responsible, Reliable, and Scalable Agentic Systems), Jan 2026, Singapore
- **ICML 2025 Long-Context Foundation Models Workshop**: accepted (https://arxiv.org/abs/2506.08184)
- Update: this dataset is integrated into Moonshot AI (Kimi)'s **internal benchmarking framework** for assessing **tracking capacity and context interference in LLMs/agents**.

> **Update 1: merged into Moonshot/Kimi AI's internal eval tools and under review by an xAI (Grok) eval team**

> **Update 2: AAAI 2026 workshop oral, Jan 2026, Singapore, for the updated paper version**

## Key–value update paradigm (what the model sees): 1 key, N updates
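The paradigm named in the heading above can be sketched as a tiny prompt generator: a single key receives N successive value updates in the context, and the model is then asked for the key's most recent value. This is an illustrative reconstruction under stated assumptions, not the dataset's actual generator — the key name, value range, and question wording are all placeholders.

```python
import random

def make_kv_update_prompt(n_updates, seed=0):
    """Build a single-key update stream: one key, n_updates successive
    values. Only the final value is the correct answer; earlier values
    act as interference. (Illustrative sketch, not the dataset's code.)"""
    rng = random.Random(seed)
    key = "key1"  # assumed placeholder key name
    values = [rng.randint(0, 9999) for _ in range(n_updates)]
    lines = [f"{key} = {v}" for v in values]
    question = f"What is the current value of {key}?"
    prompt = "\n".join(lines) + "\n" + question
    return prompt, values[-1]  # ground-truth answer = most recent update

prompt, answer = make_kv_update_prompt(5)
```

A human solves this by reading the last line before the question; the benchmark's claim is that models instead get pulled toward earlier, superseded values as N grows.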