Introducing Falcon-H1-Arabic: Pushing the Boundaries of Arabic Language AI with Hybrid Architecture Jan 5 • 40
Announcing NeurIPS 2025 E2LM Competition: Early Training Evaluation of Language Models Jul 4, 2025 • 11
Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance May 21, 2025 • 39
Falcon-Edge: A series of powerful, universal, fine-tunable 1.58bit language models. May 15, 2025 • 36
SigLino: Vision Foundation Models (SigLIP2 + DINOv3) Collection Vision encoders distilled from DINOv3 and SigLIP2 (MoE & Dense). CVPR 2026. • 6 items • Updated 8 days ago • 17
Evaluating Arabic Large Language Models: A Survey of Benchmarks, Methods, and Gaps Paper • 2510.13430 • Published Oct 15, 2025 • 1
Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance Paper • 2507.22448 • Published Jul 30, 2025 • 71
3LM: Bridging Arabic, STEM, and Code through Benchmarking Paper • 2507.15850 • Published Jul 21, 2025 • 6
NeurIPS 2025 E2LM Competition: Early Training Evaluation of Language Models Paper • 2506.07731 • Published Jun 9, 2025 • 2
Are Arabic Benchmarks Reliable? QIMMA's Quality-First Approach to LLM Evaluation Paper • 2604.03395 • Published 15 days ago • 2