EAPO (Exploration-Aware Policy Optimization) is a reinforcement learning framework for training agentic large language models to perform adaptive exploration during test-time interaction. Unlike prior methods that apply exploration uniformly across all states, EAPO enables agents to explore selectively, only when environmental uncertainty is high, improving long-horizon reasoning and decision-making in interactive environments such as GUI control, web navigation, and embodied tasks.

Paper: Learning to Explore: Scaling Agentic Reasoning via Exploration-Aware Policy Optimization (https://arxiv.org/abs/2605.08978)

Code: https://github.com/HansenHua/EAPO-ICML26
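The paper's exact algorithm is not reproduced here; as a rough illustration of the core idea (an exploration bonus gated on a per-step uncertainty signal, rather than applied uniformly), here is a minimal sketch. The function name, the use of policy entropy as the uncertainty proxy, and the `threshold`/`bonus_coef` parameters are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def uncertainty_gated_advantage(advantages, entropies, threshold=1.0, bonus_coef=0.1):
    """Shape policy-gradient advantages with a selective exploration bonus.

    The bonus is added only at steps whose policy entropy (used here as a
    stand-in for environmental uncertainty) exceeds `threshold`; low-uncertainty
    steps keep their original advantage, mimicking selective exploration.
    All names and constants are illustrative, not from the EAPO paper.
    """
    advantages = np.asarray(advantages, dtype=float)
    entropies = np.asarray(entropies, dtype=float)
    gate = (entropies > threshold).astype(float)  # 1.0 only where uncertainty is high
    return advantages + bonus_coef * gate * entropies

# Example: only the two high-entropy steps receive a bonus.
adv = np.array([0.5, -0.2, 1.0])
ent = np.array([0.3, 1.5, 2.0])
shaped = uncertainty_gated_advantage(adv, ent)
# shaped -> [0.5, -0.05, 1.2]
```

In a full training loop, the shaped advantages would replace the raw ones in a standard policy-gradient or PPO-style objective; a uniform entropy bonus corresponds to `gate` being all ones.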

Model details: 8B parameters, F32 tensors, Safetensors format.