diff --git "a/09FQT4oBgHgl3EQfEDVH/content/tmp_files/load_file.txt" "b/09FQT4oBgHgl3EQfEDVH/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/09FQT4oBgHgl3EQfEDVH/content/tmp_files/load_file.txt" @@ -0,0 +1,1306 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf,len=1305 +page_content='SoftTreeMax: Exponential Variance Reduction in Policy Gradient via Tree Search Gal Dalal * Assaf Hallak * Gugan Thoppe Shie Mannor Gal Chechik Abstract Despite the popularity of policy gradient meth- ods, they are known to suffer from large vari- ance and high sample complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' To mitigate this, we introduce SoftTreeMax – a generaliza- tion of softmax that takes planning into account.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' In SoftTreeMax, we extend the traditional logits with the multi-step discounted cumulative reward, topped with the logits of future states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' We con- sider two variants of SoftTreeMax, one for cumu- lative reward and one for exponentiated reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' For both, we analyze the gradient variance and reveal for the first time the role of a tree expan- sion policy in mitigating this variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' We prove that the resulting variance decays exponentially with the planning horizon as a function of the expansion policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' Specifically, we show that the closer the resulting state transitions are to uni- form, the faster the decay.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' In a practical imple- mentation, we utilize a parallelized GPU-based simulator for fast and efficient tree search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' Our differentiable tree-based policy leverages all gra- dients at the tree leaves in each environment step instead of the traditional single-sample-based gra- dient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' We then show in simulation how the vari- ance of the gradient is reduced by three orders of magnitude, leading to better sample complex- ity compared to the standard policy gradient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' On Atari, SoftTreeMax demonstrates up to 5x better performance in a faster run time compared to dis- tributed PPO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09FQT4oBgHgl3EQfEDVH/content/2301.13236v1.pdf'} +page_content=' Lastly, we demonstrate that high reward correlates with lower variance.' 
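+
+To make this construction concrete, here is a minimal sketch of the cumulative-reward variant. It assumes a deterministic simulator interface env_step(state, action) -> (next_state, reward) and a learned per-state logit function logits_fn(state); these names, the exhaustive uniform averaging over tree paths, and the temperature beta are illustrative assumptions, not the paper's implementation.
+
+import numpy as np
+
+def softtreemax_policy(env_step, logits_fn, state, actions, depth,
+                       gamma=0.99, beta=1.0):
+    """Sketch of a cumulative-reward SoftTreeMax-style policy.
+
+    Each first action is scored by the discounted reward accumulated
+    along the depth-`depth` tree below it, plus the discounted logit
+    of each leaf state, then softmaxed over first actions.
+    """
+    assert depth >= 1
+
+    def path_values(s, d, acc, disc):
+        # Value of every root-to-leaf path below state `s`: accumulated
+        # discounted reward plus the discounted logit of the leaf.
+        if d == 0:
+            return [acc + disc * logits_fn(s)]
+        vals = []
+        for a in actions:
+            s_next, r = env_step(s, a)
+            vals.extend(path_values(s_next, d - 1, acc + disc * r, disc * gamma))
+        return vals
+
+    # One logit per first action: the mean over its subtree's paths,
+    # i.e., an expectation under a uniform tree-expansion policy.
+    scores = []
+    for a in actions:
+        s_next, r = env_step(state, a)
+        scores.append(np.mean(path_values(s_next, depth - 1, r, gamma)))
+    scores = beta * np.array(scores)
+    scores -= scores.max()  # numerically stable softmax
+    probs = np.exp(scores)
+    return probs / probs.sum()
+
+This recursive expansion is exponential in the depth; the practical implementation described above instead expands the tree with a parallelized GPU-based simulator and differentiates through all leaves at once.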
+
+1. Introduction
+Policy Gradient (PG; Sutton et al. 1999) methods for Reinforcement Learning (RL) are often the first choice for environments that allow numerous interactions at a fast pace (Schulman et al., 2017). Their success is attributed to several
+
+*Equal contribution. Correspondence to: Gal Dalal