Structure Flow-Guided Network for Real Depth Super-Resolution

Jiayi Yuan*, Haobo Jiang*, Xiang Li, Jianjun Qian, Jun Li†, Jian Yang†
PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education,
Jiangsu Key Lab of Image and Video Understanding for Social Security,
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
{jiayiyuan, jiang.hao.bo, xiang.li.implus, csjqian, junli, csjyang}@njust.edu.cn

Figure 1: In this paper, we propose a novel structure flow-guided method for real-world DSR. Our method obtains better depth edge recovery (g-h) than (e) and (f), which use the SOTA method FDSR (He et al. 2021). (a-b) Synthetic LR depth maps; (c) Real LR depth map with structural distortion; (d) Real LR depth map with edge noise (e.g., holes); (i-j) Ground-truth HR depth maps; (k-l) RGB image guidance.

Abstract

Real depth super-resolution (DSR), unlike the synthetic setting, is a challenging task due to the structural distortion and the edge noise caused by natural degradation in real-world low-resolution (LR) depth maps.
These defects result in significant structure inconsistency between the depth map and the RGB guidance, which potentially confuses the RGB-structure guidance and thereby degrades the DSR quality. In this paper, we propose a novel structure flow-guided DSR framework, where a cross-modality flow map is learned to guide the RGB-structure information transfer for precise depth upsampling. Specifically, our framework consists of a cross-modality flow-guided upsampling network (CFUNet) and a flow-enhanced pyramid edge attention network (PEANet). CFUNet contains a trilateral self-attention module combining both geometric and semantic correlations for reliable cross-modality flow learning. The learned flow maps are then combined with a grid-sampling mechanism for coarse high-resolution (HR) depth prediction. PEANet aims at integrating the learned flow map as edge attention into a pyramid network to hierarchically learn an edge-focused guidance feature for depth edge refinement. Extensive experiments on real and synthetic DSR datasets verify that our approach achieves excellent performance compared to state-of-the-art methods.

*These authors contributed equally. †Corresponding authors.
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

With the fast development of cheap RGB-D sensors, depth maps have played an increasingly important role in a variety of computer vision applications, such as object recognition (Blum et al. 2012; Eitel et al. 2015), 3D reconstruction (Hou, Dai, and Nießner 2019; Newcombe et al.
2011), and virtual reality (Meuleman et al. 2020). However, the defects (e.g., low resolution and structural distortion) of cheap RGB-D sensors (e.g., Microsoft Kinect and Huawei P30 Pro) still hinder their wider application in the real world. Moreover, although popular DSR methods (Song et al. 2020; Kim, Ponce, and Ham 2021; Sun et al. 2021) have achieved excellent DSR accuracy on synthetic LR depth maps, the significant domain gap between real and synthetic data largely degrades their DSR precision on real data.

arXiv:2301.13416v1 [cs.CV] 31 Jan 2023

This domain gap is mainly caused by the different generation mechanisms of the LR depth maps. A synthetic LR depth map is usually generated via artificial degradation (e.g., a down-sampling operation), while a real one results from natural degradation (e.g., noise, blur, and distortion). Unlike the synthetic setting, real-data DSR poses two challenges, as follows.
The first is the severe structural distortion (see Fig. 1 (c)), especially on low-reflection glass surfaces or infrared-absorbing surfaces. The second is the edge noise and even holes (see Fig. 1 (d)), caused by the physical limitations or the low processing power of the depth sensors. Both challenges present a significant difference between real and synthetic data, which inherently degrades the generalization precision of synthetic DSR methods on real data.

In this paper, we develop a novel structure flow-guided DSR framework to handle the above challenges. For the structural distortion, we propose a cross-modality flow-guided upsampling network (CFUNet) that learns a structure flow between the depth map and the RGB image to guide their structure alignment for the recovery of the distorted depth structure. It includes two key components: a trilateral self-attention module and a cross-modality cross-attention module. In detail, the former leverages geometric and semantic correlations (i.e., coordinate distance, pixel difference, and feature difference) to aggregate relevant depth features into each depth feature, supplementing the missing depth-structure information. The latter takes the enhanced depth feature and the RGB feature as input for sufficient message passing and flow-map generation. Finally, we combine the flow map with the grid-sampling mechanism for the coarse HR depth prediction. For the edge noise, we present a flow-enhanced pyramid edge attention network (PEANet) that integrates the learned structure flow map as edge attention into a pyramid network to learn the edge-focused guidance feature for the edge refinement of the coarse HR depth map predicted above.
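As a concrete illustration of the trilateral idea above, the following pure-Python sketch combines the three named cues (coordinate distance, pixel difference, feature difference) into a single affinity and uses it for attention-style feature aggregation. The Gaussian kernels, bandwidths, and function names are our own illustrative assumptions, not the paper's learned formulation:

```python
import math

def trilateral_weight(p, q, depth, feat, sigma_s=1.0, sigma_d=1.0, sigma_f=1.0):
    """Illustrative trilateral affinity between pixels p and q.

    Combines coordinate distance, depth (pixel) difference, and feature
    difference with Gaussian kernels, mirroring the three cues named in
    the text. The Gaussian form and bandwidths are assumptions.
    """
    (py, px), (qy, qx) = p, q
    d_coord = (py - qy) ** 2 + (px - qx) ** 2          # geometric cue
    d_pixel = (depth[p] - depth[q]) ** 2               # pixel-difference cue
    d_feat = sum((a - b) ** 2 for a, b in zip(feat[p], feat[q]))  # semantic cue
    return math.exp(-d_coord / (2 * sigma_s ** 2)
                    - d_pixel / (2 * sigma_d ** 2)
                    - d_feat / (2 * sigma_f ** 2))

def aggregate(p, pixels, depth, feat):
    """Attention-style aggregation of neighbouring features into pixel p."""
    w = [trilateral_weight(p, q, depth, feat) for q in pixels]
    z = sum(w) or 1.0
    dim = len(feat[p])
    return [sum(wi * feat[q][k] for wi, q in zip(w, pixels)) / z
            for k in range(dim)]
```

Nearby pixels with similar depth and features receive weights near 1, so their features dominate the aggregation; distant or dissimilar pixels are suppressed, which is how the module can fill in missing depth-structure information from reliable neighbours.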
Considering the structure clue lying in the learned flow map (i.e., edge regions tend to exhibit significant flow-value fluctuations), we combine the flow map with the RGB feature to form a flow-enhanced RGB feature that highlights the RGB-structure regions. Then, we feed the flow-enhanced RGB feature into an iterative pyramid network to learn its edge-focused guidance feature. The low-level guidance features effectively filter the RGB-texture noise (guided by the flow map), while the high-level guidance features exploit rich context information for more precise edge-feature capture. Finally, we pass the learned guidance feature and the depth feature into a decoder network to predict the edge-refined HR depth map. Extensive experiments on challenging real-world datasets verify the effectiveness of our proposed method (see examples in Fig. 1 (g-h)).
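The flow-as-edge-attention step can be sketched as follows: pixels where the flow values fluctuate (edge regions) are gated up, while flat regions are suppressed. The gradient-magnitude-plus-tanh gating and all names here are illustrative assumptions standing in for the learned attention:

```python
import math

def flow_edge_attention(flow, rgb_feat):
    """Modulate RGB features by local flow fluctuation (illustrative).

    flow:     per-pixel (dy, dx) offsets as a 2D grid (list of rows).
    rgb_feat: per-pixel scalar feature on the same grid.
    Edge regions, where flow values fluctuate, get gates near 1;
    flat regions get gates near 0.
    """
    h, w = len(flow), len(flow[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = min(y + 1, h - 1), min(x + 1, w - 1)
            # local fluctuation: L1 difference to bottom / right neighbours
            vert = abs(flow[ny][x][0] - flow[y][x][0]) + abs(flow[ny][x][1] - flow[y][x][1])
            horz = abs(flow[y][nx][0] - flow[y][x][0]) + abs(flow[y][nx][1] - flow[y][x][1])
            attn = math.tanh(vert + horz)       # in [0, 1) for non-negative input
            out[y][x] = rgb_feat[y][x] * attn   # highlight structure regions
    return out
```

A constant flow field therefore zeroes the feature everywhere, while a flow discontinuity lets the underlying RGB feature pass through, which is the behaviour the text attributes to the flow-enhanced RGB feature.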
In summary, our contributions are as follows:
- We propose an effective cross-modality flow-guided upsampling network (CFUNet), where a structure flow map is learned to guide the structure alignment between the depth map and the RGB image for the recovery of the distorted depth edge.
- We present a flow-enhanced pyramid edge attention network (PEANet) that integrates the flow map as edge attention into a pyramid network to hierarchically learn the edge-focused guidance feature for edge refinement.
- Extensive experiments on real and synthetic datasets verify the effectiveness of the proposed framework, and we achieve state-of-the-art restoration performance on multiple DSR dataset benchmarks.

Related Work

Synthetic Depth Super-Resolution

Synthetic depth super-resolution (DSR) architectures can be divided into pre-upsampling methods and progressive upsampling methods (Wang et al. 2020). The pre-upsampling DSR methods first upsample the input depth with interpolation algorithms (e.g.,
bicubic) from LR to HR, and then feed it into depth-recovery network layers. Li et al. (2016) introduce the first pre-upsampling network architecture. As this method handles depth at arbitrary scaling factors, more and more similar approaches have been presented to further facilitate the DSR task (Li et al. 2019; Lutio et al. 2019; Zhu et al. 2018; Chen and Jung 2018; Hao et al. 2019; Su et al. 2019). However, upsampling in one step is not suitable for large scaling factors, simply because it usually loses much detailed information. To tackle this issue, a progressive upsampling structure is designed in MSG-Net (Tak-Wai, Loy, and Tang 2016), which gradually upsamples the LR depth map with transposed convolution layers at different scale levels. Since then, various progressive upsampling-based methods have been proposed that greatly promote the development of this domain (Tak-Wai, Loy, and Tang 2016; Guo et al. 2019; He et al.
2021; Zuo et al. 2019). Recently, joint-task learning frameworks have achieved impressive performance, such as DSR & completion (Yan et al. 2022), depth estimation & enhancement (Wang et al. 2021), and DSR & depth estimation (Tang et al. 2021; Sun et al. 2021). Inspired by these joint-task methods, we combine the alignment task with the super-resolution task to distill cross-modality knowledge for robust depth upsampling.

Real-world Depth Super-Resolution

In recent years, super-resolution for real-world images has been under the spotlight; it involves upsampling, denoising, and hole-filling. Early traditional depth enhancement methods (Yang et al. 2014; Liu et al. 2016, 2018) are based on complex and time-consuming optimization. For fast CNN-based DSR, AIR (Song et al. 2020) simulates the real LR depth map by combining interval degradation and bicubic degradation, and proposes a channel-attention-based network for real DSR. PAC (Su et al. 2019) and DKN (Kim, Ponce, and Ham 2021) utilize adaptive kernels calculated from neighborhood pixels in the RGB image for robust DSR.
FDSR (He et al. 2021) proposes an octave convolution for frequency-domain separation, which achieves outstanding performance on real datasets. Although these methods handle the large modality gap between the guidance image and the depth map, the structure misalignment between the depth map and the RGB image still leads them to suffer from serious errors around the edge regions. Different from these general paradigms, we introduce a novel structure flow-guided framework, which exploits the cross-modality flow map to guide the RGB-structure information transfer for real DSR.

Figure 2: The pipeline of our structure flow-guided DSR framework. Given the LR depth map and the RGB image, the left block (blue, cross-modality flow-guided upsampling) first generates the flow maps through a trilateral self-attention module and a cross-attention module, and predicts the coarse depth map Dcoarse with flow-based grid-sampling. The right block (yellow, flow-enhanced pyramid edge attention) then integrates the RGB/depth features and the flow map (as edge attention) to learn the edge-focused guidance feature for edge refinement (Drefine).

Approach

In the following, we introduce our structure flow-guided DSR framework for robust real-world DSR. As shown in Fig. 2, our framework consists of two modules: a cross-modality flow-guided upsampling network (CFUNet) and a flow-enhanced pyramid edge attention network (PEANet). Given an LR depth map D_LR ∈ R^(H0×W0) and its corresponding HR RGB image I ∈ R^(H×W×3) (H/H0 = W/W0 = s, where s is the scale factor), CFUNet first learns the cross-modality flow to guide the structure alignment between the depth map and the RGB image for coarse HR depth prediction.
Then, PEANet exploits the structure flow as edge attention to learn the edge-focused guidance feature for edge refinement.

Cross-modality Flow-guided Upsampling Network

As demonstrated in Fig. 1 (c), the structural distortion of the real LR depth map leads to significant structure misalignment between the RGB image and the depth map, which potentially damages the structure guidance of RGB images for depth edge recovery. To handle this, our solution is to learn an effective cross-modality flow map between the depth map and the RGB image to identify their structural relationship. Then, guided by the learned flow map, we align the structure of the depth map to the RGB image to recover the distorted depth edges. Next, we describe our network in terms of the feature extraction, the trilateral attention-based flow generation, and the flow-guided depth upsampling.

Feature extraction. To achieve a consistent input size, we first upsample the LR depth map $D_{LR}$ to a full-resolution map $D_{Bic} \in \mathbb{R}^{H \times W}$ with bicubic interpolation. Then, we feed the upsampled depth map and the RGB image into an encoder for feature extraction: $\{F_l \in \mathbb{R}^{H \times W \times D}\}_{l=1}^{L}$ and $\{G_l \in \mathbb{R}^{H \times W \times D}\}_{l=1}^{L}$, where the subscript $l$ denotes the feature output of the $l$-th layer of the encoder.

Trilateral attention-based flow generation. The key to generating a reliable cross-modality flow map is to model a robust relationship between the RGB image and the depth map. Nevertheless, the serious structural distortion caused by natural degradation potentially increases the modality gap between the depth map and the RGB image. Thereby, it is difficult to directly exploit a general attention mechanism to model such a relationship. To mitigate this, we aim to enhance the depth feature through a proposed trilateral self-attention block, so that the distorted depth-structure information can be largely complemented for relationship modeling.
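The pre-upsampling step above can be sketched in PyTorch. This is a minimal sketch, not the paper's implementation: the tensor sizes are hypothetical (chosen for a scale factor of s = 4), and the encoder itself is not reproduced.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes for scale factor s = 4: LR depth 48x64, HR RGB 192x256.
d_lr = torch.rand(1, 1, 48, 64)    # LR depth map D_LR
rgb = torch.rand(1, 3, 192, 256)   # HR RGB guidance I

# Bicubic pre-upsampling so both encoder inputs share the RGB resolution.
d_bic = F.interpolate(d_lr, size=rgb.shape[-2:], mode="bicubic",
                      align_corners=False)

print(d_bic.shape)  # torch.Size([1, 1, 192, 256])
```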
Figure 3: The architecture of the trilateral self-attention module and the cross-attention module.

As shown in Fig. 3, our trilateral self-attention block fuses the geometric-level correlation and the semantic-level correlation to jointly guide the depth feature enhancement. Note that we only enhance the depth feature $F_L$ of the last ($L$-th) layer:

$$\bar{F}^{(i)}_L = \sum_j \alpha_{i,j} \cdot (\beta^{low}_{i,j} + \beta^{high}_{i,j}) \cdot \gamma_{i,j} \cdot F^{(j)}_L + F^{(i)}_L, \quad (1)$$

where $F^{(j)}_L$ ($1 \le j \le H \times W$) denotes the $j$-th depth-pixel feature and $\bar{F}^{(i)}_L$ denotes the $i$-th enhanced depth feature ($1 \le i \le H \times W$). The geometric-level correlation contains a spatial kernel $\alpha \in \mathbb{R}^{(H \times W) \times (H \times W)}$ and a low-level color kernel $\beta^{low} \in \mathbb{R}^{(H \times W) \times (H \times W)}$, while the semantic-level correlation contains a high-level color semantic kernel $\beta^{high} \in \mathbb{R}^{(H \times W) \times (H \times W)}$ and a depth semantic kernel $\gamma \in \mathbb{R}^{(H \times W) \times (H \times W)}$. In detail, we formulate the spatial kernel as a coordinate distance-aware Gaussian kernel:

$$\alpha_{i,j} = \mathrm{Gaussian}(\| \mathrm{Coor}(i) - \mathrm{Coor}(j) \|_2, \sigma_s), \quad (2)$$

where $\mathrm{Gaussian}(x, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{x^2}{2\sigma^2}\right)$ is the Gaussian function, $\mathrm{Coor}(i) \in \mathbb{R}^2$ denotes the row-column coordinates of pixel $i$ in the depth map, and $\sigma_s$ is the kernel variance. The low-level and high-level color kernels are defined by Gaussian kernels on the low-level and semantic-level RGB feature similarity, whose sum is:

$$\beta^{low}_{i,j} + \beta^{high}_{i,j} = \sum_{l=0}^{L} \mathrm{Gaussian}(\| G^{(i)}_l - G^{(j)}_l \|_2, \sigma_c). \quad (3)$$

The depth semantic kernel is designed based on the depth feature similarity in the $L$-th layer:

$$\gamma_{i,j} = \mathrm{Gaussian}(\| F^{(i)}_L - F^{(j)}_L \|_2, \sigma_d). \quad (4)$$

Guided by the geometric and semantic kernels above, the correlated depth information can be effectively aggregated into each depth feature through Eq. 1 for depth feature completion and enhancement.

Then, we feed the enhanced depth feature $\bar{F}_L$ and the RGB feature $G_L$ into the cross-attention block for efficient cross-modality feature interaction:

$$\tilde{F}^{(i)}_L = \bar{F}^{(i)}_L + \mathrm{MLP}\Big(\sum_j \mathrm{softmax}_j\big(\varphi_q(\bar{F}^{(i)}_L)^\top \varphi_k(G^{(j)}_L)\big)\, \varphi_v(G^{(j)}_L)\Big),$$
$$\tilde{G}^{(i)}_L = G^{(i)}_L + \mathrm{MLP}\Big(\sum_j \mathrm{softmax}_j\big(\varphi_q(G^{(i)}_L)^\top \varphi_k(\bar{F}^{(j)}_L)\big)\, \varphi_v(\bar{F}^{(j)}_L)\Big), \quad (5)$$

where $\varphi_q$, $\varphi_k$ and $\varphi_v$ are the projection functions of the query, the key and the value in our nonlocal-style cross-attention module. With the query-key similarity, the value can be retrieved for feature enhancement.
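The trilateral kernel construction of Eqs. (1)-(4) can be sketched on a toy feature map as follows. This is a hedged sketch, not the paper's code: for brevity the color kernel uses a single feature layer instead of the layer sum of Eq. (3), and all kernel variances and sizes are hypothetical.

```python
import torch

def gaussian(x, sigma):
    # Gaussian(x, sigma) = exp(-x^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))
    return torch.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * (2 * torch.pi) ** 0.5)

H, W, D = 4, 4, 8                # toy map; real maps are H x W with D channels
N = H * W
F_L = torch.rand(N, D)           # last-layer depth features F_L (one row per pixel)
G_L = torch.rand(N, D)           # last-layer RGB features G_L

# Spatial kernel alpha: Gaussian of the row-column coordinate distance (Eq. 2).
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
coor = torch.stack([ys, xs], dim=-1).reshape(N, 2).float()
alpha = gaussian(torch.cdist(coor, coor), 2.0)

# Color kernel: one feature layer stands in for the layer sum of Eq. 3.
beta = gaussian(torch.cdist(G_L, G_L), 1.0)

# Depth semantic kernel gamma (Eq. 4).
gamma = gaussian(torch.cdist(F_L, F_L), 1.0)

# Trilateral aggregation with a residual connection (Eq. 1).
F_bar = (alpha * beta * gamma) @ F_L + F_L
```

All three kernels are symmetric pairwise similarity matrices; their element-wise product acts as a joint attention weight over the depth-pixel features.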
Then, we concatenate the enhanced depth feature $\tilde{F}_L$ and the RGB feature $\tilde{G}_L$ and pass them into a multi-layer convolutional network to obtain their correlated features at each layer $\{G_l\}_{l=L+1}^{L'}$. Finally, following (Dosovitskiy et al. 2015), based on the previously extracted features $\{G_l\}_{l=1}^{L}$ and the correlated features $\{G_l\}_{l=L+1}^{L'}$, we exploit a decoder network to generate the multi-layer flow maps $\{\Delta_l\}_{l=1}^{L'}$, where the flow generation at layer $l$ can be formulated as:

$$G^{flow}_{l+1}, \Delta_{l+1} = \mathrm{deconv}(\mathrm{Cat}[G^{flow}_l, \Delta_l, G_{L'-l-1}]), \quad (6)$$

where $G^{flow}_l$ denotes the intermediate flow feature and $\mathrm{deconv}$ consists of a deconvolution operation and a convolutional block ($G^{flow}_1, \Delta_1 = \mathrm{deconv}(G_{L'})$).

Flow-guided depth upsampling module. With the learned flow map $\Delta_{L'}$ of the last layer, we combine it with the grid-sampling strategy for HR depth map prediction. In detail, each value of the HR depth map is the bilinear interpolation of the neighborhood pixels in the LR depth map $D_{LR}$, where the neighborhoods are defined according to the learned flow field:

$$D_{coarse} = \text{Grid-Sample}(D_{LR}, \Delta_{L'}), \quad (7)$$

where Grid-Sample denotes the upsampling operation that computes the output using pixel values from neighborhood pixels and pixel locations from the grid (Li et al. 2020).

Flow-enhanced Pyramid Edge Attention Network

To further improve our DSR precision in the presence of edge noise, we propose a flow-enhanced pyramid network, where the learned structure flow serves as edge attention to hierarchically mine the edge-focused guidance feature from the RGB image for the edge refinement of $D_{coarse}$. Specifically, we first feed the previously predicted HR depth map $D_{coarse}$ and the RGB image into an encoder network to extract their features: $\{F^{coarse}_t\}_{t=1}^{T+1}$ and $\{G_t\}_{t=1}^{T}$, where the subscript $t$ indicates the extracted feature at the $t$-th layer. Then, we propose the flow-enhanced pyramid attention module and the edge decoder module as follows for refined HR depth prediction.
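The flow-guided grid sampling of Eq. (7) maps naturally onto PyTorch's `grid_sample`. Below is a sketch under the assumption that the flow map is already expressed in `grid_sample`'s normalized [-1, 1] coordinates (the paper does not state its normalization); with zero flow, the operation reduces to plain bilinear upsampling of the LR depth map.

```python
import torch
import torch.nn.functional as F

d_lr = torch.rand(1, 1, 48, 64)        # LR depth map D_LR
H_hr, W_hr = 192, 256                  # target HR resolution
flow = torch.zeros(1, 2, H_hr, W_hr)   # learned flow map (zero here for the sketch)

# Base sampling grid over the HR output in grid_sample's [-1, 1] convention.
gy, gx = torch.meshgrid(torch.linspace(-1, 1, H_hr),
                        torch.linspace(-1, 1, W_hr), indexing="ij")
grid = torch.stack([gx, gy], dim=-1).unsqueeze(0)  # (1, H_hr, W_hr, 2), (x, y)

# Offset the grid by the flow, then bilinearly sample D_LR (Eq. 7).
grid = grid + flow.permute(0, 2, 3, 1)
d_coarse = F.grid_sample(d_lr, grid, mode="bilinear", align_corners=True)
```

With `align_corners=True`, grid value -1 maps exactly to the first pixel and +1 to the last, so the zero-flow output agrees with the LR input at the corners.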
Flow-enhanced pyramid attention module. In this module, we aim to combine the RGB feature and the flow map to learn the edge-focused guidance feature $\{G^{guide}_t\}$ at each layer. In detail, for the $t$-th layer, with the RGB feature $G_t$ and its corresponding flow map $\Delta_{L'-t}$, we first fuse the flow information into the RGB feature to form the flow-enhanced RGB feature:

$$G^{flow}_t = \Delta_{L'-t} \cdot G_t + G_t, \quad (8)$$

Figure 4: The architecture of the pyramid attention module. The subscript $t$ denotes the feature output of the $t$-th layer of the encoder ($1 \le t \le T$). '$\times K$' indicates the number of iterations of the guidance feature updating.

where $\Delta_{L'-t} \cdot G_t$ is expected to exploit the significant flow-value fluctuations at the edge regions of $\Delta_{L'-t}$ to better highlight the structure region of the RGB feature. To further smooth the texture feature in $G^{flow}_t$, we concatenate it with the texture-less depth feature $F^{coarse}_t$ to obtain the texture-degraded RGB feature $\tilde{G}^{flow}_t$. Then, we feed $\tilde{G}^{flow}_t$ into a pyramid network to extract its edge-focused guidance features $\{\tilde{G}^{flow}_{t,k}\}_{k=1}^{K}$ at different scales. The low-level guidance feature filters the texture noise (guided by the flow map), while the high-level one exploits the rich context information for edge-feature capture. After that, we unify the scales of the hierarchical features $\{\tilde{G}^{flow}_{t,k}\}_{k=1}^{K}$ using bicubic interpolation and pass the concatenated feature into a convolutional block to generate the flow-enhanced RGB guidance feature $G^{guide}_t$ at the $t$-th layer. Notably, we design an iterative architecture to progressively refine the RGB guidance feature, as illustrated in Fig. 4.

Edge decoder. Guided by the flow-based guidance features $\{G^{guide}_t\}_{t=1}^{T}$ learned at each layer, we progressively decode the depth feature in an iterative manner:

$$F^{edge}_{t+1} = \mathrm{FU}(\mathrm{Cat}(F^{edge}_t, G^{guide}_{T-t+1}, F^{coarse}_{T-t+1})), \quad (9)$$

where the FU function indicates the fusion and upsampling operation following (Guo et al. 2020), and the initial feature $F^{edge}_1$ is obtained by a convolutional operation on $F^{coarse}_{T+1}$.
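The flow-enhanced guidance branch (Eq. (8) plus the pyramid fusion) can be sketched at the shape level as follows. This is only a sketch under our own assumptions: the channel counts, convolution kernel sizes, the average-pooling pyramid, and K = 3 are all hypothetical, and the flow map is treated as a single-channel attention map broadcast over channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, Hs, Ws = 1, 16, 32, 32
G_t = torch.rand(B, C, Hs, Ws)          # RGB feature G_t at layer t
flow_t = torch.rand(B, 1, Hs, Ws)       # flow map at matching scale (1-channel assumption)
F_coarse_t = torch.rand(B, C, Hs, Ws)   # depth feature of D_coarse at layer t

# Eq. 8: element-wise flow gating with a residual connection.
G_flow = flow_t * G_t + G_t

# Texture-degraded RGB feature: concatenate with the texture-less depth feature.
x = torch.cat([G_flow, F_coarse_t], dim=1)

# K = 3 pyramid scales: pool, convolve, then unify scales by interpolation.
convs = nn.ModuleList(nn.Conv2d(2 * C, C, 3, padding=1) for _ in range(3))
pyramid = []
for k, conv in enumerate(convs):
    y = F.avg_pool2d(x, kernel_size=2 ** k)    # scale 1/2^k
    y = conv(y)
    pyramid.append(F.interpolate(y, size=(Hs, Ws), mode="bicubic",
                                 align_corners=False))

# Fuse the scale-unified pyramid features into the guidance feature G_t^guide.
G_guide = nn.Conv2d(3 * C, C, kernel_size=1)(torch.cat(pyramid, dim=1))
```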
Finally, we pass $F^{edge}_{T+1}$ into a convolutional block to obtain the edge-refined HR depth map $D_{refine}$.

Loss Function

We train our model by minimizing the smooth-L1 loss between the ground-truth depth map $D_{gt}$ and the output of each sub-network, i.e., the coarse depth prediction $D_{coarse}$ and the refined one $D_{refine}$:

$$L_{dsr} = \sum_{i=1}^{H \times W} \ell\left(D^{coarse}_i - D^{gt}_i\right) + \ell\left(D^{refine}_i - D^{gt}_i\right), \quad (10)$$

where the subscript $i$ denotes the pixel index and the smooth-L1 loss function is defined as:

$$\ell(u) = \begin{cases} 0.5u^2, & \text{if } |u| \le 1 \\ |u| - 0.5, & \text{otherwise.} \end{cases} \quad (11)$$

Experiments

Experimental Setting

To evaluate the performance of our method, we perform extensive experiments on the real-world RGB-D-D dataset (He et al. 2021), the ToFMark dataset (Ferstl et al. 2013) and the synthetic NYU-v2 dataset (Silberman et al. 2012).
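The smooth-L1 penalty of Eq. (11), which matches PyTorch's built-in `SmoothL1Loss` with `beta = 1`, and the two-term objective of Eq. (10) can be sketched as (tensor sizes are hypothetical):

```python
import torch

def smooth_l1(u):
    # Eq. 11: 0.5 u^2 for |u| <= 1, |u| - 0.5 otherwise.
    return torch.where(u.abs() <= 1, 0.5 * u ** 2, u.abs() - 0.5)

d_gt = torch.rand(1, 1, 8, 8)       # ground-truth depth D_gt
d_coarse = torch.rand(1, 1, 8, 8)   # CFUNet output D_coarse
d_refine = torch.rand(1, 1, 8, 8)   # PEANet output D_refine

# Eq. 10: per-pixel penalties on both sub-network outputs, summed over pixels.
loss = smooth_l1(d_coarse - d_gt).sum() + smooth_l1(d_refine - d_gt).sum()
```

The quadratic branch keeps gradients small near zero residual, while the linear branch bounds the influence of large (often noisy) depth errors.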
We implement our model in PyTorch and conduct all experiments on a server with an Intel i5 2.2 GHz CPU and a TITAN RTX GPU with about 24 GB of memory. During training, we randomly crop patches of resolution 256 × 256 as ground truth, and the training and testing data are normalized to the range [0, 1]. To balance the training time and network performance, the parameters $L$, $L'$, $K$, $T$ are set to 3, 6, 3, 2 in this paper. We quantitatively and visually compare our method with 13 state-of-the-art (SOTA) methods: TGV (Ferstl et al. 2013), FBS (Barron and Poole 2016), MSG (Tak-Wai, Loy, and Tang 2016), DJF (Li et al. 2016), DJFR (Li et al. 2019), GbFT (AlBahar and Huang 2019), PAC (Su et al. 2019), CUNet (Deng and Dragotti 2020), FDKN (Kim, Ponce, and Ham 2021), DKN (Kim, Ponce, and Ham 2021), FDSR (He et al. 2021), CTKT (Sun et al. 2021) and DCTNet (Zhao et al. 2022). For simplicity, we name our Structure Flow-Guided method SFG.

Experiments on Real Datasets

Depth maps captured by cheap depth sensors usually suffer from structural distortion and edge noise. To verify the efficiency and robustness of our proposed method, we evaluate it on two challenging benchmarks: the RGB-D-D dataset and the ToFMark dataset.

Evaluation on hand-filled RGB-D-D.
To evaluate the performance of our method on real LR depth maps, we conduct experiments on the RGB-D-D dataset captured by two RGB-D sensors: a Huawei P30 Pro (capturing RGB images and LR depth maps) and a Helios ToF camera (capturing HR depth maps). The LR inputs are shown in Fig. 5; they suffer from low resolution (the LR size is 192 × 144 and the target size is 512 × 384) and random structural missing in the edge regions. Following FDSR (He et al. 2021), we first use 2215 hand-filled RGB/D pairs for training and 405 RGB/D pairs for testing. As listed in the first row of Table 1, the proposed model outperforms the SOTA methods by a significant margin. The first two rows in Fig. 5 show the visual DSR comparisons on the hand-filled RGB-D-D dataset. We can see that the edges in the results of DKN (Kim, Ponce, and Ham 2021) and DCTNet (Zhao et al. 2022) are over-smoothed, and artifacts are visible in the FDSR results. In contrast, our results show more accurate structures without texture copying.

Evaluation on incomplete RGB-D-D. To further verify the DSR performance of our method in the case of edge noise (e.g., edge holes), instead of the hole completion above, we directly test SFG on the unfilled RGB-D-D dataset and achieve the

Figure 5: Visual comparison on the RGB-D-D dataset. (a) LR depth; (b) DKN; (c) FDSR; (d) DCTNet; (e) SFG (ours); (f) Groundtruth. The first (last) two rows show DSR results of hand-filled (incomplete) LR.

Table 1: RMSE comparison on the RGB-D-D dataset. Columns, in order: Bicubic, MSG, DJF, DJFR, CUNet, DKN, FDKN, FDSR, DCTNet, SFG (ours).

Hand-filled: 7.17, 5.50, 5.54, 5.52, 5.84, 5.08, 5.37, 5.34, 5.28, 3.88
Incomplete: 7.90, 5.70, 5.52, 6.54, 5.43, 5.87, 5.59, 5.49, 4.79 [one value lost in extraction]
Noisy: 11.57, 10.36, 5.62, 5.71, 6.13, 5.16, 5.54, 5.63, 5.16, 4.[truncated]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='45 Table 1: Quantitative comparison on RGB-D-D dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Best and second best results are in bold and underline, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' (b) Bicubic (d) DCTNet (c) DKN (e) SFG (ours) (a) Groundtruth Figure 6: Visual comparison on ToFMark dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' DJFR DKN FDKN FDSR DCTNet SFG (ours) RMSE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='28 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='28 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='25 Table 2: Quantitative comparison on ToFMark dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' lowest RMSE as shown in the second row of Table 1.' 
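All quantitative comparisons in Tables 1 and 2 report RMSE between the predicted and ground-truth HR depth maps. As a minimal illustration of this metric (the function name and the NumPy-based form are ours; the paper does not provide code):

```python
import numpy as np

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Root-mean-square error between predicted and ground-truth depth maps."""
    assert pred.shape == gt.shape
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy example: two 2x2 "depth maps" differing by 1 everywhere -> RMSE = 1.0
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
gt = pred + 1.0
print(rmse(pred, gt))  # 1.0
```

Depending on the dataset, RMSE is reported either in centimeters (NYU-v2, Table 3) or in the depth maps' native units.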
Moreover, as shown in the last two rows of Fig. 5, the edges recovered by our method are sharper, with fewer artifacts, and visually closest to the ground-truth map. This is mainly attributed to the edge-focused guidance feature learning of our flow-enhanced pyramid edge attention network.

Evaluation on noisy RGB-D-D and ToFMark. We evaluate the denoising and generalization ability of our method on the ToFMark dataset, which consists of three RGB-D pairs. The LR inputs have irregular noise and limited resolution (the LR depth is 120 × 160 and the target size is 610 × 810). To simulate similar degradation for training, we add Gaussian noise (mean 0 and standard deviation 0.07) and Gaussian blur (kernel size 5) to the 2215 RGB-D pairs from the RGB-D-D dataset to generate the noisy training dataset. The testing dataset consists of the 405 RGB-D pairs from the noisy RGB-D-D dataset and the 3 RGB-D pairs from the ToFMark dataset. As shown in the last row of Table 1 and in Table 2, our method achieves the lowest RMSE on both the noisy RGB-D-D dataset and the ToFMark dataset, which demonstrates its ability to remove noise. As shown in Fig. 6, DKN (Kim, Ponce, and Ham 2021) and DCTNet (Zhao et al. 2022) introduce some texture artifacts and noise in the low-frequency regions, while SFG recovers a clean surface owing to PEA with effective texture removal.

Experiments on Synthetic Datasets

Since most popular methods are designed for synthetic datasets, we further evaluate our method on the NYU-v2 dataset for a more comprehensive comparison. Following the widely used data splitting criterion, we sample 1000 RGB-D pairs for training and use the remaining 449 RGB-D pairs for testing.

Figure 7: Visual comparison of ×8 and ×16 DSR results on the NYU-v2 dataset: (a) LR depth; (b) DJFR; (c) DKN; (d) FDSR; (e) CFUNet; (f) SFG (ours); (g) Groundtruth.

RMSE  TGV    FBS    DJFR  GbFT  PAC   CUNet  FDKN  DKN   FDSR  DCTNet  CTKT  SFG (ours)
×4    4.98   4.29   2.38  3.35  2.39  1.89   1.86  1.62  1.61  1.59    1.49  1.45
×8    11.23  8.94   4.94  5.73  4.59  3.58   3.33  3.26  3.18  3.16    2.73  2.84
×16   28.13  14.59  9.18  9.01  8.09  6.96   6.78  6.51  5.86  5.84    5.11  5.56

Table 3: Quantitative comparison on the NYU-v2 dataset in terms of average RMSE (cm).

Model                       RMSE
CFUNet                      4.22
CFUNet w/o TriSA            4.34
CFUNet w/o cross-attention  4.57

Table 4: Ablation study of CFUNet on the RGB-D-D dataset.

Datasets      SFG   SFG w/o PEANet
RGB-D-D       3.88  4.22
NYU-v2 (×4)   1.45  1.82
NYU-v2 (×8)   2.84  3.76
NYU-v2 (×16)  5.55  5.90

Table 5: Ablation study (in RMSE) of PEANet.
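The noisy-training-data construction described above (a kernel-size-5 Gaussian blur, additive Gaussian noise with mean 0 and standard deviation 0.07, then downsampling to the LR size) can be sketched as follows. This is only an illustrative reading of the protocol: the blur sigma, the edge padding, the normalized depth range, and the nearest-neighbor resampling are our assumptions, since the paper does not specify them.

```python
import numpy as np

def gaussian_kernel_1d(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Normalized 1-D Gaussian kernel (size 5 per the paper; sigma assumed)."""
    r = size // 2
    x = np.arange(-r, r + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(depth_hr: np.ndarray, scale: int = 4,
            noise_std: float = 0.07, seed: int = 0) -> np.ndarray:
    """Blur with a separable 5-tap Gaussian, add N(0, noise_std) noise,
    then downsample by keeping every `scale`-th pixel."""
    k = gaussian_kernel_1d(5, 1.0)
    padded = np.pad(depth_hr.astype(np.float64), 2, mode="edge")
    # Separable convolution: filter rows, then columns.
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)
    noisy = blurred + np.random.default_rng(seed).normal(0.0, noise_std, blurred.shape)
    return noisy[::scale, ::scale]

hr = np.random.default_rng(1).random((64, 64))  # toy HR depth map in [0, 1)
print(degrade(hr, scale=4).shape)  # (16, 16)
```

Applying such a pipeline to each of the 2215 HR depth maps yields synthetic LR inputs whose degradation is closer to the ToFMark test inputs than clean bicubic downsampling would be.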
As shown in Table 3, the proposed method still achieves results comparable to the SOTA methods in all upsampling cases (×4, ×8, ×16). In addition, Fig. 7 shows that our ×8 and ×16 upsampled depth maps have higher accuracy and more convincing visual quality. This verifies that our method not only performs DSR well on low-quality maps with noise and missing structure, but also achieves high precision in the case of large-scale upsampling.

Ablation Analysis

Ablation study on CFUNet. As shown in the first row of Table 4, we still achieve the lowest RMSE among SOTA methods on the RGB-D-D dataset with the single CFUNet (i.e., SFG w/o PEANet). This proves the effectiveness of the learned structure flow map for real DSR. Table 4 also shows that removing the trilateral self-attention (TriSA) or the cross-attention module in CFUNet causes performance degradation on the RGB-D-D dataset, which verifies the necessity of depth feature enhancement for reliable flow map generation.

Ablation study on PEANet. To analyze the effectiveness of PEANet, we train the network with and without PEANet on the synthetic dataset (NYU-v2) and the real-world dataset (RGB-D-D). As shown in Table 5, PEANet consistently brings an RMSE gain under both real and synthetic dataset settings. This is mainly due to our edge-focused guidance feature learning for robust edge refinement. In addition, Fig. 8 shows the guidance features under varying iteration times K in the FPA (flow-enhanced pyramid attention) module, from 0 (w/o FPA) to 3. Visually, as the number of iterations increases, the edge regions tend to receive more attention.

Figure 8: Visual comparison of guidance features using FPA with different iteration times K, i.e., from 0 (w/o FPA) to 3.

Conclusion

In this paper, we proposed a novel structure flow-guided DSR framework for real-world depth super-resolution, which deals with the issues of structural distortion and edge noise. For the structural distortion, a cross-modality flow-guided upsampling network was presented to learn a reliable cross-modality flow between the depth map and the corresponding RGB guidance for the reconstruction of the distorted depth edges, where a trilateral self-attention combines the geometric and semantic correlations for structure flow learning. For the edge noise, a flow-enhanced pyramid edge attention network was introduced to produce edge attention based on the learned flow map and to learn the edge-focused guidance feature for depth edge refinement with a pyramid network.
Extensive experiments on both real-world and synthetic datasets demonstrated the superiority of our method.

Acknowledgement

This work was supported by the National Science Fund of China under Grant Nos. U1713208 and 62072242.

References

AlBahar, B.; and Huang, J.-B. 2019. Guided image-to-image translation with bi-directional feature transformation. In ICCV, 9016–9025.
Barron, J. T.; and Poole, B. 2016. The fast bilateral solver. In ECCV, 617–632.
Blum, M.; Springenberg, J. T.; Wülfing, J.; and Riedmiller, M. 2012. A learned feature descriptor for object recognition in RGB-D data. In ICRA, 1298–1303.
Chen, B.; and Jung, C. 2018. Single depth image super-resolution using convolutional neural networks. In ICASSP, 1473–1477.
Deng, X.; and Dragotti, P. L. 2020. Deep convolutional neural network for multi-modal image restoration and fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99): 1–1.
Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; van der Smagt, P.; Cremers, D.; and Brox, T. 2015. FlowNet: Learning Optical Flow With Convolutional Networks. In ICCV, 2758–2766.
Eitel, A.; Springenberg, J. T.; Spinello, L.; Riedmiller, M.; and Burgard, W. 2015. Multimodal deep learning for robust RGB-D object recognition. In IROS, 681–687.
Ferstl, D.; Reinbacher, C.; Ranftl, R.; Rüther, M.; and Bischof, H. 2013. Image guided depth upsampling using anisotropic total generalized variation. In ICCV, 993–1000.
Guo, C.; Li, C.; Guo, J.; Cong, R.; Fu, H.; and Han, P. 2019. Hierarchical Features Driven Residual Learning for Depth Map Super-Resolution. IEEE Transactions on Image Processing, 2545–2557.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Guo, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Chen, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Cao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Deng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Xu, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Tan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Closed-loop matters: Dual regression networks for single image super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In CVPR, 5407– 5416.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Hao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Lu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Wang, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Chen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Multi-Source Deep Residual Fusion Network for Depth Im- age Super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In RSVT, 62–67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' He, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Li, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Bai, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Cong, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Lin, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Liu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Zhao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Towards Fast and Accurate Real-World Depth Super-Resolution: Benchmark Dataset Baseline and.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In CVPR, 9229–9238.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Hou, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Dai, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Nießner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 3d-sis: 3d semantic instance segmentation of rgb-d scans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In CVPR, 4421–4430.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Kim, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Ponce, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Ham, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Deformable kernel networks for joint image filtering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' International Journal of Computer Vision, 129(2): 579–600.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' You, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhao, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Yang, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Yang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Tan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Tong, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Semantic flow for fast and accurate scene parsing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In ECCV, 775–793.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Huang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='-B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Ahuja, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Yang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Deep joint image filtering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In ECCV, 154–169.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='-B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Ahuja, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Yang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Joint image filtering with deep convolutional networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' IEEE transactions on pattern analysis and machine intelligence, 41(8): 1909–1923.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Liu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Yang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Wu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Robust color guided depth map restoration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' IEEE Transactions on Image Processing, 26(1): 315–327.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhai, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Chen, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Ji, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Zhao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Gao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Depth restoration from RGB-D data via joint adaptive regularization and thresholding on manifolds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' IEEE Trans- actions on Image Processing, 28(3): 1068–1079.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Lutio, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' D’aronco, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Wegner, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Schindler, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Guided super-resolution as pixel-to-pixel transforma- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In ICCV, 8829–8837.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Meuleman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Baek, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Heide, F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' and Kim, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Single-shot monocular rgb-d imaging using uneven double refraction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' In CVPR, 2465–2474.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Newcombe, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Izadi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Hilliges, O.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Molyneaux, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Kim, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Davison, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Kohi, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Shotton, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFQT4oBgHgl3EQf0TbP/content/2301.13416v1.pdf'} +page_content=' Hodges, S.' 
; and Fitzgibbon, A. 2011. KinectFusion: Real-time dense surface mapping and tracking. In ISMAR, 127–136. IEEE.
Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012. Indoor segmentation and support inference from RGB-D images. In ECCV, 746–760.
Song, X.; Dai, Y.; Zhou, D.; Liu, L.; Li, W.; Li, H.; and Yang, R. 2020. Channel attention based iterative residual learning for depth map super-resolution. In CVPR, 5631–5640.
Su, H.; Jampani, V.; Sun, D.; Gallo, O.; Learned-Miller, E.; and Kautz, J. 2019. Pixel-adaptive convolutional neural networks. In CVPR, 11166–11175.
Sun, B.; Ye, X.; Li, B.; Li, H.; Wang, Z.; and Xu, R. 2021. Learning Scene Structure Guidance via Cross-Task Knowledge Transfer for Single Depth Super-Resolution. In CVPR, 7792–7801.
Hui, T.-W.; Loy, C. C.; and Tang, X. 2016. Depth Map Super-Resolution by Deep Multi-Scale Guidance. In ECCV, 353–369.
Tang, Q.; Cong, R.; Sheng, R.; He, L.; Zhang, D.; Zhao, Y.; and Kwong, S. 2021. BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and Monocular Depth Estimation. In ACMMM, 2148–2157.
Wang, K.; Zhang, Z.; Yan, Z.; Li, X.; Xu, B.; Li, J.; and Yang, J. 2021. Regularizing Nighttime Weirdness: Efficient Self-Supervised Monocular Depth Estimation in the Dark. In ICCV, 16055–16064.
Wang, Z.; Ye, X.; Sun, B.; Yang, J.; Xu, R.; and Li, H. 2020. Depth upsampling based on deep edge-aware learning. Pattern Recognition, 103: 107274.
Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, G.; Li, J.; and Yang, J. 2022. Learning Complementary Correlations for Depth Super-Resolution With Incomplete Data in Real World. IEEE Transactions on Neural Networks and Learning Systems.
Yang, J.; Ye, X.; Li, K.; Hou, C.; and Wang, Y. 2014. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model. IEEE Transactions on Image Processing, 23(8): 3443–3458.
Zhao, Z.; Zhang, J.; Xu, S.; Lin, Z.; and Pfister, H. 2022. Discrete cosine transform network for guided depth map super-resolution. In CVPR, 5697–5707.
Zhu, J.; Zhai, W.; Cao, Y.; and Zha, Z.-J. 2018. Co-occurrent structural edge detection for color-guided depth map super-resolution. In MMM, 93–105.
Zuo, Y.; Wu, Q.; Fang, Y.; An, P.; Huang, L.; and Chen, Z. 2019. Multi-scale frequency reconstruction for guided depth map super-resolution via deep residual network. IEEE Transactions on Circuits and Systems for Video Technology, 30(2): 297–306.