diff --git "a/1dFQT4oBgHgl3EQf1TYk/content/tmp_files/load_file.txt" "b/1dFQT4oBgHgl3EQf1TYk/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/1dFQT4oBgHgl3EQf1TYk/content/tmp_files/load_file.txt" @@ -0,0 +1,1289 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf,len=1288 +page_content='Recurrent Structure Attention Guidance for Depth Super-Resolution Jiayi Yuan*, Haobo Jiang*, Xiang Li, Jianjun Qian†, Jun Li†, Jian Yang PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education Jiangsu Key Lab of Image and Video Understanding for Social Security School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China {jiayiyuan, jiang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='hao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='bo, xiang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='implus, csjqian, junli, csjyang}@njust.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='cn Abstract Image guidance is an effective strategy for depth super- resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Generally, most existing methods employ hand- crafted operators to decompose the high-frequency (HF) and low-frequency (LF) ingredients from low-resolution depth maps and guide the HF ingredients by directly concatenat- ing them with image features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' However, the hand-designed operators usually cause inferior HF maps (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=', distorted or structurally missing) due to the diverse appearance of com- plex depth maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Moreover, the direct concatenation often re- sults in weak guidance because not all image features have a positive effect on the HF maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In this paper, we de- velop a recurrent structure attention guided (RSAG) frame- work, consisting of two important parts.' 
First, we introduce a deep contrastive network with multi-scale filters for adaptive frequency-domain separation, which adopts contrastive networks from large filters to small ones to calculate the pixel contrasts for adaptive, high-quality HF predictions. Second, instead of the coarse concatenation guidance, we propose a recurrent structure attention block, which iteratively utilizes the latest depth estimation and the image features to jointly select clear patterns and boundaries, aiming at providing refined guidance for accurate depth recovery. In addition, we fuse the features of the HF maps to enhance the edge structures in the decomposed LF maps. Extensive experiments show that our approach obtains superior performance compared with state-of-the-art depth super-resolution methods.

Introduction

Depth super-resolution (DSR) is a fundamental low-level vision topic in computer vision, as it plays an important role in a variety of applications such as 3D reconstruction (Hou, Dai, and Nießner 2019), autonomous driving (Caesar et al. 2020), and virtual reality (Meuleman et al. 2020). Generally, DSR aims to precisely recover a high-resolution depth map from a given low-resolution depth map. Recently, image-guided DSR frameworks have become more and more popular, as they have demonstrated remarkable progress by borrowing the structures and boundaries of a high-resolution image to improve the depth map (Hui, Loy, and Tang 2016).

* These authors contributed equally. † Corresponding authors.
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Image guided DSR framework. (a) Popular image guided residual DSR framework; (b) Our recurrent structure attention guided DSR framework.
As shown in Fig. 1 (a), most efforts (Hui, Loy, and Tang 2016; Guo et al. 2019; Zuo et al. 2019b; Li et al. 2019) usually 1) use hand-crafted operators (e.g., hand-designed filters) to perform an early spectral decomposition of the low-resolution depth map into high-frequency (HF) and low-frequency (LF) components, 2) coarsely implement the image guidance by directly concatenating the image features with the HF maps, and 3) run a simple up-sampling, such as bicubic interpolation, to bring the decomposed low-resolution LF map to high resolution. However, this framework still suffers from three challenging problems, as follows.
Firstly, the hand-designed operators often cause a weak spectral decomposition, as they can hardly handle the diverse structures of complex depth maps, resulting in lost object structures in the HF map of Fig. 2 (b). Secondly, the direct feature concatenation results in weak image guidance since, based on our observation, the complex textures of the image usually produce inferior features. For example, Fig. 2 (f-h) show clear features of the ceramic bottle (white box) and poor features of the poster (yellow box), corresponding to the complex and simple textures of the image in Fig. 2 (j), respectively. Thirdly, the bicubic interpolation also results in blurred edges in the LF map of Fig. 2 (d), because it is unsuitable for up-sampling all kinds of structures.

Figure 2: Visualizations of the decomposed HF&LF and guidance feature maps. (b) and (d) show a weak frequency-domain separation using the hand-designed operators (Hui, Loy, and Tang 2016).
(f-h) show image guidance features with redundant textures and noise in DMSG (Hui, Loy, and Tang 2016), DJFR (Li et al. 2019), and our network using direct concatenation. Compared with them, our method produces better HF structure in (c), sharper LF boundaries in (e), and clearer guidance structure in (i). The LR depth map and the color image are plotted in (a) and (j).

To address these problems, we develop a novel recurrent structure attention guided (RSAG) framework for high-quality DSR, shown in Fig. 1 (b), through three aspects. First of all, we introduce a deep contrastive network with multi-scale filters (DCN) to effectively decompose the HF and LF components of the input depth, instead of using hand-designed operators. DCN subtly stacks simple contrastive networks (Xu et al. 2020) three times, from large filters to small ones, for a coarse-to-fine HF prediction with contextual structures, and calculates the LF component by subtracting the HF prediction from the input depth map. In addition, to better guide the depth features, we propose a recurrent structure attention (SA) block to select the useful image features, instead of the direct concatenation guidance. The key step of SA is to add the absolute values of the contrastive attention features of the image and the latest depth prediction, and then calculate an attention map by employing channel and spatial attention operators. Finally, we present an HF&LF feature fusion (HLF) block to improve the blurred edges of the LF component by concatenating the HF features produced by our SA block, as their contextual structure can enhance the edges. Overall, our RSAG framework significantly improves on hand-crafted spectral decomposition and image guidance.
In summary, our contributions are as follows:
- We introduce a deep contrastive network with multi-scale filters (DCN) for robust HF and LF reconstruction, where the HF structure is extracted by stacking pixel-wise contrast operators from large to small kernels.
- We propose a novel recurrent structure attention (SA) block that combines the latest depth prediction with the image features to select useful image guidance features.
- Extensive experiments on benchmark datasets verify the effectiveness of the proposed framework, which achieves state-of-the-art restoration performance.

Related Work

In this section, we mainly review the spectral decomposition and cross-modality fusion mechanisms used in previous work on depth map super-resolution (DSR).

Spectral Decomposition in DSR

Since the HF component of the depth map can provide sufficient structure information that coincides well with the image boundaries, most methods adopt an early spectral decomposition for efficient DSR. One line of methods (Makarov, Aliev, and Gerasimova 2017; Xiao et al. 2018; Li et al. 2019; Zuo et al. 2019b; Guo et al. 2019) regards the interpolated depth input as the LF component and adds a skip connection to transfer it to the end of the network. This global residual learning forces the network to focus on recovering the HF details.
Another line of methods adopts hand-designed filters (Hui, Loy, and Tang 2016; Yang et al. 2017) or edge-attention blocks (Ye, Duan, and Li 2018; Chen and Jung 2018) to extract HF information. However, these methods require an additional completion operation, since the HF outputs often include broken edges and holes. Recently, octave convolution (Chen et al. 2019), a plug-and-play convolutional unit, has been utilized for frequency division in a DSR network (He et al. 2021). However, it separates the frequency domain in the embedding space, which does not guarantee that the HF information is completely extracted. Instead, we propose a simple, fast, and adaptive separation method at the pixel level to provide reliable HF and LF maps.

Cross-modality Fusion Mechanism

Multi-path/scale Learning. Previous methods (Li et al. 2016; Lutio et al. 2019; Zhu et al. 2018; Chen and Jung 2018; Hao et al. 2019; Su et al. 2019) extract features in the color space and the depth space through two independent paths, respectively, and transfer common structures through a joint branch.
Figure 3: The pipeline of our RSAG framework. It consists of a green DCN module for the adaptive frequency-domain separation, an orange recurrent SA module for the HF component recovery, and a blue module for the LF component recovery.
However, the multi-path methods may cause missing details, since the cross-modality features are only fused in one specific layer. To handle this problem, recent methods (Hui, Loy, and Tang 2016; Guo et al. 2019; He et al. 2021; Zuo et al. 2019b,a; Yan et al. 2022) adopt a multi-scale fusion strategy to merge the cross-modality features at different levels. Although the multi-scale methods have achieved considerable performance, the coarse aggregation may cause texture copying and depth bleeding.

Recursive Learning. In order to generate higher-level details without introducing excessive parameters, recursive learning repeatedly applies similar modules for progressive image reconstruction. Existing recursive DSR methods (Wen et al. 2019; Yang et al. 2019; Song et al. 2020) construct the depth map in a coarse-to-fine manner by regarding the previous crude depth output as the input of the DSR network.
Even though multi-supervision and residual learning avoid vanishing or exploding gradients to a certain extent, there still exists the risk of falling into a local optimum. In contrast, we propose a recurrent guidance for DSR, which takes the previous depth prediction as the guidance information for the next recursion. As the recursion progresses, the continuously refined guidance acts as a strong constraint for better selecting the image features.

Attention Mechanism. In recent years, the attention mechanism (Zhang et al. 2019; Guo et al. 2020; Wang et al. 2021) has achieved significant improvements in the low-level vision field. In the DSR task, Song et al. (2020) utilize channel attention to focus on the HF depth. Meanwhile, Tang et al. (2021) design an HF attention bridge to extract useful HF information during the depth estimation process and feed it into the reconstruction network.
Although these attention operations selectively highlight the HF information, they do not essentially solve the problem of texture copying and inconsistent boundaries in guidance images. The work most related to our method is (Zhong et al. 2021), which also aims to find the consistent structure with an attention mechanism. However, there are two major differences. 1) The proposed method uses contrastive networks to explore the cross-modality correlation in the HF layer, since the HF modalities of the depth map and the image are closer. 2) Compared with single image guidance, we complement the guidance with the progressively refined depth prediction in a recursive fashion to accurately mine the consistent structure.

Approach

In this section, we introduce our recurrent structure attention guided (RSAG) framework for DSR. As shown in Fig. 3, RSAG contains three modules: a deep contrastive network with multi-scale filters (DCN), a recurrent structure attention (SA) module, and an HF&LF feature fusion (HLF) module. DCN adaptively learns the HF and LF decomposition by cascading contrastive networks from large filters to small ones. Then, by introducing the latest depth prediction to complement the image guidance, the recurrent SA jointly selects the useful and clear structure features of the image for accurate HF depth reconstruction. Furthermore, during the reconstruction process, the HF features guided by the recurrent SA are integrated with the LF features to refine the LF edges.
Before presenting our method, we introduce some notation: a high-resolution (HR) image is denoted by Y^{hr} ∈ R^{H×W}, where H and W are the image height and width; an HR depth map by D^{hr} ∈ R^{H×W}; and a low-resolution (LR) depth map by D^{lr} ∈ R^{pH×pW}, where 0 < p ≤ 1 is the downscaling factor (e.g., 1/4, 1/8, and 1/16). For D^{hr}, we denote its LF and HF components by D^{lf} ∈ R^{H×W} and D^{hf} ∈ R^{H×W}.

Deep Contrastive Network with Multi-scale Filters

As shown in Fig. 4 (a), we aim to design a DCN network that yields high-quality frequency components, instead of relying on a hand-designed operator for the frequency-domain decomposition. Inspired by the contrast learning operator (Xu et al. 2020), which was designed for RGB image decomposition, we stack it three times, from large filters to small ones, to build a DCN with multi-scale filters for extracting high-quality HF components of the depth map.
Figure 4: The architectures of (a) DCN and (b) SA. DCN aims to decompose the HF and LF maps of a depth map by stacking three contrastive networks from large to small filters. SA adaptively filters out unwanted textures and highlights the useful HF regions of the image.
Specifically, given an LR depth map D^{lr} ∈ R^{pH×pW} as input, we first upscale it to the desired resolution, D^{bic} ∈ R^{H×W}, by bicubic interpolation. Denoting the number of layers of our DCN network by I, the HF map D^{hf} is defined by the recursive formulation:

    D^{hf} = H^{I}_{I},  (1)
    H^{I}_{i} = Sigmoid(Conv_{k}(H^{I}_{i-1}) − Conv_{k-2}(H^{I}_{i-1})),  (2)

where H^{I}_{0} = D^{bic}; H^{I}_{i} is the HF feature of the i-th layer in the DCN network with I layers (1 ≤ i ≤ I); Conv_{k}(·) represents a k × k convolution followed by a PReLU (He et al. 2015) activation; and k = 2(I − i) + 3. We set I = 3 in this paper. Then, the LF map D^{lf} is calculated by subtracting the HF map D^{hf} from D^{bic}:

    D^{lf} = D^{bic} − D^{hf}.  (3)

To better understand the DCN network with different numbers of layers (I = 1, 2, 3), Fig. 5 shows their HF features. Fig. 5 (a-c) plot the HF features H^{1}_{1}, H^{2}_{1}, and H^{2}_{2} of shallow DCN networks. Compared to H^{1}_{1} and H^{2}_{2}, the HF feature H^{3}_{3} of the deeper DCN network, shown in Fig. 5 (f), has the clearest and most complete edges. According to Fig. 5 (d-f), it is worth noting that a deeper DCN tends to weaken depth information and enhance structural information (e.g., the edge of the plaster statue behind the teapot).
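To make the decomposition concrete, the following is a minimal PyTorch sketch of Eqs. (1)-(3). It is an illustrative reading of the equations rather than the authors' released code; the single-channel width and the zero-padding choice are assumptions.

    import torch
    import torch.nn as nn

    class DCN(nn.Module):
        # Deep contrastive network with multi-scale filters (Eqs. 1-3).
        # Layer i subtracts a (k-2)x(k-2) convolution from a kxk one (a
        # pixel-contrast operator) and squashes the result with a sigmoid;
        # k = 2(I - i) + 3 shrinks from 7 to 3 as i runs from 1 to I = 3.
        def __init__(self, channels=1, num_layers=3):
            super().__init__()
            self.layers = nn.ModuleList()
            for i in range(1, num_layers + 1):
                k = 2 * (num_layers - i) + 3
                self.layers.append(nn.ModuleDict({
                    "conv_k": nn.Sequential(
                        nn.Conv2d(channels, channels, k, padding=k // 2),
                        nn.PReLU()),
                    "conv_k2": nn.Sequential(
                        nn.Conv2d(channels, channels, k - 2, padding=(k - 2) // 2),
                        nn.PReLU()),
                }))

        def forward(self, d_bic):
            h = d_bic                          # H_0 = D^bic
            for layer in self.layers:          # Eq. (2), applied I times
                h = torch.sigmoid(layer["conv_k"](h) - layer["conv_k2"](h))
            d_hf = h                           # Eq. (1)
            d_lf = d_bic - d_hf                # Eq. (3)
            return d_hf, d_lf

Given the bicubic-upsampled depth d_bic, calling DCN()(d_bic) returns the two maps; keeping the operator single-channel matches the paper's description of a separation performed at the pixel level.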
Recurrent Structure Attention

Removing textures while making full use of consistent boundaries in the image is a key challenge for guided DSR. Instead of trivial cross-modality feature concatenation, we propose a novel recurrent structure attention (SA) mechanism to bridge the modality gap between the depth input and the image guidance. As shown in Fig. 4 (b), we put our efforts into the following two aspects: (1) A cross-modality structure feature attention is designed, where the consistent structures are highlighted by contrast operators and the redundant features (e.g., textures and inconsistent boundaries) are suppressed at the channel and spatial levels. (2) To better guide depth detail restoration, useful image features are selected recursively with the progressively refined depth prediction.

Structure Attention. Given the image Y^{hr} ∈ R^{H×W} and the same-size depth map D^{hr} ∈ R^{H×W} as input, we first use a learnable feature extractor to produce a set of hierarchical image and depth features, which match the corresponding HF features in the decoder path. Then, sharing the same spirit as the contrastive networks used in DCN, we exploit the contextual information under multi-level receptive fields and calculate the high-contrast features as HF components. We further sum the depth and image contrast maps and apply absolute-value operations to enforce their consistent structures and prevent HF smoothing caused by positive and negative cancellations. This process can be formulated as:

    J = |F_{y1} − F_{y2}| + |F_{d1} − F_{d2}|,  (4)
    F_{yi} = Conv_{2i-1}(LFE(Y^{hr})),  (5)
    F_{di} = Conv_{2i-1}(LFE(D^{hr})),  i ∈ {1, 2},  (6)

where LFE(·) denotes the learnable feature extractor for initial hierarchical feature learning, and Conv_{2i-1}(·) are the convolutions with kernel size 2i − 1 followed by a PReLU (He et al. 2015) activation.
F_{yi} and F_{di} are the image and depth features extracted under different receptive fields, respectively. J denotes the joint HF features, which are further fed into the channel and spatial attention blocks (Woo et al. 2018). Such a design encourages learning the interaction between different channels and focusing on the important spatial locations. The features after the attention blocks, denoted as the structure-aware features S_a, can be formulated as:

    S_a = SpatA(CA(J)),  (7)

where SpatA(·) and CA(·) represent the spatial attention and the channel attention blocks, respectively. At last, S_a is added to the image features and combined with the depth features. The SA process is formulated as follows:

    G = SA(D^{hr}, Y^{hr}) = Cat(LFE(D^{hr}), S_a + γ · LFE(Conv_{1}(Y^{hr}))),  (8)

where γ denotes a learnable parameter controlling the degree of highlighting, Cat(·) denotes feature concatenation, and Conv_{1}(·) is a 1 × 1 convolution followed by a PReLU (He et al. 2015) activation. G represents the fused guidance features fed into the decoder path.

Recurrent Mechanism with Refined Depth Guidance. As mentioned above, compared to single image guidance, the HR depth guidance shares the same modality as the LR depth input, which helps our attention module locate and select consistent edge structures in the image guidance. Clearer depth structures yield more accurate guidance information for better detail restoration; hence we refine the depth guidance in a recursive manner.
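A sketch of the SA block may help fix the data flow of Eqs. (4)-(8). The LFE extractor, the channel width c, the CBAM-style attention internals (Woo et al. 2018), and the zero initialization of the learnable γ are all assumptions not pinned down by the text:

    import torch
    import torch.nn as nn

    class StructureAttention(nn.Module):
        def __init__(self, c=64):
            super().__init__()
            # hypothetical learnable feature extractors (LFE) for image / depth
            self.lfe_y = nn.Sequential(nn.Conv2d(3, c, 3, padding=1), nn.PReLU())
            self.lfe_d = nn.Sequential(nn.Conv2d(1, c, 3, padding=1), nn.PReLU())
            # Conv_{2i-1}, i in {1, 2}: kernel sizes 1 and 3 (Eqs. 5-6)
            self.conv1_y = nn.Sequential(nn.Conv2d(c, c, 1), nn.PReLU())
            self.conv3_y = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.PReLU())
            self.conv1_d = nn.Sequential(nn.Conv2d(c, c, 1), nn.PReLU())
            self.conv3_d = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.PReLU())
            # simplified channel attention: global pooling -> MLP -> sigmoid
            self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                    nn.Conv2d(c, c // 8, 1), nn.PReLU(),
                                    nn.Conv2d(c // 8, c, 1), nn.Sigmoid())
            # spatial attention over pooled channel statistics
            self.spat_a = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
            self.conv1_img = nn.Sequential(nn.Conv2d(3, 3, 1), nn.PReLU())  # Conv_1 in Eq. (8)
            self.gamma = nn.Parameter(torch.zeros(1))  # learnable highlight degree

        def forward(self, d_hr, y_hr):
            fy, fd = self.lfe_y(y_hr), self.lfe_d(d_hr)
            # Eq. (4): summed absolute contrasts of image and depth features
            j = (self.conv1_y(fy) - self.conv3_y(fy)).abs() \
                + (self.conv1_d(fd) - self.conv3_d(fd)).abs()
            j = j * self.ca(j)                                   # channel attention
            stats = torch.cat([j.mean(dim=1, keepdim=True),
                               j.max(dim=1, keepdim=True)[0]], dim=1)
            s_a = j * self.spat_a(stats)                         # Eq. (7)
            # Eq. (8): highlighted image features concatenated with depth features
            g = torch.cat([fd, s_a + self.gamma * self.lfe_y(self.conv1_img(y_hr))],
                          dim=1)
            return g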
Recurrent Mechanism with Refined Depth Guidance. As mentioned above, compared to single-image guidance, the HR depth guidance shares the same modality as the LR depth input, which helps our attention module locate and select consistent edge structures in the image guidance. Clearer depth structures provide more accurate guidance information for detail restoration; hence we refine the depth guidance in a recursive manner.

[Figure 5: Visual HF features f_i^I of our DCN network at different layers (I = 1, 2, 3).]

Specifically, for the first recursion, the input depth guidance is the up-sampled version of the LR depth map, D^{bic} ∈ R^{H×W}, obtained by bicubic interpolation, i.e., G_0 = SA(D^{bic}, Y^{hr}). For the k-th recursion, the latest output of our DSR network is fed back as the depth input of the attention module. The recurrent SA can be formulated as follows:

G_k = SA(HLF(G_{k-1}, D^{lf}, D^{hf}), Y^{hr}),    (9)

where HLF(·) is the HF&LF feature fusion operation. As shown in Fig. 6, the image feature map contains complex patterns and unclear boundaries before being fed into the SA module. As the recursion progresses, complex textures (e.g., the background pattern and the cylinder label) are removed.

[Figure 6: Visual image features calculated by Eq. (8) when the recursive step varies from k = 0 to 2.]
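In code, the recursion around Eq. (9) is a short loop. The sketch below assumes `hlf` is a callable wrapping the HF&LF fusion network of the next subsection and that it takes the guidance features together with the decomposed D^{lf}/D^{hf} features; this signature is a simplification of the full pipeline:

```python
import torch.nn.functional as F

def recurrent_refinement(d_lr, y_hr, sa, hlf, d_lf, d_hf, scale=8, steps=2):
    """Recursion around Eq. (9); `sa` is a StructureAttention instance."""
    # G_0 uses the bicubically up-sampled LR depth as the initial guidance.
    d_bic = F.interpolate(d_lr, scale_factor=scale, mode='bicubic',
                          align_corners=False)
    g = sa(d_bic, y_hr)
    preds = []
    for _ in range(steps):              # the paper fixes k = 2 recursions
        d_k = hlf(g, d_lf, d_hf)        # latest HR depth estimate
        preds.append(d_k)               # every recursion is supervised (Eq. 10)
        g = sa(d_k, y_hr)               # Eq. (9): refreshed guidance features
    return preds
```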
HF&LF feature fusion module

Different from previous methods that directly up-sample the LF component by bicubic interpolation, we propose an HF&LF feature fusion (HLF) module to reconstruct the HF component and improve the blurred LF edges. The HF reconstruction module is built upon the U-Net architecture, including an encoder path, an attention-based guidance branch, and a decoder branch (see the orange blocks of Fig. 3). Rich hierarchical features extracted from the guidance branch and the encoder-decoder structure are fused by repeated residual convolutional block attention modules (Woo et al. 2018; Guo et al. 2020). Then, the resulting contextual features in the decoder branch are concatenated with the LF features at multiple levels to refine the edges during the LF reconstruction (see the blue blocks of Fig. 3).

Loss Function

We train our model by minimizing the smooth-L1 loss between the network output D^{hr} of each recursion and the ground-truth depth map D^{gt}. For the k-th recursion, the loss function L_k(·) is defined as:

L_k(D^{hr}_k, D^{gt}) = Σ_{i=1}^{N} smooth_{L1}(D^{hr}_{k,i}, D^{gt}_i),    (10)

where smooth_{L1}(x) = 0.5x^2 if |x| < 1, and |x| - 0.5 otherwise. D^{hr}_k denotes the network output of the k-th recursion, and N and i indicate the number of pixels and the pixel index in the map, respectively. We obtain K depth outputs, and the overall loss is expressed as:

L_s = Σ_{k=1}^{K} λ_k L_k,    (11)

where λ_k is the weight coefficient of the k-th loss.
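Eqs. (10)-(11) map directly onto PyTorch's built-in smooth-L1 loss. The sketch below consumes the prediction list returned by the recursion sketch above; `reduction='sum'` mirrors the per-pixel sum in Eq. (10) (a mean reduction would only rescale the weights):

```python
import torch.nn.functional as F

def recursive_loss(preds, d_gt, lambdas=None):
    """Overall loss of Eq. (11): weighted smooth-L1 terms of Eq. (10)."""
    if lambdas is None:
        lambdas = [0.5] * len(preds)    # the paper sets lambda_k = 0.5
    return sum(lam * F.smooth_l1_loss(d_k, d_gt, reduction='sum')
               for lam, d_k in zip(lambdas, preds))
```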
Experiment

Experimental Setting

To evaluate the performance of our framework, we conduct extensive experiments on five datasets:

Middlebury (Hirschmuller and Scharstein 2007) & MPI Sintel (Butler et al. 2012): The training set consists of 34 RGB/D pairs from the Middlebury dataset and 58 RGB/D pairs from the MPI Sintel dataset. The testing set includes 6 RGB/D pairs (Art, Books, Dolls, Laundry, Mobeius, Reindeer) from Middlebury 2005.

NYU-v2 (Silberman et al. 2012): Following the widely used data-splitting manner, we sample 1,000 pairs for training and use the remaining 449 pairs for testing.

Lu (Lu, Ren, and Liu 2014): We test 6 RGB/D pairs from this dataset with the model trained on NYU-v2.

RGB-D-D (He et al. 2021): Following FDSR (He et al. 2021), we use 405 RGB/D pairs for evaluation with the model trained on NYU-v2.

We compare our method with 3 traditional methods: TGV (Ferstl et al. 2013), FBS (Barron and Poole 2016), and SDF (Ham, Cho, and Ponce 2017); 3 classical methods: DJF (Li et al. 2016), DMSG (Hui, Loy, and Tang 2016), and DGDIE (Gu et al. 2017); and 12 state-of-the-art (SOTA) methods: SVLRM (Pan et al. 2019), GSPRT (Lutio et al. 2019), DJFR (Li et al. 2019), PacNet (Su et al. 2019), GbFT (AlBahar and Huang 2019), CUNet (Deng and Dragotti 2020), PMBAN (Ye et al. 2020), DKN (Kim, Ponce, and Ham 2021), FDKN (Kim, Ponce, and Ham 2021), FDSR (He et al. 2021), AHMF (Zhong et al. 2021), and CTKT (Sun et al. 2021).
Mean absolute difference (MAD) and root mean squared error (RMSE) are used to evaluate the performance. During training, we randomly extract patches with stride = {96, 96, 128} for scale = {4, 8, 16}, respectively, as the ground truth, and use bicubic interpolation to obtain the LR inputs. The training and testing data are normalized to the range [0, 1]. To balance the training time and network performance, we set the number of recurrent steps of the SA blocks to k = 2 in this paper. The loss weights are set as λ_k = 0.5. The proposed method is implemented using PyTorch with one RTX 2080Ti GPU. For simplicity, we name our Recurrent Structure Attention Guided framework RSAG.
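For concreteness, the data preparation just described can be sketched as below; the random-crop helper and the per-sample min-max normalization are assumptions (the paper only states that data are normalized to [0, 1]), and (N, 1, H, W) tensors are assumed:

```python
import torch
import torch.nn.functional as F

def make_training_pair(depth_hr, image_hr, scale=8, patch=96):
    """Crop an HR patch and synthesize the LR depth input by bicubic
    down-sampling; patch sizes follow the paper ({96, 96, 128} for x4/x8/x16)."""
    top = torch.randint(0, depth_hr.shape[-2] - patch + 1, (1,)).item()
    left = torch.randint(0, depth_hr.shape[-1] - patch + 1, (1,)).item()
    d_gt = depth_hr[..., top:top + patch, left:left + patch]
    y_hr = image_hr[..., top:top + patch, left:left + patch]
    # Normalize the depth to [0, 1] (per-sample min-max is an assumption).
    d_gt = (d_gt - d_gt.min()) / (d_gt.max() - d_gt.min() + 1e-8)
    # The LR input is the bicubically down-sampled ground truth.
    d_lr = F.interpolate(d_gt, scale_factor=1.0 / scale, mode='bicubic',
                         align_corners=False)
    return d_lr, y_hr, d_gt
```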
Model   |      Art       |     Books      |     Dolls      |    Laundry     |    Mobeius     |    Reindeer
        |  ×4   ×8  ×16  |  ×4   ×8  ×16  |  ×4   ×8  ×16  |  ×4   ×8  ×16  |  ×4   ×8  ×16  |  ×4   ×8  ×16
Bicubic | 1.15 2.15 4.04 | 0.41 0.72 1.32 | 0.44 0.76 1.31 | 0.65 1.17 2.17 | 0.41 0.76 1.37 | 0.66 1.16 2.26
DJF     | 0.40 1.07 2.78 | 0.16 0.45 1.00 | 0.20 0.49 0.99 | 0.28 0.71 1.67 | 0.18 0.46 1.02 | 0.23 0.60 1.36
DMSG    | 0.46 0.76 1.53 | 0.15 0.41 0.76 | 0.25 0.51 0.87 | 0.30 0.46 1.12 | 0.21 0.43 0.76 | 0.31 0.52 0.99
DGDIE   | 0.48 1.20 2.44 | 0.30 0.58 1.02 | 0.34 0.63 0.93 | 0.35 0.86 1.56 | 0.28 0.58 0.98 | 0.35 0.73 1.29
GSPRT   | 0.48 0.74 1.48 | 0.21 0.38 0.76 | 0.28 0.48 0.79 | 0.33 0.56 1.24 | 0.24 0.49 0.80 | 0.31 0.61 1.07
DJFR    | 0.33 0.71 1.72 | 0.19 0.38 0.78 | 0.25 0.44 0.79 | 0.22 0.50 1.12 | 0.20 0.38 0.76 | 0.24 0.45 0.96
PacNet  | 0.40 0.82 1.59 | 0.22 0.49 0.84 | 0.28 0.53 0.85 | 0.28 0.56 1.08 | 0.23 0.44 0.79 | 0.29 0.53 1.00
CUNet   | 0.47 1.06 2.34 | 0.33 0.63 1.41 | 0.40 0.67 1.27 | 0.41 0.80 1.88 | 0.29 0.65 1.12 | 0.35 0.69 1.14
PMBAN   | 0.28 0.55 1.11 | 0.19 0.30 0.53 | 0.23 0.37 0.64 | 0.21 0.36 0.74 | 0.18 0.31 0.57 | 0.22 0.39 0.75
DKN     | 0.25 0.51 1.22 | 0.16 0.30 0.52 | 0.21 0.35 0.61 | 0.17 0.34 0.81 | 0.16 0.28 0.54 | 0.20 0.38 0.70
AHMF    | 0.22 0.50 1.04 | 0.14 0.30 0.50 | 0.18 0.35 0.62 | 0.15 0.34 0.73 | 0.14 0.28 0.53 | 0.18 0.37 0.64
CTKT    | 0.25 0.53 1.44 | 0.11 0.26 0.67 | 0.16 0.36 0.65 | 0.16 0.36 0.76 | 0.13 0.27 0.69 | 0.17 0.35 0.77
RSAG    | 0.13 0.23 0.88 | 0.09 0.14 0.50 | 0.15 0.20 0.57 | 0.10 0.19 0.58 | 0.12 0.17 0.42 | 0.13 0.18 0.52

Table 1: Quantitative comparisons (in MAD) on the Middlebury dataset.
Scale | Bicubic   TGV   DJF    FBS   DMSG  DJFR  GbFT  PacNet  FDKN  DKN   FDSR  CTKT  DCTNet  RSAG
×4    |   8.16   4.98  3.54   4.29  3.02  2.38  3.35   2.39   1.86  1.62  1.61  1.49   1.59   1.23
×8    |  14.22  11.23  6.20   8.94  2.99  4.94  5.73   4.59   3.58  3.26  3.18  2.73   3.16   2.51
×16   |  22.32  28.13 10.21  14.59  9.17  9.18  9.01   8.09   6.96  6.51  5.86  5.11   5.84   5.27

Table 2: Quantitative comparisons (in RMSE (cm)) on the NYU-v2 dataset.
Model |        Lu        |      RGB-D-D
      |  ×4   ×8   ×16   |  ×4   ×8   ×16
DJF   | 1.65  3.96  6.75 | 3.41  5.57  8.15
DJFR  | 1.15  3.57  6.77 | 3.35  5.57  7.99
FDKN  | 0.82  2.10  5.05 | 1.18  1.91  3.41
DKN   | 0.96  2.16  5.11 | 1.30  1.96  3.42
FDSR  | 0.81  1.91  4.64 | 1.16  1.82  3.06
RSAG  | 0.79  1.67  4.30 | 1.14  1.75  2.96

Table 3: Quantitative comparisons (in RMSE) on the Lu dataset and the RGB-D-D dataset.

Comparing to State-of-the-Arts

Quantitative Comparisons. We first report the quantitative evaluation against the SOTA methods under the same conditions. Table 1 shows the results on the Middlebury dataset under three up-scaling factors. It can be observed that the proposed RSAG outperforms the SOTA methods by significant margins for all up-scaling factors. For example, RSAG decreases the average MAD by 25% (×4), 48% (×8), and 30% (×16) compared to CTKT (Sun et al. 2021). We further evaluate the proposed method on the NYU-v2 dataset in Table 2. The proposed method yields the best performance for ×4 and ×8 DSR and comparable performance for ×16 DSR. Compared with the second-best method, RSAG decreases the average RMSE by 17% for ×4 DSR. To verify the generalization ability of our method on the Lu and RGB-D-D datasets, we test RSAG for ×4, ×8, and ×16 DSR with the model trained on NYU-v2. As shown in Table 3, RSAG achieves competitive generalization results for all up-sampling cases, which demonstrates the accuracy and effectiveness of our method.
Model                        Middlebury   NYU-v2
baseline                     0.26         3.60
baseline + DCN               0.24         3.10
baseline + DCN + HLF         0.23         3.02
baseline + DCN + HLF + SA    0.19         2.51

Table 4: Ablation studies of RSAG (in MAD) on Middlebury dataset and (in RMSE) on NYU-v2 dataset for ×8 DSR.

Visual Comparisons. We provide visual comparisons of the ×8 upsampled results on Middlebury dataset in Fig. 7. It is worth noting that edges and rich details are hard to reconstruct by interpolation or simple feature concatenation. Even though CUNet (Deng and Dragotti 2020) and DKN (Kim, Ponce, and Ham 2021) can recover most boundaries, they fail to reconstruct some complex structures, such as the texture beside the pencils in Art and the boundaries of the antlers in Reindeer. In contrast, our results show sharper edges and smaller errors with respect to the ground truth. Fig. 8 shows the ×8 results on NYU-v2 dataset. The boundaries and details generated by RSAG are more accurate and do not introduce texture-copying artifacts, which demonstrates that RSAG can well recover both HF structures and LF content.
Furthermore, Fig. 9 demonstrates the good generalization ability of the proposed method on Lu dataset for ×16 DSR. Most methods tend to over-smooth the results and fail to recover the depth details when the guidance images are low-light, while our method produces more convincing results.

Figure 7: Visual comparisons of Art and Laundry on Middlebury dataset (×8 case): (a) GT and image, (b) Bicubic, (c) DJF, (d) CUNet, (e) DKN, (f) Ours, (g) GT.

Figure 8: Visual comparisons on NYU-v2 dataset (×8 case): (a) Image, (b) DKN, (c) FDSR, (d) Ours, (e) GT.

Figure 9: Visual comparisons on Lu dataset (×16 case): (a) Images, (b) DKN, (c) FDSR, (d) Ours, (e) GT.

Ablation Study
Effect of DCN and HLF modules. Table 4 reports the ablation studies on the DCN and HLF modules in our framework. As shown in the first row of Table 4, the baseline model uses a hand-designed operator for frequency-domain decomposition and direct concatenation for cross-modality feature fusion. The second row demonstrates that the proposed DCN module, which selects the HF component adaptively in a coarse-to-fine manner, significantly improves the performance over the baseline. When the HLF module is added, the average RMSE on the NYU-v2 dataset, shown in the third row, is reduced from 3.60 to 3.02, which further verifies the effectiveness of the high-quality frequency-domain separation and HF&LF feature fusion modules.
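The DCN block itself is learned end-to-end; purely as a non-learned illustration of the coarse-to-fine pixel-contrast idea behind it, one can mimic the decomposition with fixed average-pooling filters that shrink from large kernels to small ones, treating the local mean as the LF estimate and the contrast (input minus local mean) as the HF component at each scale. The function name and kernel sizes below are hypothetical, standing in for the paper's trained contrastive filters:

```python
import torch
import torch.nn.functional as F

def contrast_decompose(depth: torch.Tensor, kernels=(15, 9, 5, 3)):
    # Coarse-to-fine split: at each scale the local mean plays the LF role
    # and the pixel contrast (input minus local mean) plays the HF role.
    lf = depth
    hf_maps = []
    for k in kernels:                                  # large filters -> small
        local_mean = F.avg_pool2d(lf, k, stride=1, padding=k // 2)
        hf_maps.append(lf - local_mean)                # contrast at this scale
        lf = local_mean                                # refined LF estimate
    hf = torch.stack(hf_maps).sum(dim=0)               # aggregate HF maps
    return hf, lf                                      # hf + lf == input

depth = torch.rand(1, 1, 64, 64)
hf, lf = contrast_decompose(depth)
print(hf.shape, lf.shape)  # both torch.Size([1, 1, 64, 64])
```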
Effect of SA module. The last row in Table 4 demonstrates the effectiveness of the SA module, which iteratively utilizes the latest depth estimation to choose clear and consistent image features: it outperforms the preceding variants by a large margin. From the results of Table 4, it is observed that all the modules proposed in the RSAG framework make a positive contribution to the ultimate success of our method. To further study the impact of the recurrent steps of SA, we conduct experiments on Middlebury and NYU-v2 datasets by varying the step from 0 (w/o SA) to 4, as illustrated in Fig. 10. The method achieves better performance as the recurrent steps increase, and 2 recurrent steps obtain the best trade-off between speed and accuracy. This also shows that higher-quality depth information helps obtain a more reliable guidance structure for subsequent depth reconstruction.

Figure 10: Ablation studies of SA with different recursive steps on Middlebury (in MAD) and NYU-v2 (in RMSE) datasets (×8 case); the horizontal axis varies the recurrent steps from 0 (w/o SA) to 4.
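The SA block is likewise a learned component; the sketch below only illustrates the recurrent pattern this ablation measures, in which each step predicts an attention map from the latest depth features to re-weight the image features before fusing them back. Module, layer, and parameter names are hypothetical, with the 2-step default suggested by Fig. 10:

```python
import torch
import torch.nn as nn

class RecurrentGuidance(nn.Module):
    # Toy recurrent guidance loop: every step gates the image features with
    # an attention map predicted from the current depth features, then
    # refines the depth features from the fused result.
    def __init__(self, ch: int = 32):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1), nn.Sigmoid())
        self.refine = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, depth_feat, img_feat, steps: int = 2):
        for _ in range(steps):  # 2 steps: best speed/accuracy trade-off here
            attn = self.attention(torch.cat([depth_feat, img_feat], dim=1))
            guided = attn * img_feat          # keep clear, consistent cues
            depth_feat = depth_feat + self.refine(
                torch.cat([depth_feat, guided], dim=1))
        return depth_feat

block = RecurrentGuidance()
out = block(torch.rand(1, 32, 48, 48), torch.rand(1, 32, 48, 48))
print(out.shape)  # torch.Size([1, 32, 48, 48])
```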
Conclusion
In this paper, we proposed a novel recurrent structure attention guided (RSAG) framework for depth super-resolution. In our framework, a deep contrastive network with multi-scale filters (DCN) block was designed to adaptively decompose high-quality HF and LF components by applying contrastive networks from large kernels to small ones. In addition, by leveraging the latest depth output and the high-resolution image as guidance, we introduced a recurrent structure attention (SA) block, instead of trivial feature concatenation, to select consistent and clear image features for subsequent cross-modality fusion. Furthermore, we presented the HF&LF feature fusion block to refine the blurred edges of the LF component. Extensive experiments on various benchmark datasets demonstrated the superiority and effectiveness of the proposed framework.

Acknowledgement
This work was supported by the National Science Fund of China under Grant Nos. U1713208 and 62072242.

References
AlBahar, B.; and Huang, J.-B. 2019. Guided image-to-image translation with bi-directional feature transformation. In ICCV, 9016–9025.
Barron, J. T.; and Poole, B. 2016. The fast bilateral solver. In ECCV, 617–632.
Butler, D. J.; Wulff, J.; Stanley, G. B.; and Black, M. J. 2012. A naturalistic open source movie for optical flow evaluation. In ECCV, 611–625.
Caesar, H.; Bankiti, V.; Lang, A. H.; Vora, S.; Liong, V. E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; and Beijbom, O. 2020. nuScenes: A multimodal dataset for autonomous driving. In CVPR, 11621–11631.
Chen, B.; and Jung, C. 2018. Single depth image super-resolution using convolutional neural networks. In ICASSP, 1473–1477.
Chen, Y.; Fan, H.; Xu, B.; Yan, Z.; Kalantidis, Y.; Rohrbach, M.; Yan, S.; and Feng, J. 2019. Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution. In ICCV, 3435–3444.
Deng, X.; and Dragotti, P. L. 2020. Deep convolutional neural network for multi-modal image restoration and fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99): 1–1.
Ferstl, D.; Reinbacher, C.; Ranftl, R.; Rüther, M.; and Bischof, H. 2013. Image guided depth upsampling using anisotropic total generalized variation. In ICCV, 993–1000.
Gu, S.; Zuo, W.; Guo, S.; Chen, Y.; Chen, C.; and Zhang, L. 2017. Learning dynamic guidance for depth image enhancement. In CVPR, 3769–3778.
Guo, C.; Li, C.; Guo, J.; Cong, R.; Fu, H.; and Han, P. 2019. Hierarchical features driven residual learning for depth map super-resolution. IEEE Transactions on Image Processing, 2545–2557.
Guo, Y.; Chen, J.; Wang, J.; Chen, Q.; Cao, J.; Deng, Z.; Xu, Y.; and Tan, M. 2020. Closed-loop matters: Dual regression networks for single image super-resolution. In CVPR, 5407–5416.
Ham, B.; Cho, M.; and Ponce, J. 2017. Robust guided image filtering using nonconvex potentials. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(1): 192–207.
Hao, X.; Lu, T.; Zhang, Y.; Wang, Z.; and Chen, H. 2019. Multi-source deep residual fusion network for depth image super-resolution. In RSVT, 62–67.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 1026–1034.
He, L.; Zhu, H.; Li, F.; Bai, H.; Cong, R.; Zhang, C.; Lin, C.; Liu, M.; and Zhao, Y. 2021. Towards fast and accurate real-world depth super-resolution: Benchmark dataset and baseline. In CVPR, 9229–9238.
Hirschmuller, H.; and Scharstein, D. 2007. Evaluation of cost functions for stereo matching. In CVPR, 1–8.
Hou, J.; Dai, A.; and Nießner, M. 2019. 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In CVPR, 4421–4430.
Hui, T.-W.; Loy, C. C.; and Tang, X. 2016. Depth map super-resolution by deep multi-scale guidance. In ECCV, 353–369.
Kim, B.; Ponce, J.; and Ham, B. 2021. Deformable kernel networks for joint image filtering. International Journal of Computer Vision, 129(2): 579–600.
Li, Y.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2016. Deep joint image filtering. In ECCV, 154–169.
Li, Y.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2019. Joint image filtering with deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8): 1909–1923.
Lu, S.; Ren, X.; and Liu, F. 2014. Depth enhancement via low-rank matrix completion. In CVPR, 3390–3397.
Lutio, R. d.; D'aronco, S.; Wegner, J. D.; and Schindler, K. 2019. Guided super-resolution as pixel-to-pixel transformation. In ICCV, 8829–8837.
Makarov, I.; Aliev, V.; and Gerasimova, O. 2017. Semi-dense depth interpolation using deep convolutional neural networks. In ACMMM, 1407–1415.
Meuleman, A.; Baek, S.-H.; Heide, F.; and Kim, M. H. 2020. Single-shot monocular RGB-D imaging using uneven double refraction. In CVPR, 2465–2474.
Pan, J.; Dong, J.; Ren, J. S.; Lin, L.; Tang, J.; and Yang, M.-H. 2019. Spatially variant linear representation models for joint filtering. In CVPR, 1702–1711.
Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012. Indoor segmentation and support inference from RGBD images. In ECCV, 746–760.
Song, X.; Dai, Y.; Zhou, D.; Liu, L.; Li, W.; Li, H.; and Yang, R. 2020. Channel attention based iterative residual learning for depth map super-resolution. In CVPR, 5631–5640.
Su, H.; Jampani, V.; Sun, D.; Gallo, O.; Learned-Miller, E.; and Kautz, J. 2019. Pixel-adaptive convolutional neural networks. In CVPR, 11166–11175.
Sun, B.; Ye, X.; Li, B.; Li, H.; Wang, Z.; and Xu, R. 2021. Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution. In CVPR, 7792–7801.
Tang, Q.; Cong, R.; Sheng, R.; He, L.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Kwong, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and Monocular Depth Esti- mation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In ACMMM, 2148–2157.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Wang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yan, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Xu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Regularizing nighttime weirdness: Efficient self-supervised monocular depth estimation in the dark.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In ICCV, 16055–16064.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Wen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Sheng, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Lin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Feng, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Deep color guided coarse-to-fine convolutional network cas- cade for depth image super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' IEEE Transactions on Image Processing, 28(2): 994–1006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Woo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Park, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Lee, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Kweon, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Cbam: Convolutional block attention module.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In ECCV, 3–19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Xiao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Cao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Zheng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Joint convolutional neural pyramid for depth map super- resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' arXiv preprint arXiv:1801.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='00968.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Xu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yin, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Lau, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Learn- ing to restore low-light images via decomposition-and- enhancement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In CVPR, 2281–2290.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yan, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Wang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Learning Complementary Correlations for Depth Super-Resolution With Incomplete Data in Real World.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' IEEE Transactions on Neural Networks and Learning Sys- tems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Fan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zheng, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Lei, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Depth Map Super-Resolution via Multilevel Recur- sive Guidance and Progressive Supervision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' IEEE Access, 7: 57616–57622.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Lan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Song, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Li, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Depth super- resolution via fully edge-augmented guidance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In VCIP, 1– 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Ye, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Duan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Li, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Depth super-resolution with deep edge-inference network and edge-guided depth filling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In ICASSP, 1398–1402.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Ye, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Sun, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Xu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Li, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Pmbanet: Progressive multi-branch aggregation net- work for scene depth super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' IEEE Transactions on Image Processing, 29: 7427–7442.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Goodfellow, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Metaxas, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Odena, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Self-attention generative adversarial networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In ICML, 7354–7363.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhong, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Jiang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Chen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Ji, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' High-resolution Depth Maps Imaging via Attention- based Hierarchical Multi-modal Fusion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='01530.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhu, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zhai, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Cao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Zha, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Co-occurrent structural edge detection for color-guided depth map super- resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' In MMM, 93–105.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zuo, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Fang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Shang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Wang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Residual dense network for intensity-guided depth map en- hancement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Information Sciences, 495: 52–64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Zuo, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Wu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Fang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' An, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Huang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' and Chen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' Multi-scale frequency reconstruction for guided depth map super-resolution via deep residual network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'} +page_content=' IEEE Transactions on Circuits and Systems for Video Technology, 30(2): 297–306.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFQT4oBgHgl3EQf1TYk/content/2301.13419v1.pdf'}