diff --git "a/JtAzT4oBgHgl3EQfIPso/content/tmp_files/load_file.txt" "b/JtAzT4oBgHgl3EQfIPso/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/JtAzT4oBgHgl3EQfIPso/content/tmp_files/load_file.txt" @@ -0,0 +1,888 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf,len=887 +page_content='BS3D: Building-scale 3D Reconstruction from RGB-D Images Janne Mustaniemi1, Juho Kannala2, Esa Rahtu3, Li Liu1, and Janne Heikkilä1 1 Center for Machine Vision and Signal Analysis, University of Oulu, Finland 2 Department of Computer Science, Aalto University, Finland 3 Tampere University, Finland janne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='mustaniemi@oulu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='fi Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Various datasets have been proposed for simultaneous local- ization and mapping (SLAM) and related problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Existing datasets often include small environments, have incomplete ground truth, or lack important sensor data, such as depth and infrared images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Unlike complex and expensive acquisi- tion setups, our system enables crowd-sourcing, which can greatly bene- fit data-hungry algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Compared to similar systems, we utilize raw depth maps for odometry computation and loop closure refinement which results in better reconstructions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' We acquire a building-scale 3D dataset (BS3D) and demonstrate its value by training an improved monocular depth estimation model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' As a unique experiment, we benchmark visual- inertial odometry methods using both color and active infrared images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Keywords: Depth camera · SLAM · Large-scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 1 Introduction Simultaneous localization and mapping (SLAM) is an essential component in robot navigation, virtual reality (VR), and augmented reality (AR) systems.' 
Various datasets and benchmarks have been proposed for SLAM [11,35,39] and related problems, including visual-inertial odometry [30,6], camera re-localization [29,32,15], and depth estimation [21,33]. Currently, there exist only a few building-scale SLAM datasets [28] that include ground truth camera poses and dense 3D geometry. Such datasets enable, for example, the evaluation of algorithms needed in large-scale AR applications. The lack of building-scale SLAM datasets is explained by the difficulty of acquiring ground truth data. Some have utilized a high-end LiDAR for obtaining the 3D geometry of the environment [26,2,4,28]. Ground truth camera poses may be acquired using a motion capture (MoCap) system when the environment is small enough [35,40]. The high cost of equipment, complex sensor setup, and slow capturing process make these approaches less attractive and inconvenient for crowd-sourced data collection.

An alternative is to perform 3D reconstruction using a monocular, stereo, or depth camera. Consumer RGB-D cameras, in particular, are interesting because of their relatively good accuracy, fast acquisition speed, low cost, and effectiveness in textureless environments. RGB-D cameras have been used to collect datasets for depth estimation [21,33], scene understanding [8], and camera re-localization [32,38], among other tasks. The problem is that existing RGB-D reconstruction systems
(e.g. [22,9,5]) are limited to room-scale and apartment-scale environments.

Synthetic SLAM datasets that include perfect ground truth have also been proposed [20,39,27]. The challenge is that data such as time-of-flight (ToF) depth maps and infrared images are difficult to synthesize realistically. Consequently, training and evaluation done using synthetic data may not reflect an algorithm's real-world performance. To address the domain gap problem, algorithms are often fine-tuned using real data.

We propose a framework to create building-scale 3D reconstructions using a consumer depth camera (Azure Kinect). Unlike existing approaches, we register color images and depth maps using a color-to-depth (C2D) strategy. This allows us to directly utilize the raw depth maps captured by the wide field-of-view (FoV) infrared camera. Coupled with an open-source SLAM library [19], we acquire a building-scale 3D vision dataset (BS3D) that is considerably larger than similar datasets, as shown in Fig. 1. The BS3D dataset includes 392k synchronized color images, depth maps and infrared images, inertial measurements, camera poses, enhanced depth maps, surface reconstructions, and laser scans. Our framework will be released to the public to enable fast, easy and affordable indoor 3D reconstruction.

Fig. 1. Building-scale 3D reconstruction (4300 m²) obtained using an RGB-D camera and the proposed framework.
The magnified area (90 m²) is larger than any reconstruction in the ScanNet dataset [8].

2 Related work

This section introduces commonly used RGB-D SLAM datasets and the corresponding data acquisition processes. A summary of the datasets is provided in Table 1. As there exist countless SLAM datasets, the scope is restricted to real-world indoor scenarios. We leave out datasets focusing on aerial scenarios (e.g. EuRoC MAV [2]) and autonomous driving (e.g. KITTI [11]). We also omit RGB-D datasets captured with a stationary scanner (e.g. Matterport3D [4]) as they cannot be used for SLAM evaluation. Synthetic datasets, such as SceneNet RGB-D [20], TartanAir [39], and ICL [27], are also omitted.

The ADVIO [6] dataset is a realistic visual-inertial odometry benchmark that includes building-scale environments. The ground truth trajectory is computed using an inertial navigation system (INS) together with manual location fixes. The main limitation of the dataset is that it does not come with ground truth 3D geometry.

LaMAR [28] is a large-scale SLAM benchmark that utilizes high-end mapping platforms (NavVis M6 trolley and VLX backpack) for ground truth generation.
Although the capturing setup includes a variety of devices (e.g. HoloLens2 and iPad Pro), it does not include a dedicated RGB-D camera.

OpenLORIS-Scene [31] focuses on the lifelong SLAM scenario where environments are dynamic and changing, similar to LaMAR [28]. The data is collected over an extended period of time using wheeled robots equipped with various sensors, including RGB-D, stereo, IMU, wheel odometry, and LiDAR. Ground truth poses are acquired using an external motion capture (MoCap) system, or with a 2D laser SLAM method. The dataset is not ideal for handheld SLAM evaluation because of the limited motion patterns of a ground robot.

TUM RGB-D SLAM [35] is one of the most popular SLAM datasets. The RGB-D images are acquired using a consumer depth camera (Microsoft Kinect v1). The ground truth trajectory is incomplete because the MoCap system can only cover a small part of the environment. CoRBS [40] consists of four room-scale environments. It also utilizes MoCap for acquiring ground truth trajectories for the Microsoft Kinect v2. Unlike [35], CoRBS provides ground truth 3D geometry acquired using a laser scanner. The data also includes infrared images, but not inertial measurements, unlike our dataset.

7-Scenes [32] and 12-Scenes [38] are commonly used for evaluating camera localization. 7-Scenes includes seven scenes captured using Kinect v1.
KinectFusion [22] is used to obtain ground truth poses and dense 3D models from the RGB-D images. 12-Scenes consists of multiple rooms captured using the Structure.io depth sensor and an iPad color camera. The reconstructions are larger compared to 7-Scenes, about 37 m³ on average. They are acquired using the VoxelHashing framework [23], an alternative to KinectFusion with better scalability.

ScanNet [8] is an RGB-D dataset containing 2.5M views acquired in 707 distinct spaces. It includes estimated calibration parameters, camera poses, 3D surface reconstructions, textured meshes, and object-level semantic segmentations. The hardware consists of a Structure.io depth sensor attached to a tablet computer. Pose estimation is done using BundleFusion [9], after which volumetric integration is performed through VoxelHashing [23].

Sun3D [43] is a large RGB-D database with camera poses, point clouds, object labels, and refined depth maps. The reconstruction process is based on structure from motion (SfM), where manual object annotations are utilized to reduce drift and loop-closure failures.
Refined depth maps are obtained via volumetric fusion similar to KinectFusion [22]. We emphasize that ScanNet [8] and Sun3D [43] reconstructions are considerably smaller and have lower quality than those provided in our dataset. Unlike [28,31,35], our system also does not require a complex and expensive capturing setup, or manual annotation [6,43].

Table 1. List of indoor RGB-D SLAM datasets. The BS3D acquisition setup does not require high-end LiDARs [40,31,28], MoCap systems [40,31,36], or manual annotation [43,6]. BS3D is building-scale, unlike [32,36,8,40,38,43]. Note that ADVIO [6] and LaMAR [28] do not have a dedicated depth camera.

Dataset          Scale      Depth           IMU  IR  Ground truth
7-Scenes [32]    room       Kinect v1                RGBD-recons.
TUM RGBD [36]    room       Kinect v1       ✓        MoCap
ScanNet [8]      room       Structure.io    ✓        RGBD-recons.
CoRBS [40]       room       Kinect v2            ✓   MoCap+LiDAR
12-Scenes [38]   apartment  Structure.io             RGBD-recons.
Sun3D [43]       apartment  Xtion Pro Live           RGBD+manual
OpenLORIS [31]   building   RS-D435i        ✓        MoCap+LiDAR
ADVIO [6]        building   Tango           ✓        INS+manual
LaMAR [28]       building   HoloLens2       ✓    ✓   LiDAR+VIO+SfM
BS3D (ours)      building   Azure Kinect    ✓    ✓   RGBD-recons.

3 Reconstruction framework

In this section, we introduce the RGB-D reconstruction framework shown in Fig. 2. The framework produces accurate 3D reconstructions of building-scale environments using low-cost hardware. The system is fully automatic and robust against poor lighting conditions and fast motions.
Color images are only used for loop closure detection as they are susceptible to motion blur and rolling shutter distortion. Raw depth maps enable accurate odometry and the refinement of loop closure transformations.

3.1 Hardware

Data is captured using the Azure Kinect depth camera, which is well-suited for crowd-sourcing due to its popularity and affordability. We capture synchronized depth, color, and infrared images at 30 Hz using the official recorder application running on a laptop computer. We use the wide FoV mode of the infrared camera with 2x2 binning to extend the Z-range. The resolution of the raw depth maps and IR images is 512 x 512 pixels. Auto-exposure is enabled when capturing color images at a resolution of 720 x 1280 pixels. We also record accelerometer and gyroscope readings at 1.6 kHz.

Fig. 2. Overview of the RGB-D reconstruction system.

3.2 Color-to-depth alignment

Most RGB-D reconstruction systems expect that color images and depth maps have been spatially and temporally aligned. Modern depth cameras typically produce temporally synchronized images, so the main concern is the spatial alignment.
Conventionally, raw depth maps are transformed to the coordinate system of the color camera, which we refer to as the depth-to-color (D2C) alignment. In the case of Azure Kinect, the color camera's FoV is much narrower (90 x 59 degrees) compared to the infrared camera (120 x 120 degrees). Thus, the D2C alignment would not take advantage of the infrared camera's wide FoV because the depth maps would be heavily cropped. Moreover, the D2C alignment might introduce artefacts to the raw depth maps.

We propose an alternative called color-to-depth (C2D) alignment where the color images are transformed instead. In the experiments, we show that this drastically improves the quality of the reconstructions. The main challenge of C2D is that it requires a fully dense depth map. Fortunately, a reasonably good alignment can be achieved even with a low-quality depth map. This is because the baseline between the cameras is narrow and missing depths often appear in areas that are far away from the camera.

For the C2D alignment, we first perform depth inpainting using linear interpolation. Then, the color image is transformed to the raw depth frame. To keep as much of the color information as possible, the output resolution is higher (1024 x 1024 pixels) than that of the raw depth maps. After that, holes in the color image due to occlusions are inpainted using the OpenCV library's implementation of [37]. We note that minor artefacts in the aligned color images have little impact on the SIFT-based loop closure detection.
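The steps above can be summarized with the following minimal sketch. It is an illustration only, not the actual implementation: the intrinsics K_d and K_c, the depth-to-color transform T_dc, and the use of SciPy/OpenCV for interpolation and hole inpainting are assumptions, whereas the real system relies on the Azure Kinect factory calibration.

    import cv2
    import numpy as np
    from scipy.interpolate import griddata

    def inpaint_depth(depth):
        # Fill missing depth values (zeros) by linear interpolation over the grid.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        valid = depth > 0
        return griddata((ys[valid], xs[valid]), depth[valid],
                        (ys, xs), method="linear", fill_value=0.0).astype(np.float32)

    def color_to_depth(color, depth, K_d, K_c, T_dc, scale=2):
        # Warp the color image into the raw depth frame at 'scale' times the
        # raw depth resolution (e.g. 512 x 512 -> 1024 x 1024).
        depth_f = inpaint_depth(depth)
        h, w = depth_f.shape
        H, W = h * scale, w * scale
        depth_up = cv2.resize(depth_f, (W, H), interpolation=cv2.INTER_NEAREST)
        K = K_d.astype(np.float64).copy()
        K[:2] *= scale                      # intrinsics of the upsampled grid
        ys, xs = np.mgrid[0:H, 0:W]
        z = depth_up.reshape(-1)
        # Back-project every depth pixel to a 3D point in the depth camera frame.
        pts = np.linalg.inv(K) @ np.vstack([xs.reshape(-1) * z,
                                            ys.reshape(-1) * z, z])
        # Move the points to the color frame and project with the color intrinsics.
        pts_c = T_dc[:3, :3] @ pts + T_dc[:3, 3:4]
        uv = K_c @ pts_c
        uv = (uv[:2] / np.maximum(uv[2], 1e-6)).astype(np.float32)
        aligned = cv2.remap(color, uv[0].reshape(H, W), uv[1].reshape(H, W),
                            cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
        # Inpaint occlusion holes left by the warp (cf. [37]).
        holes = (aligned.sum(axis=2) == 0).astype(np.uint8)
        return cv2.inpaint(aligned, holes, 3, cv2.INPAINT_TELEA)

Upsampling the target grid before the warp keeps more of the color detail, which matches the higher 1024 x 1024 output resolution mentioned above.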
3.3 RGB-D Mapping

We process the RGB-D sequences using an open-source SLAM library called RTAB-Map [19]. Odometry is computed from the raw depth maps using the point-to-plane variant of the iterative closest point (ICP) algorithm [25]. We use the scan-to-map odometry strategy [19], where incoming frames are registered against a point cloud map created from past keyframes. The wide FoV ensures that ICP odometry rarely fails, but in case it does, a new map is initialized.

Loop closure detection is needed for drift correction and for merging individual maps. For this purpose, SIFT features are extracted from the valid area of the aligned color images. Loop closures are detected using the bag-of-words approach [18], and transformations are estimated using the Perspective-n-Point (PnP) RANSAC algorithm and refined using ICP [25]. Graph optimization is done using the GTSAM library [10] and the Gauss-Newton algorithm.
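As an illustration of this two-stage registration, the sketch below estimates a coarse relative pose from 2D-3D SIFT matches with PnP-RANSAC and then refines it with point-to-plane ICP. It is a minimal example under assumed conventions, not the RTAB-Map implementation: the function name, the matched-feature inputs, and the use of OpenCV and Open3D are illustrative choices.

    import cv2
    import numpy as np
    import open3d as o3d

    def estimate_loop_closure(pts3d, pts2d, K, cloud_src, cloud_dst, voxel=0.05):
        # 1) Coarse relative pose from 2D-3D feature matches (PnP + RANSAC).
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
            reprojectionError=3.0, iterationsCount=300)
        if not ok or inliers is None or len(inliers) < 20:
            return None                    # reject weak loop closure candidates
        T0 = np.eye(4)
        T0[:3, :3] = cv2.Rodrigues(rvec)[0]
        T0[:3, 3] = tvec.ravel()

        # 2) Refinement with point-to-plane ICP on the raw depth point clouds.
        src = cloud_src.voxel_down_sample(voxel)
        dst = cloud_dst.voxel_down_sample(voxel)
        for pc in (src, dst):
            pc.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        reg = o3d.pipelines.registration
        result = reg.registration_icp(
            src, dst, max_correspondence_distance=2 * voxel, init=T0,
            estimation_method=reg.TransformationEstimationPointToPlane())
        return result.transformation

In the actual pipeline, the accepted transformation becomes a constraint in the pose graph, which is then optimized with GTSAM and Gauss-Newton as described above.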
RTAB-Map supports multi-session mapping, which is a necessary feature when reconstructing building-scale environments. It is not practical to collect possibly hours of data at once. Furthermore, having the ability to later update and expand the map is a useful feature. In practice, individual sequences are first processed separately, followed by multi-session mapping. The sessions are merged by finding loop closures and by performing graph optimization. The input is a sequence of keyframes along with the odometry poses and SIFT features computed during single-session mapping. The sessions are processed in such an order that there is at least some overlap between the current session and the global map built so far.

3.4 Surface reconstruction

It is often useful to have a 3D surface representation of the environment. There exist many classical [14,22] and learning-based [41,1] surface reconstruction approaches. Methods that utilize deep neural networks, such as NeuralFusion [41], have produced impressive results on the task of depth map fusion. Neural radiance fields (NeRFs) have also been adapted to RGB-D imagery [1], showing good performance. We did not use learning-based approaches in this work because they are limited to small scenes, at least for the time being. Moreover, scene-specific learning [1] takes several hours even with powerful hardware.

Surface reconstruction is done in segments due to the large scale of the environment and the vast number of frames. To that end, we first create a point cloud from downsampled raw depth maps. Every point includes a view index along with its 3D coordinates. The point cloud is partitioned into manageable segments using the K-means algorithm. A mesh is created for each segment using the scalable TSDF fusion implementation [46] that is based on [7,22]. It uses a hierarchical hashing structure to support large scenes.
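A rough sketch of this segment-wise fusion is given below. For simplicity it clusters keyframe positions instead of individual points, and it uses Open3D's ScalableTSDFVolume as a stand-in for the scalable TSDF implementation of [46]; the file layout, voxel size, and truncation distance are assumptions made for illustration.

    import numpy as np
    import open3d as o3d
    from sklearn.cluster import KMeans

    def fuse_segments(frames, intrinsic, poses, n_segments=8):
        # frames: list of (color_path, depth_path); poses: list of 4x4 camera-to-world.
        # Partition keyframes by position so that each segment stays manageable.
        positions = np.array([T[:3, 3] for T in poses])
        labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(positions)

        meshes = []
        for seg in range(n_segments):
            vol = o3d.pipelines.integration.ScalableTSDFVolume(
                voxel_length=0.02, sdf_trunc=0.08,
                color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
            for idx in np.where(labels == seg)[0]:
                color = o3d.io.read_image(frames[idx][0])
                depth = o3d.io.read_image(frames[idx][1])
                rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
                    color, depth, depth_scale=1000.0, depth_trunc=5.0,
                    convert_rgb_to_intensity=False)
                # TSDF integration expects world-to-camera extrinsics.
                vol.integrate(rgbd, intrinsic, np.linalg.inv(poses[idx]))
            meshes.append(vol.extract_triangle_mesh())
        return meshes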
4 BS3D dataset

The BS3D dataset was collected at the university campus using the Azure Kinect (Section 3.1). Figure 3 shows example frames from the dataset. The collection was done in multiple sessions due to the large scale of the environment. The recordings were processed using the framework described in Section 3.

4.1 Dataset features

The reconstruction shown in Fig. 1 consists of 47 overlapping recording sessions. An additional 14 sessions, including 3D laser scans, were recorded for evaluation purposes.

Fig. 3. Example frames from the dataset. Environments are diverse and challenging, including cafeterias, stairs, study areas, corridors, and lobbies.

Most sessions begin and end at the same location to encourage loop closure detection. The total duration of the sessions is 3 hours and 38 minutes, and the combined trajectory length is 6.4 kilometers. The reconstructed floor area is approximately 4300 m². The dataset consists of 392k frames, including color images, raw depth maps, and infrared images.
Color images and depth maps are provided in both coordinate frames (color and infrared camera). The images have been undistorted for convenience, but the original recordings are also included. We provide camera poses in a global reference frame for every image. The data also includes inertial measurements, enhanced depth maps, and surface normals that have been rendered from the mesh, as visualized in Fig. 4.

Fig. 4. The BS3D dataset includes color and infrared images, depth maps, IMU data, camera parameters, and surface reconstructions. Enhanced depth maps and surface normals are rendered from the mesh.

4.2 Laser scan

We utilize the FARO 3D X 130 laser scanner for acquiring ground truth 3D geometry. The scanned area includes a lobby and corridors of different sizes (800 m²). The 28 individual scans were registered using the SCENE software that comes with the laser scanner. Noticeable artefacts, e.g. those caused by mirrors, were manually removed.
The laser scan is used to evaluate the reconstruction framework in Section 5. However, this data also enables, for example, the training and evaluation of RGB-D surface reconstruction algorithms.

5 Experiments

We compare our framework with state-of-the-art RGB-D reconstruction methods [5,9,3]. The value of the BS3D dataset is demonstrated by training a recent monocular depth estimation model [44]. We also benchmark visual-inertial odometry approaches [12,34,3] using either color or infrared images to further highlight the unique aspects of the BS3D dataset.

5.1 Reconstruction framework

In this experiment, we compare the framework against Redwood [5], BundleFusion [9], and ORB-SLAM3 [3]. RGB-D images are provided for [5,9,3] in the coordinate frame of the color camera. Given the estimated camera poses, we create a point cloud and compare it to the laser scan (Section 4.2). The evaluation metrics are the overlap of the point clouds and the RMSE of inlier correspondences. Before comparison, we create uniformly sampled point clouds using voxel downsampling (1 cm³ voxels) that computes the centroid of the points in each voxel. The overlap is defined as the ratio between the number of inlier correspondences and the number of ground truth points. A 3D point is considered an inlier if its distance to the closest ground truth point is below a threshold γ.
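The metric computation can be expressed compactly, for instance with Open3D as in the sketch below; the function and variable names are illustrative, and the 2 cm threshold is just one example value of γ.

    import numpy as np
    import open3d as o3d

    def overlap_and_rmse(reconstruction, ground_truth, gamma=0.02, voxel=0.01):
        # Uniformly resample both clouds with 1 cm^3 voxels (centroid per voxel).
        rec = reconstruction.voxel_down_sample(voxel)
        gt = ground_truth.voxel_down_sample(voxel)
        # Distance from each reconstructed point to its closest ground truth point.
        dists = np.asarray(rec.compute_point_cloud_distance(gt))
        inliers = dists[dists < gamma]
        overlap = 100.0 * len(inliers) / len(gt.points)
        rmse = float(np.sqrt(np.mean(inliers ** 2))) if len(inliers) else float("nan")
        return overlap, rmse

With point clouds loaded via o3d.io.read_point_cloud, the thresholds γ = 10, 20, 50 mm used in Table 2 correspond to gamma values of 0.01, 0.02, and 0.05 meters.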
Table 2 shows the results for environments of different sizes. All methods are able to reconstruct the small environment (35 m²) consisting of 2.8k frames. The differences between the methods become more evident when reconstructing the medium-size environment (160 m²) consisting of 7.3k frames. BundleFusion [9] only produces a partial reconstruction because of odometry failures. The proposed approach gives the most accurate reconstructions, as visualized in Fig. 5. Note that it is not possible to achieve 100 % overlap because the depth camera does not observe all parts of the ground truth. The largest environment (350 m²) consists of 24k frames acquired in four sessions. Redwood [5] does not scale to input sequences this long. ORB-SLAM3 [3] frequently loses the odometry in open spaces, which leads to incomplete and less accurate reconstructions. Our method suffers from the same problem when C2D is disabled. The unreliable odometry is likely due to the color camera's limited FoV, rolling shutter distortion, and motion blur. The C2D alignment improves the accuracy and robustness of ICP-based odometry and loop closures. Without C2D, the frequent odometry failures result in disconnected maps and noticeable drift. We note that the reconstruction in Fig. 1 was computed from ~300k frames, which is far more than [5,9,3] can handle.
Comparison of RGB-D reconstruction methods in small, medium and large-scale environments (from top to bottom). Overlap of the point clouds and inlier RMSE computed for distance thresholds γ (mm). Some methods only work in small and/or medium scale environments.

                      γ = 10 mm          γ = 20 mm          γ = 50 mm
Method                Overlap ↑  RMSE ↓  Overlap ↑  RMSE ↓  Overlap ↑  RMSE ↓
Small scale
Redwood [5]           66.5       5.6     77.9       7.6     87.1       12.6
BundleFusion [9]      72.1       5.5     80.8       6.9     88.3       11.7
ORB-SLAM3 [3]         78.2       5.3     85.2       6.5     91.3       10.6
Prop. (w/o C2D)       66.8       5.7     77.8       7.5     87.0       12.7
Proposed              78.4       5.2     85.7       6.5     91.6       10.6
Medium scale
Redwood [5]           30.4       6.2     44.5       9.8     63.9       19.9
BundleFusion [9]      8.1        6.2     11.1       9.2     14.8       18.8
ORB-SLAM3 [3]         44.3       6.0     57.7       8.7     71.0       16.2
Prop. (w/o C2D)       36.5       6.1     49.2       9.0     64.3       18.3
Proposed              54.1       5.7     64.8       7.7     73.2       13.4
Large scale
ORB-SLAM3 [3]         9.5        6.3     14.4       9.9     20.8       20.7
Prop. (w/o C2D)       23.1       6.7     40.6       10.9    64.7       22.4
Proposed              34.7       6.4     52.7       10.0    75.0       19.8
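For reference, the overlap and inlier RMSE above can be computed from nearest-neighbour distances between the reconstructed and ground-truth point clouds. The following is a minimal sketch (not the exact evaluation script used for the table), assuming both clouds are given as N x 3 arrays in millimetres and already expressed in the same coordinate frame:

import numpy as np
from scipy.spatial import cKDTree

def overlap_and_inlier_rmse(recon_pts, gt_pts, gamma):
    # Distance from every reconstructed point to its nearest ground-truth point.
    dists, _ = cKDTree(gt_pts).query(recon_pts)
    inliers = dists < gamma                       # points within the threshold γ
    overlap = 100.0 * inliers.mean()              # overlap as a percentage
    rmse = np.sqrt(np.mean(dists[inliers] ** 2))  # RMSE over inliers only (mm)
    return overlap, rmse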
Fig. 5. Reconstructions obtained using Redwood [5], ORB-SLAM3 [3], and the proposed method (with and without C2D). Colors depict errors ϵ (distance to the closest ground truth point), binned as ϵ < 20 mm, 20 ≤ ϵ < 50, 50 ≤ ϵ < 100, 100 ≤ ϵ < 200, and ϵ ≥ 200 mm.

5.2 Depth estimation

We investigate whether the BS3D dataset can be used to train better models for monocular depth estimation. For this experiment, we use the state-of-the-art LeReS model [44] based on ResNet50. The original model has been trained using 354k samples taken from various datasets [45,24,16,13,42].
We finetune the model using 16.5k samples from BS3D. We set the learning rate to 2e-5 and train only 4 epochs to avoid overfitting. Other training details, including the loss functions, are the same as in [44]. For testing, we use NYUD-v2 [21] and iBims-1 [17], which are not seen during training. We also evaluate on BS3D by sampling 535 images from an unseen part of the building. Table 3 shows that finetuning improves the performance on iBims-1 and BS3D. The finetuned model performs marginally worse on NYUD-v2, which is not surprising considering that NYUD-v2 mainly contains room-scale scenes that are not present in BS3D. The qualitative comparison in Fig. 6 also shows a clear improvement over the pretrained model on iBims-1, which contains both small and large scenes. The model trained only on BS3D cannot compete with the other models, except on BS3D itself, where its performance is surprisingly good. The poor performance on the other datasets is not surprising given the small training set.
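A schematic of the finetuning recipe described above is sketched below; the model, loss, and data loader are placeholders rather than the actual LeReS training code, the optimizer choice is an assumption, and only the stated hyperparameters (learning rate 2e-5, 4 epochs) are taken from the experiment:

import torch

def finetune(model, criterion, train_loader, device='cuda', epochs=4, lr=2e-5):
    # Generic supervised finetuning loop; the loss functions follow [44].
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                        # few epochs to avoid overfitting
        for rgb, depth_gt in train_loader:         # BS3D color/depth training pairs
            rgb, depth_gt = rgb.to(device), depth_gt.to(device)
            optimizer.zero_grad()
            loss = criterion(model(rgb), depth_gt)
            loss.backward()
            optimizer.step()
    return model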
Table 3. Monocular depth estimation using LeReS [44] trained from scratch using BS3D, the pretrained model, and the finetuned model. NYUD-v2 [21], iBims-1 [17], and BS3D are used for testing.

                 NYUD-v2 [21]       iBims-1 [17]       BS3D
Training data    AbsRel ↓   δ1 ↑    AbsRel ↓   δ1 ↑    AbsRel ↓   δ1 ↑
BS3D             0.181      0.764   0.188      0.763   0.144      0.828
Pretrained       0.096      0.913   0.115      0.890   0.157      0.785
Pre. + BS3D      0.100      0.907   0.098      0.901   0.115      0.881

Fig. 6. Comparison of the pretrained and finetuned (BS3D) monocular depth estimation model LeReS [44] on the independent iBims-1 [17] dataset, unseen during training. Columns show the color input, the pretrained prediction, the finetuned prediction, and the ground truth.
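The AbsRel and δ1 values in Table 3 follow the standard definitions: AbsRel is the mean absolute relative error and δ1 is the fraction of pixels whose ratio to the ground truth is below 1.25. A minimal sketch is given below; any scale/shift alignment of the predictions is omitted for brevity:

import numpy as np

def depth_metrics(d_pred, d_gt):
    valid = d_gt > 0                                   # evaluate only valid ground-truth pixels
    p, g = d_pred[valid], d_gt[valid]
    abs_rel = np.mean(np.abs(p - g) / g)               # mean absolute relative error
    delta1 = np.mean(np.maximum(p / g, g / p) < 1.25)  # accuracy under threshold 1.25
    return abs_rel, delta1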
5.3 Visual-inertial odometry

The BS3D dataset includes active infrared images along with color and IMU data. This opens interesting possibilities, for example, the comparison of color and infrared as inputs for visual-inertial odometry. Infrared-inertial odometry is an attractive approach in the sense that it does not require external light, meaning it would work in completely dark environments. We evaluate OpenVINS [12], ORB-SLAM3 [3], and DM-VIO [34] using color-inertial and infrared-inertial inputs. Note that ORB-SLAM3 has an unfair advantage because it has a loop closure detector that cannot be disabled. In the case of infrared images, we apply a power-law transformation (I = 0.04 · I^0.6) to increase brightness. As supported by [34], we provide a mask of valid pixels to ignore black areas near the edges of the infrared images. We adjust the parameters related to feature detection when using infrared images with [12,3].
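The infrared preprocessing described above can be sketched as follows; the input/output value ranges and the validity threshold are assumptions, as they are not specified in the text:

import numpy as np

def preprocess_infrared(ir, valid_threshold=1):
    # Power-law brightening I = 0.04 * I^0.6 applied to the raw infrared image.
    ir = ir.astype(np.float32)
    out = np.clip(0.04 * np.power(ir, 0.6), 0, 255).astype(np.uint8)  # assumed 8-bit output
    # Valid-pixel mask ignoring the black areas near the image edges.
    mask = (ir >= valid_threshold).astype(np.uint8)
    return out, mask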
We use the standard error metrics, namely absolute trajectory error (ATE) and relative pose error (RPE), which measures the drift per second. The methods are evaluated 5 times on each of the 10 sequences (Table 4). From the results in Table 5, we can see that ORB-SLAM3 has the lowest ATE when evaluating color-inertial odometry, mainly because of loop closure detection. In most cases, ORB-SLAM3 and OpenVINS fail to initialize when using infrared images. We conclude that off-the-shelf feature detectors (FAST and ORB) are quite poor at detecting good features in infrared images. Interestingly, DM-VIO performs better when using infrared images instead of color, which is likely due to the infrared camera's global shutter and wider FoV. This result reveals the great potential of using active infrared images for visual-inertial odometry and the need for new research.

Table 4. Evaluation sequences used in the visual-inertial odometry experiment. The last column shows whether the camera returns to the starting point (chance for a loop closure).

Sequence   Duration (s)  Length (m)  Dimensions (m)      Loop
cafeteria  200           90.0        12.4 x 15.7 x 0.8   ✓
central    242           155.0       25.5 x 42.1 x 5.3   ✓
dining     192           109.2       33.8 x 25.0 x 5.5   ✓
corridor   174           77.6        31.1 x 4.7 x 2.4    ✓
foobar     75            37.1        5.4 x 14.4 x 0.6    ✓
hub        124           52.3        11.4 x 5.9 x 0.7
juice      103           42.7        6.3 x 8.6 x 0.5
lounge     222           94.2        14.4 x 10.3 x 1.1   ✓
study      87            40.0        5.6 x 9.8 x 0.6
waiting    139           60.1        9.8 x 6.7 x 0.9     ✓
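For completeness, the ATE and RPE metrics reported in Table 5 can be computed as in the minimal sketch below. Trajectories are assumed to be dictionaries mapping timestamps (sampled so that t and t + 1 s exist in both trajectories) to 4x4 camera-to-world poses, already associated and, for ATE, rigidly aligned to the ground truth; the association and alignment steps are omitted, and the rotational RPE (deg/s) would be computed analogously from the angle of the relative rotation error:

import numpy as np

def ate_rmse(poses_est, poses_gt):
    # RMSE of translational differences after trajectory alignment.
    errs = [np.linalg.norm(poses_est[t][:3, 3] - poses_gt[t][:3, 3]) for t in poses_gt]
    return np.sqrt(np.mean(np.square(errs)))

def rpe_translation_per_second(poses_est, poses_gt, dt=1.0):
    # Translational drift (m/s) over intervals of dt seconds.
    errs = []
    for t in poses_gt:
        if t + dt not in poses_gt or t + dt not in poses_est:
            continue
        rel_gt = np.linalg.inv(poses_gt[t]) @ poses_gt[t + dt]
        rel_est = np.linalg.inv(poses_est[t]) @ poses_est[t + dt]
        err = np.linalg.inv(rel_gt) @ rel_est      # relative-pose error over the interval
        errs.append(np.linalg.norm(err[:3, 3]) / dt)
    return float(np.mean(errs))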
Table 5. Comparison of visual-inertial odometry methods using color-inertial and infrared-inertial inputs. Average absolute trajectory error (ATE) and relative pose error (RPE). The last column of each block shows the percentage of successful runs.

                 Color-inertial odometry                           Infrared-inertial odometry
Method           ATE ↓ (m)  RPE ↓ (deg/s)  RPE ↓ (m/s)  Succ. ↑ (%)  ATE ↓ (m)  RPE ↓ (deg/s)  RPE ↓ (m/s)  Succ. ↑ (%)
OpenVINS [12]    0.347      0.37           0.031        76.0         0.597      0.42           0.057        36.0
ORB-SLAM3 [3]    0.298      0.29           0.026        100.0        0.193      0.29           0.025        24.0
DM-VIO [34]      0.491      0.29           0.033        100.0        0.433      0.29           0.025        100.0

6 Conclusion

We presented a framework for acquiring high-quality 3D reconstructions using a consumer depth camera. The ability to produce building-scale reconstructions is a significant improvement over existing methods that are limited to smaller environments such as rooms or apartments. The proposed C2D alignment enables the use of raw depth maps, resulting in more accurate 3D reconstructions. Our approach is fast, easy to use, and requires no expensive hardware, making it ideal for crowd-sourced data collection. We acquire a building-scale 3D dataset (BS3D) and demonstrate its value for monocular depth estimation. BS3D is also unique because it includes active infrared images, which are often missing in other datasets. We employ infrared images for visual-inertial odometry, discovering a promising new research direction.

References
1. Azinović, D., Martin-Brualla, R., Goldman, D.B., Nießner, M., Thies, J.: Neural RGB-D surface reconstruction. In: Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6290–6301 (2022)
2. Burri, M., Nikolic, J., Gohl, P., Schneider, T., Rehder, J., Omari, S., Achtelik, M.W., Siegwart, R.: The EuRoC micro aerial vehicle datasets. The International Journal of Robotics Research 35(10), 1157–1163 (2016)
3. Campos, C., Elvira, R., Rodríguez, J.J.G., Montiel, J.M., Tardós, J.D.: ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Transactions on Robotics 37(6), 1874–1890 (2021)
4. Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A., Zhang, Y.: Matterport3D: Learning from RGB-D data in indoor environments. arXiv preprint arXiv:1709.06158 (2017)
5. Choi, S., Zhou, Q.Y., Koltun, V.: Robust reconstruction of indoor scenes. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5556–5565 (2015)
6. Cortés, S., Solin, A., Rahtu, E., Kannala, J.: ADVIO: An authentic dataset for visual-inertial odometry. In: European Conference on Computer Vision (ECCV). pp. 419–434 (2018)
7. Curless, B., Levoy, M.: A volumetric method for building complex models from range images. In: Conference on Computer Graphics and Interactive Techniques. pp. 303–312 (1996)
8. Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 5828–5839 (2017)
9. Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., Theobalt, C.: BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (ToG) 36(4), 1 (2017)
10. Dellaert, F.: Factor graphs and GTSAM: A hands-on introduction. Tech. rep., Georgia Institute of Technology (2012)
11. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research 32(11), 1231–1237 (2013)
12. Geneva, P., Eckenhoff, K., Lee, W., Yang, Y., Huang, G.: OpenVINS: A research platform for visual-inertial estimation. In: International Conference on Robotics and Automation (ICRA). pp. 4666–4672. IEEE (2020)
13. Hua, Y., Kohli, P., Uplavikar, P., Ravi, A., Gunaseelan, S., Orozco, J., Li, E.: Holopix50k: A large-scale in-the-wild stereo image dataset. arXiv preprint arXiv:2003.11172 (2020)
14. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson surface reconstruction. In: Proceedings of the Fourth Eurographics Symposium on Geometry Processing. vol. 7 (2006)
15. Kendall, A., Grimes, M., Cipolla, R.: PoseNet: A convolutional network for real-time 6-DoF camera relocalization. In: IEEE International Conference on Computer Vision (ICCV). pp. 2938–2946 (2015)
16. Kim, Y., Jung, H., Min, D., Sohn, K.: Deep monocular depth estimation via integration of global and local predictions. IEEE Transactions on Image Processing 27(8), 4131–4144 (2018)
17. Koch, T., Liebel, L., Fraundorfer, F., Korner, M.: Evaluation of CNN-based single-image depth estimation methods. In: European Conference on Computer Vision (ECCV) Workshops (2018)
18. Labbe, M., Michaud, F.: Appearance-based loop closure detection for online large-scale and long-term operation. IEEE Transactions on Robotics 29(3), 734–745 (2013)
19. Labbé, M., Michaud, F.: RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics 36(2), 416–446 (2019)
20. McCormac, J., Handa, A., Leutenegger, S., Davison, A.J.: SceneNet RGB-D: Can 5M synthetic images beat generic ImageNet pre-training on indoor segmentation? In: IEEE International Conference on Computer Vision (ICCV). pp. 2678–2687 (2017)
21. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: European Conference on Computer Vision (ECCV) (2012)
22. Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohi, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-time dense surface mapping and tracking. In: IEEE International Symposium on Mixed and Augmented Reality. pp. 127–136. IEEE (2011)
23. Nießner, M., Zollhöfer, M., Izadi, S., Stamminger, M.: Real-time 3D reconstruction at scale using voxel hashing. ACM Transactions on Graphics (ToG) 32(6), 1–11 (2013)
24. Niklaus, S., Mai, L., Yang, J., Liu, F.: 3D Ken Burns effect from a single image. ACM Transactions on Graphics (ToG) 38(6), 1–15 (2019)
25. Pomerleau, F., Colas, F., Siegwart, R., Magnenat, S.: Comparing ICP variants on real-world data sets. Autonomous Robots 34(3), 133–148 (2013)
26. Ramakrishnan, S.K., Gokaslan, A., Wijmans, E., Maksymets, O., Clegg, A., Turner, J., Undersander, E., Galuba, W., Westbury, A., Chang, A.X., et al.: Habitat-Matterport 3D Dataset (HM3D): 1000 large-scale 3D environments for embodied AI. arXiv preprint arXiv:2109.08238 (2021)
27. Saeedi, S., Carvalho, E.D., Li, W., Tzoumanikas, D., Leutenegger, S., Kelly, P.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Davison, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' : Characterizing visual localization and mapping datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: Inter- national Conference on Robotics and Automation (ICRA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 6699–6705.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2019) 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Sarlin, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Dusmanu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Schönberger, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Speciale, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Gruber, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Larsson, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Miksik, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Pollefeys, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': LaMAR: Benchmarking localization and mapping for augmented reality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: European Conference on Computer Vision (ECCV).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 686–704.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Springer (2022) 29.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Sattler, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Maddern, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Toft, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Torii, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Hammarstrand, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Stenborg, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Safari, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Okutomi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Pollefeys, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Sivic, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' : Benchmarking 6dof out- door visual localization in changing conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 8601–8610 (2018) 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Schubert, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Goll, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Demmel, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Usenko, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Stückler, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Cremers, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': The TUM VI benchmark for evaluating visual-inertial odometry.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE/RSJ Inter- national Conference on Intelligent Robots and Systems (IROS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 1680–1687.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2018) 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Shi, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Li, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Zhao, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Tian, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Tian, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Long, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Zhu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Song, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Qiao, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Song, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' : Are we ready for service robots?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' the OpenLORIS-scene datasets for lifelong SLAM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE International Conference on Robotics and Automation (ICRA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 3139–3145.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2020) 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Shotton, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Glocker, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Zach, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Izadi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Criminisi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Fitzgibbon, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Scene coordinate regression forests for camera relocalization in RGB-D images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 2930–2937 (2013) 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Song, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Lichtenberg, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Xiao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Sun RGB-D: A RGB-D scene understanding benchmark suite.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE Conference on Computer Vision and Pattern Recogni- tion (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 567–576 (2015) 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' von Stumberg, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Cremers, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': DM-VIO: Delayed marginalization visual-inertial odometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE Robotics and Automation Letters 7(2), 1408–1415 (2022) 35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Sturm, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Engelhard, N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Endres, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Burgard, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Cremers, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': A benchmark for the evaluation of RGB-D SLAM systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: International Conference on Intelli- gent Robot Systems (IROS) (Oct 2012) 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Sturm, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Engelhard, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Endres, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Burgard, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Cremers, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': A benchmark for the evaluation of RGB-D SLAM systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE/RSJ International Conference on Intelligent Robots and Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 573–580.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2012) 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Telea, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': An image inpainting technique based on the fast marching method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Journal of graphics tools 9(1), 23–34 (2004) 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Valentin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Dai, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Nießner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Kohli, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Torr, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Izadi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Keskin, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Learning to navigate the energy landscape.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: Fourth International Conference on 3D Vision (3DV).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 323–332.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2016) 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Wang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Zhu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Hu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Qiu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Hu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Kapoor, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Scherer, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': TartanAir: A dataset to push the limits of visual SLAM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE/RSJ BS3D: Building-scale 3D Reconstruction from RGB-D Images 15 International Conference on Intelligent Robots and Systems (IROS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 4909– 4916.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2020) 40.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Wasenmüller, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Meyer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Stricker, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': CoRBS: Comprehensive RGB-D bench- mark for SLAM using Kinect v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE Winter Conference on Applications of Computer Vision (WACV).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 1–7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' IEEE (2016) 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Weder, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Schonberger, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Pollefeys, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Oswald, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' : NeuralFusion: Online depth fusion in latent space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: Conference on Computer Vision and Pattern Recognition (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 3162–3172 (2021) 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Xian, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Wang, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Mai, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Lin, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Cao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Structure-guided ranking loss for single image depth prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 611–620 (2020) 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Xiao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Owens, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Torralba, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Sun3D: A database of big spaces reconstructed using SfM and object labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: Proceedings of the IEEE international conference on computer vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 1625–1632 (2013) 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Yin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Wang, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Niklaus, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Mai, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Shen, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Learning to recover 3d scene shape from a single image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: Conference on Computer Vision and Pattern Recognition (CVPR) (2021) 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Zamir, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Sax, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Shen, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Guibas, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Malik, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Savarese, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Taskonomy: Disentangling task transfer learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' 3712–3722 (2018) 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' Zhou, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Park, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=', Koltun, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=': Open3D: A modern library for 3D data process- ing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content=' arXiv preprint arXiv:1801.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'} +page_content='09847 (2018)' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/JtAzT4oBgHgl3EQfIPso/content/2301.01057v1.pdf'}