TY - CONF
T1 - Direct Camera Pose Tracking and Mapping With Signed Distance Functions
AU - Bylow, Erik
AU - Sturm, Jürgen
AU - Kerl, Christian
AU - Kahl, Fredrik
AU - Cremers, Daniel
N1 - RGB-D 2013 was held in conjunction with Robotics: Science and Systems (RSS) Conference 2013.
PY - 2013
Y1 - 2013
N2 - In many areas, the ability to create accurate 3D models is of great interest, for example, in computer vision, robotics, architecture, and augmented reality. In this paper we show how a textured indoor environment can be reconstructed in 3D using an RGB-D camera. Real-time performance can be achieved using a GPU. We show how the camera pose can be estimated directly using the geometry that we represent as a signed distance function (SDF). Since the SDF contains information about the distance to the surface, it defines an error metric which is minimized to estimate the pose of the camera. By iteratively estimating the camera pose and integrating the new depth images into the model, the 3D reconstruction is computed on the fly. We present several examples of 3D reconstructions made from a handheld and a robot-mounted depth sensor, including detailed reconstructions of medium-sized rooms with almost drift-free pose estimation. Furthermore, we demonstrate that our algorithm is robust enough for 3D reconstruction using data recorded from a quadrocopter, making it potentially useful for navigation applications.
M3 - Paper, not in proceeding
T2 - RGB-D Workshop on Advanced Reasoning with Depth Cameras (RGB-D 2013)
Y2 - 27 June 2013
ER -