3D Reconstruction
As real-time range cameras (Time-of-Flight and Kinect cameras) become increasingly available on the market, new 3D applications arise, such as scene understanding, augmented reality, and human-computer interaction. Recently, a cheap 3D reconstruction method using the Kinect camera has been proposed, but its extension to ToF cameras has not yet been demonstrated and brings further challenges due to ToF camera properties (e.g., low resolution and complex noise behavior).
This research focuses on developing fast 3D geometry reconstruction based on a simple point representation that allows adaptive resolution and workspace. Our method uses the iterative closest point (ICP) algorithm to track the camera motion (expressing all data in the same reference frame), fuses all similar data into a single vertex list (the model), and visualizes this model using surfel rendering. A surfel is composed of a 3D position (center), an orientation (normal), and a size (radius). Note that the data fusion module assumes a simple Gaussian distribution.
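A minimal sketch of the surfel representation and the Gaussian data fusion described above; the type names and the scalar confidence weight are illustrative assumptions, not the actual system's interface:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

struct Surfel {
    Vec3  center;      // 3D position
    Vec3  normal;      // orientation
    float radius;      // size, derived from the local sample spacing
    float confidence;  // accumulated measurement weight
};

// Fuse a new measurement into an existing surfel: a confidence-weighted
// running average, i.e. the maximum-likelihood update under a simple
// isotropic Gaussian noise model.
void fuse(Surfel& s, const Vec3& p, const Vec3& n, float r, float w) {
    float t = w / (s.confidence + w);   // relative weight of the new sample
    s.center = lerp(s.center, p, t);
    s.normal = lerp(s.normal, n, t);    // renormalize (assumes non-opposing normals)
    float len = std::sqrt(s.normal.x * s.normal.x + s.normal.y * s.normal.y
                        + s.normal.z * s.normal.z);
    s.normal = { s.normal.x / len, s.normal.y / len, s.normal.z / len };
    s.radius += t * (r - s.radius);
    s.confidence += w;
}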
Damien Lefloch¹, Markus Kluge¹, Hamed Sarbolandi¹, Tim Weyrich², Andreas Kolb¹: Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2017
Abstract: Interactive real-time scene acquisition from hand-held depth cameras has recently developed much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation, as well as seen across the online reconstruction pipeline as a whole.
¹ University of Siegen   ² University College London
Web Page | Paper | Video | Bibtex
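The curvature weighting can be illustrated by a single per-correspondence ICP weight. This is a deliberately reduced sketch of the idea behind densely curvature-weighted ICP, not the paper's exact formulation; the names, the normal threshold, and the Gaussian falloff are assumptions chosen for illustration:

#include <cmath>

struct PointFeature {
    float px, py, pz;   // position
    float nx, ny, nz;   // unit normal
    float curvature;    // mean curvature estimated from the range image alone
};

// Weight for one ICP correspondence: pairs whose curvature estimates
// disagree contribute less to the alignment.
float icpPairWeight(const PointFeature& src, const PointFeature& dst,
                    float sigmaCurv /* assumed curvature noise scale */) {
    // Reject pairs with strongly disagreeing normals (standard ICP pruning).
    float cosAngle = src.nx * dst.nx + src.ny * dst.ny + src.nz * dst.nz;
    if (cosAngle < 0.7f) return 0.0f;

    // Down-weight pairs whose curvatures disagree, with a Gaussian falloff.
    float d = src.curvature - dst.curvature;
    return std::exp(-(d * d) / (2.0f * sigmaCurv * sigmaCurv));
}

Because curvature is computed from the range scan itself, this weight needs no color data and hence no multi-sensor alignment.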
Since no volumetric data is used, our method can be easily extended to dynamic environments. Dynamic objects are detected using the matching quality measure provided by the ICP algorithm. A GPU-based region-growing method is then applied to the outlier map and the input data, with respect to depth and normal similarities, in order to fully segment dynamic objects.
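The segmentation step can be sketched as breadth-first region growing on the CPU (the actual method runs on the GPU); the seeds, thresholds, and image layout below are illustrative assumptions:

#include <cmath>
#include <cstdint>
#include <queue>
#include <vector>

struct Pixel { float depth; float nx, ny, nz; };

// Grow dynamic regions from ICP outlier seeds over pixels whose depth and
// normals are similar, so that the full extent of a moving object is marked.
std::vector<std::uint8_t> segmentDynamic(const std::vector<Pixel>& img,
                                         const std::vector<std::uint8_t>& outlier,
                                         int w, int h,
                                         float depthTol = 0.05f,   // meters
                                         float normalTol = 0.9f) { // min cos angle
    std::vector<std::uint8_t> dynamic(img.size(), 0);
    std::queue<int> frontier;
    for (int i = 0; i < w * h; ++i)
        if (outlier[i]) { dynamic[i] = 1; frontier.push(i); }  // seed pixels

    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    while (!frontier.empty()) {
        int i = frontier.front(); frontier.pop();
        int x = i % w, y = i / w;
        for (int k = 0; k < 4; ++k) {
            int u = x + dx[k], v = y + dy[k];
            if (u < 0 || v < 0 || u >= w || v >= h) continue;
            int j = v * w + u;
            if (dynamic[j]) continue;
            const Pixel &a = img[i], &b = img[j];
            float cosN = a.nx * b.nx + a.ny * b.ny + a.nz * b.nz;
            if (std::fabs(a.depth - b.depth) < depthTol && cosN > normalTol) {
                dynamic[j] = 1;      // similar neighbor: part of the same object
                frontier.push(j);
            }
        }
    }
    return dynamic;
}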
Maik Keller¹, Damien Lefloch¹, Martin Lambers¹, Shahram Izadi², Tim Weyrich³, Andreas Kolb¹: Real-time 3D Reconstruction in Dynamic Scenes using Point-based Fusion. International Conference on 3D Vision (3DV), 2013
Abstract: Real-time or online 3D reconstruction has wide applicability and receives further interest due to the availability of consumer depth cameras. Typical approaches use a moving sensor to accumulate depth measurements into a single model which is continuously refined. Designing such systems is an intricate balance between reconstruction quality, speed, spatial scale, and scene assumptions. Existing online methods either trade scale to achieve higher-quality reconstructions of small objects/scenes, or handle larger scenes by trading real-time performance and/or quality, or by limiting the bounds of the active reconstruction. Additionally, many systems assume a static scene and cannot robustly handle scene motion or reconstructions that evolve to reflect scene changes. We address these limitations with a new system for real-time dense reconstruction with equivalent quality to existing online methods, but with support for additional spatial scale and robustness in dynamic scenes. Our system is designed around a simple and flat point-based representation, which directly works with the input acquired from range/depth sensors, without the overhead of converting between representations. The use of points enables speed and memory efficiency, directly leveraging the standard graphics pipeline for all central operations; i.e., camera pose estimation, data association, outlier removal, fusion of depth maps into a single denoised model, and detection and update of dynamic objects. We conclude with qualitative and quantitative results that highlight robust tracking and high-quality reconstructions of a diverse set of scenes at varying scales.
¹ University of Siegen   ² Microsoft Research   ³ University College London
Paper | Video | Bibtex
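One central operation named in the abstract, data association, is commonly realized projectively in point-based pipelines: each model point is projected into the current depth map with the estimated camera pose to find its candidate measurement. The following is a sketch under assumed conventions (row-major world-to-camera rotation, pinhole intrinsics); it is not the system's actual interface:

#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
struct Intrinsics { float fx, fy, cx, cy; };
struct Pose { float R[9]; Vec3 t; };  // world-to-camera rotation and translation

// Returns the pixel index a model point projects to, or nothing if the
// projection falls outside the image; the caller then checks depth/normal
// compatibility before fusing or flagging an outlier.
std::optional<int> associate(const Vec3& p, const Pose& T,
                             const Intrinsics& K, int w, int h) {
    Vec3 c = { T.R[0]*p.x + T.R[1]*p.y + T.R[2]*p.z + T.t.x,
               T.R[3]*p.x + T.R[4]*p.y + T.R[5]*p.z + T.t.y,
               T.R[6]*p.x + T.R[7]*p.y + T.R[8]*p.z + T.t.z };
    if (c.z <= 0.0f) return std::nullopt;  // behind the camera
    int u = static_cast<int>(std::lround(K.fx * c.x / c.z + K.cx));
    int v = static_cast<int>(std::lround(K.fy * c.y / c.z + K.cy));
    if (u < 0 || v < 0 || u >= w || v >= h) return std::nullopt;
    return v * w + u;
}

Because every model point is handled independently, this lookup maps directly onto the standard graphics pipeline, which is what makes the flat point representation fast.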
Additionally, a new framework has been developed in order to account for the noise characteristics of different depth cameras. The noise is modelled as an anisotropic reliability matrix that is fused on the fly and used during geometry accumulation. We show that taking the depth camera's noise characteristics into account during the accumulation step improves the reconstruction quality for simulated data.
Damien Lefloch¹, Tim Weyrich², Andreas Kolb¹: Anisotropic Point-Based Fusion. International Conference on Information Fusion (FUSION), 2015
Abstract: We propose a new real-time framework which efficiently reconstructs large-scale scenery by accumulating anisotropic point representations in combination with a memory-efficient representation of point attributes. The reduced memory footprint allows us to store additional point properties that represent the accumulated anisotropic noise of the input range data in the reconstructed scene. We propose an efficient processing scheme for the extended and compressed point attributes that does not obstruct real-time reconstruction. Furthermore, we evaluate the positive impact of the anisotropy handling on the data accumulation and the 3D reconstruction quality.
¹ University of Siegen   ² University College London
Paper | Bibtex
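Fusing anisotropic Gaussian estimates is conveniently expressed in information (inverse-covariance) form: Σ_new = (Σ₁⁻¹ + Σ₂⁻¹)⁻¹ and μ_new = Σ_new(Σ₁⁻¹μ₁ + Σ₂⁻¹μ₂). The sketch below shows this standard update only; the paper's compressed attribute storage and GPU processing are not reproduced here, and the matrix types are illustrative:

#include <array>

using Mat3 = std::array<float, 9>;  // row-major 3x3 covariance
using Vec3 = std::array<float, 3>;

// Inverse via the adjugate; assumes a well-conditioned covariance.
static Mat3 inverse(const Mat3& m) {
    float a = m[0], b = m[1], c = m[2],
          d = m[3], e = m[4], f = m[5],
          g = m[6], h = m[7], i = m[8];
    float A = e * i - f * h, B = c * h - b * i, C = b * f - c * e;
    float inv = 1.0f / (a * A + d * B + g * C);
    return { A * inv, B * inv, C * inv,
             (f * g - d * i) * inv, (a * i - c * g) * inv, (c * d - a * f) * inv,
             (d * h - e * g) * inv, (b * g - a * h) * inv, (a * e - b * d) * inv };
}

static Vec3 mulv(const Mat3& m, const Vec3& v) {
    return { m[0]*v[0] + m[1]*v[1] + m[2]*v[2],
             m[3]*v[0] + m[4]*v[1] + m[5]*v[2],
             m[6]*v[0] + m[7]*v[1] + m[8]*v[2] };
}

// Fuse two Gaussian estimates (mu1, S1) and (mu2, S2) of the same point;
// the result overwrites the first estimate.
void fuseAnisotropic(Vec3& mu1, Mat3& S1, const Vec3& mu2, const Mat3& S2) {
    Mat3 I1 = inverse(S1), I2 = inverse(S2);
    Mat3 Isum;
    for (int k = 0; k < 9; ++k) Isum[k] = I1[k] + I2[k];
    S1 = inverse(Isum);                            // fused covariance
    Vec3 a = mulv(I1, mu1), b = mulv(I2, mu2);
    mu1 = mulv(S1, { a[0] + b[0], a[1] + b[1], a[2] + b[2] });  // fused mean
}

Measurements with large uncertainty along a given direction thus contribute little along that direction, which is exactly what an isotropic confidence weight cannot express.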
The project was a collaboration with Microsoft Research Cambridge and University College London.
This research has been partly funded by the German Research Foundation (DFG), grant GRK-1564 "Imaging New Modalities", and by the FP7 EU collaborative project BEAMING (248620).
In a joint state-of-the-art report, we analyze recent developments in RGB-D scene reconstruction in detail and review essential related work.
Michael Zollhöfer¹, Patrick Stotko², Andreas Görlitz³, Christian Theobalt⁴, Matthias Nießner⁵, Reinhard Klein², Andreas Kolb³: State of the Art on 3D Reconstruction with RGB-D Cameras. Computer Graphics Forum (Eurographics STAR), 2018
Abstract: The advent of affordable consumer-grade RGB-D cameras has brought about a profound advancement of visual scene reconstruction methods. Both computer graphics and computer vision researchers spend significant effort to develop entirely new algorithms to capture comprehensive shape models of static and dynamic scenes with RGB-D cameras. This led to significant advances of the state of the art along several dimensions. Some methods achieve very high reconstruction detail, despite limited sensor resolution. Others even achieve real-time performance, yet possibly at lower quality. New concepts were developed to capture scenes at larger spatial and temporal extent. Other recent algorithms flank shape reconstruction with concurrent material and lighting estimation, even in general scenes and unconstrained conditions. In this state-of-the-art report, we analyze these recent developments in RGB-D scene reconstruction in detail and review essential related work. We explain, compare, and critically analyze the common underlying algorithmic concepts that enabled these recent advancements. Furthermore, we show how algorithms are designed to best exploit the benefits of RGB-D data while suppressing their often non-trivial data distortions. In addition, this report identifies and discusses important open research questions and suggests relevant directions for future work.
¹ Stanford University   ² University of Bonn   ³ University of Siegen   ⁴ Max Planck Institute for Informatics   ⁵ Technical University of Munich
Paper | Bibtex