News
Progressive Refinement Imaging
Markus Kluge1, Tim Weyrich2, Andreas Kolb1: Progressive Refinement Imaging with Depth-Assisted Disparity Correction. Computers & Graphics, 2023
Abstract: In recent years, the increasing on-board compute power of mobile camera devices has given rise to a class of digitization algorithms that dynamically fuse a stream of camera observations into a progressively updated scene representation. Previous algorithms either obtain general 3D surface representations, often exploiting range maps from a depth camera, such as KinectFusion; or they reconstruct planar (or distant spherical, respectively) 2D images with respect to a single (perspective or orthographic) reference view, such as panoramic stitching or aerial mapping. Our work sets out to combine aspects of both, reconstructing a 2.5-D representation (color and depth) as seen from a fixed viewpoint, at spatially variable resolution. Inspired by previous work on “progressive refinement imaging”, we propose a hierarchical representation that enables progressive refinement of both colors and depths by ingesting RGB-D images from a handheld depth camera that is carried through the scene. We evaluate our system by comparing it against state-of-the-art methods in 2D progressive refinement and 3D scene reconstruction, using high-detail indoor and outdoor data sets comprising medium to large disparities. As we will show, the restriction to 2.5-D from a fixed viewpoint affords added robustness (particularly against self-localization drift, as well as backprojection errors near silhouettes), increased geometric and photometric fidelity, as well as greatly improved storage efficiency, compared to more general 3D reconstructions. We envision that our representation will enable scene exploration with realistic parallax from within a constrained range of vantage points, including stereo pair generation, visual surface inspection, or scene presentation within a fixed VR viewing volume.
1 University of Siegen
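The core operation behind such a fixed-viewpoint 2.5-D representation is warping each incoming RGB-D frame into the reference view. The sketch below illustrates the standard pinhole backprojection-and-splat step with a per-pixel depth test; it is not the paper's method, and all function and parameter names (`reproject_to_reference`, `K_ref`, etc.) are hypothetical.

```python
import numpy as np

def reproject_to_reference(color, depth, K, K_ref, R, t, ref_shape):
    """Splat one RGB-D frame into a fixed reference view (2.5-D buffer).

    color: (H, W, 3) image, depth: (H, W) metric depth in the source camera,
    K / K_ref: 3x3 intrinsics, [R | t]: source-to-reference rigid motion.
    Returns reference-view color and depth, z-buffered per pixel.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project source pixels to 3D points in the source camera frame.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the reference frame and project with its intrinsics.
    pts_ref = R @ pts + t.reshape(3, 1)
    z = pts_ref[2]
    valid = z > 1e-6
    proj = K_ref @ pts_ref
    x = np.round(proj[0] / z).astype(int)
    y = np.round(proj[1] / z).astype(int)
    Hr, Wr = ref_shape
    valid &= (x >= 0) & (x < Wr) & (y >= 0) & (y < Hr)
    ref_color = np.zeros((Hr, Wr, 3), color.dtype)
    ref_depth = np.full((Hr, Wr), np.inf)
    src = color.reshape(-1, 3)
    # Nearest surface wins: splat samples back to front so near overwrites far.
    order = np.argsort(-z[valid])
    xi, yi = x[valid][order], y[valid][order]
    ref_color[yi, xi] = src[valid][order]
    ref_depth[yi, xi] = z[valid][order]
    return ref_color, ref_depth
```

In this simple nearest-pixel splatting, disocclusions leave holes in the reference buffer; handling such gaps, silhouette errors, and variable resolution is precisely where the progressive, hierarchical refinement described in the abstract goes beyond this sketch.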
Markus Kluge1, Tim Weyrich2, Andreas Kolb1: Progressive Refinement Imaging. Computer Graphics Forum (CGF), 2020
A sample result of our progressive refinement imaging pipeline applied to a data set (left) comprising one reference image that is refined using six additional images captured with six different cameras over a period of 10 years. Compared to prior work (middle), our method (right) successfully generates photometrically and geometrically consistent results in an online and memory-efficient fashion without global optimization.
Abstract: This paper presents a novel technique for progressive online integration of uncalibrated image sequences with substantial geometric and/or photometric discrepancies into a single, geometrically and photometrically consistent image. Our approach can handle large sets of images acquired from a nearly planar or infinitely distant scene at different resolutions in object domain and under variable local or global illumination conditions. It allows for efficient user guidance, as its progressive nature provides a valid and consistent reconstruction at any moment during the online refinement process. Our approach avoids global optimization techniques, as commonly used in the field of image refinement, and progressively incorporates new imagery into a dynamically extendable and memory-efficient Laplacian pyramid. Our image registration process includes a coarse homography and a local refinement stage using optical flow. Photometric consistency is achieved by retaining the photometric intensities given in a reference image while it is being refined. Globally blurred imagery and local geometric inconsistencies due to, e.g., motion are detected and removed prior to image fusion. We demonstrate the quality and robustness of our approach using several image and video sequences, including handheld acquisition with mobile phones and zooming sequences with consumer cameras.
1 University of Siegen
Photo: "Herculaneum" by Johnboy Davidson is licensed under CC BY-NC 2.0
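The abstract's fusion backbone is a Laplacian pyramid: each level stores the band-pass detail lost when moving to the next-coarser resolution, so new imagery can refine individual frequency bands independently. The sketch below shows only the textbook build/collapse pair, using a plain 2x2 box downsample and nearest-neighbour upsample in place of proper Gaussian filtering, and without the paper's dynamically extendable, memory-efficient structure; the function names are hypothetical.

```python
import numpy as np

def build_laplacian(img, levels):
    """Decompose an image into a Laplacian pyramid (image sides must be
    divisible by 2**levels in this simplified sketch)."""
    pyr = []
    cur = img.astype(float)
    for _ in range(levels):
        # 2x2 box downsample, then nearest-neighbour upsample back.
        down = cur.reshape(cur.shape[0] // 2, 2,
                           cur.shape[1] // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        pyr.append(cur - up)   # band-pass detail at this resolution
        cur = down
    pyr.append(cur)            # coarsest low-frequency residual
    return pyr

def collapse_laplacian(pyr):
    """Invert build_laplacian: upsample and add the details back in."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + detail
    return cur
```

Because each level stores the exact residual of its own down/up round trip, the collapse reconstructs the input exactly; progressive refinement then amounts to selectively overwriting detail coefficients at the levels a newly registered image actually resolves.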