Inpainting of Sparse Depth Maps from Monocular Depth-from-Focus on Pixel Processor Arrays
Abstract
Depth estimation is essential for robotics and effective navigation. While many recent methods estimate dense depth maps from a single RGB image or from an RGB image combined with sparse depth measurements, our work leverages the in-pixel computing capabilities of a pixel processor array (PPA), combined with an electrically tunable liquid lens, to capture semi-sparse depth maps via a depth-from-focus approach. We consider the problem of reconstructing dense depth maps from such measurements. We simulate a PPA-based depth-from-focus algorithm on a synthetic focal stack derived from a monocular RGB-D dataset, demonstrating competitive dense depth map reconstruction from depth frames containing as few as 10% non-zero pixels at 5-bit depth resolution. Furthermore, we improve semi-sparse depth completion by fusing the PPA-captured depth cues with concurrently acquired RGB images. We also investigate depth completion via belief propagation, which permits highly localized, parallel computation without access to global memory and is therefore well suited to PPAs. We evaluate these algorithms on semi-sparse depth reconstruction tasks.