ObjectMeshDeform: Towards recovering precise 3D geometry of real objects via image-guided mesh deformation of 3D generative priors
Abstract
3D generative models that synthesize high-fidelity 3D assets from single-view or multi-view images cannot recover the precise geometry and real-world measurements needed for practical applications. Conversely, multi-view 3D reconstruction methods based on structure from motion, implicit surfaces, or Gaussian splatting fail to recover high-fidelity object meshes with the dense geometry, shape regularity, and smooth surfaces of real-world objects. In this paper, we propose a novel approach that leverages a 3D mesh prior synthesized by generative models pre-trained on large-scale synthetic 3D datasets. Our method refines the initial mesh geometry by enforcing multi-view consistency, improving geometric accuracy without any additional training data and without degrading mesh surface quality. It automatically reconstructs meshes of objects in real-world scenes directly from images, requiring no large-scale training data or manual input.
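To make the high-level idea concrete, the sketch below illustrates one plausible form of image-guided mesh refinement: per-vertex offsets on a generative-prior mesh are optimized so that differentiable renderings agree with observed multi-view object masks, while a Laplacian term preserves surface smoothness. This is only a minimal illustration, not the paper's implementation; the renderer (`render_fn`), the use of silhouette masks as the multi-view signal, and all loss weights are assumptions introduced here for exposition.

```python
# Minimal sketch (assumed, not the paper's method): refine a generative-prior
# mesh so its renderings match captured multi-view masks. The differentiable
# renderer is supplied by the caller (e.g. PyTorch3D or nvdiffrast);
# `render_fn`, `target_masks`, and the weights are illustrative assumptions.
import torch


def uniform_laplacian(verts: torch.Tensor, faces: torch.Tensor) -> torch.Tensor:
    """Per-vertex uniform Laplacian: neighbor average minus the vertex itself."""
    n = verts.shape[0]
    # Undirected edges derived from the triangle list.
    e = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], dim=0)
    i = torch.cat([e[:, 0], e[:, 1]])
    j = torch.cat([e[:, 1], e[:, 0]])
    neighbor_sum = torch.zeros_like(verts).index_add_(0, i, verts[j])
    degree = torch.zeros(n, dtype=verts.dtype, device=verts.device).index_add_(
        0, i, torch.ones(i.shape[0], dtype=verts.dtype, device=verts.device))
    return neighbor_sum / degree.clamp(min=1).unsqueeze(-1) - verts


def refine_mesh(verts, faces, cameras, target_masks, render_fn,
                steps=500, lr=1e-3, w_smooth=10.0):
    """Optimize per-vertex offsets so rendered silhouettes match target_masks."""
    offsets = torch.zeros_like(verts, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        v = verts + offsets
        # Multi-view consistency: compare differentiable silhouette renderings
        # of the deformed mesh against observed object masks in every view.
        pred = render_fn(v, faces, cameras)            # (num_views, H, W)
        data_loss = torch.nn.functional.mse_loss(pred, target_masks)
        # Smoothness regularizer: penalize the uniform Laplacian so the
        # refinement preserves the prior mesh's surface quality.
        smooth_loss = uniform_laplacian(v, faces).pow(2).sum(-1).mean()
        loss = data_loss + w_smooth * smooth_loss
        loss.backward()
        opt.step()
    return (verts + offsets).detach()
```

In this reading, the generative model supplies a clean, well-tessellated initial mesh, and the optimization only nudges its vertices toward the observed views, which is why no additional training data is required.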