Keynote

Sparse View Synthesis

Ravi Ramamoorthi


Abstract:

We seek the ability to take a few images of a scene of interest and turn them into an immersive visual experience, one that can be explored from different viewpoints, in effect visualizing a 3D representation of an object, scene, or photograph, with numerous applications in augmented reality, e-commerce, and 3D photography. This problem, known as view synthesis or image-based rendering in computer vision and graphics, has a history of more than three decades and is currently undergoing a renaissance, with new representations of 3D geometry enabling unparalleled realism. We discuss some of this history in terms of capturing the light field (the space of light rays over all spatial positions and viewing directions), and our own work on a sampling theory for view synthesis based on light fields, which led to the development of volumetric radiance fields as a fundamentally new approach to representing 3D geometry for view synthesis. We also discuss parallels to Monte Carlo and volumetric rendering and simulation problems in computer graphics. We then ask how far we can reduce the required number of images in order to achieve sparse view synthesis from very few images, in the limit only a single photograph. In this context, we also discuss our recent results on a number of applications, including real-time live portraits, generative AI for 3D scenes, and differentiable light transport for inverse rendering.