Sketch3R: From Rapid and Realistic 3D VR Sketch Creation to Shape Retrieval
Abstract
Large 3D shape repositories are expanding rapidly, driven by advances in generative modeling, making efficient shape retrieval increasingly important for authoring tools. While text queries capture high-level semantics, they often fail to convey precise geometric detail. 3D sketches provide a more expressive means of representing shape geometry, and recent AR/VR developments have made sketch-based retrieval practical. However, existing 3D sketch datasets face three major limitations: (1) reliance on quad meshes or voxel hulls, which often fail on complex or non-manifold shapes; (2) use of fixed-size point clouds that discard stroke connectivity and topology, limiting both geometric fidelity and the quality of retrieval models trained on them; and (3) dependence on expensive curve-based or multi-view rendering pipelines, which hinder large-scale data generation. To address these challenges, we propose Sketch3R, a scalable framework that converts arbitrary 3D meshes into human-like VR sketches using a graph-based representation that preserves stroke connectivity and adapts to sketch complexity. Leveraging this representation, Sketch3R employs a lightweight graph-attention Siamese network for efficient and accurate sketch-to-shape retrieval. Experiments demonstrate that our method outperforms prior approaches in both accuracy and speed, while robustly handling 3D shapes across diverse topologies.
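To make the graph-attention Siamese retrieval idea concrete, the sketch below shows one minimal way such a model could be assembled, assuming PyTorch and PyTorch Geometric. It is an illustrative reconstruction, not the authors' implementation: the layer sizes, the shared-encoder design, and the triplet objective are all assumptions.

```python
# Hypothetical sketch of a graph-attention Siamese retrieval model.
# Assumes both VR sketches and shapes are given as graphs with per-node
# features (e.g., 3D stroke-point coordinates) and connectivity edges.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool


class GraphAttentionEncoder(torch.nn.Module):
    """Encodes a stroke graph (or shape graph) into a fixed-size embedding."""

    def __init__(self, in_dim: int = 3, hidden_dim: int = 64, embed_dim: int = 128):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden_dim, heads=4, concat=True)
        self.conv2 = GATConv(hidden_dim * 4, embed_dim, heads=1, concat=False)

    def forward(self, x, edge_index, batch):
        # x: node features [num_nodes, in_dim]
        # edge_index: stroke-connectivity edges [2, num_edges]
        h = F.elu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        # Pool node embeddings into one vector per graph and L2-normalize,
        # so retrieval reduces to nearest-neighbor search in embedding space.
        return F.normalize(global_mean_pool(h, batch), dim=-1)


class SiameseRetriever(torch.nn.Module):
    """Siamese setup: the same encoder weights embed sketches and shapes."""

    def __init__(self):
        super().__init__()
        self.encoder = GraphAttentionEncoder()
        self.loss_fn = torch.nn.TripletMarginLoss(margin=0.2)

    def forward(self, sketch, pos_shape, neg_shape):
        # Each argument is a batched graph with .x, .edge_index, .batch fields.
        a = self.encoder(sketch.x, sketch.edge_index, sketch.batch)
        p = self.encoder(pos_shape.x, pos_shape.edge_index, pos_shape.batch)
        n = self.encoder(neg_shape.x, neg_shape.edge_index, neg_shape.batch)
        # Pull the sketch embedding toward its ground-truth shape and away
        # from a non-matching shape.
        return self.loss_fn(a, p, n)
```

Because the graph representation preserves stroke connectivity, the attention layers can aggregate information along strokes rather than over an unordered point set, which is the property the abstract credits for the method's fidelity; the triplet loss here is one standard choice of metric-learning objective, not necessarily the one used in the paper.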