Abstract:
We tackle the problem of single-shape 3D generation, aiming to synthesize diverse and plausible shapes conditioned on a single input exemplar. This task is challenging due to the absence of dataset-level variation, requiring models to internalize structural patterns and generate novel shapes from limited local geometric cues. To address this, we propose a unified framework combining geometry-aware representation learning with a multiscale diffusion process. Our approach centers on a triplane autoencoder enhanced with a spatial pattern predictor and attention-based feature fusion, enabling fine-grained perception of local structures. To preserve structural coherence during generation, we introduce a soft feature distribution alignment loss that aligns features between input and generated shapes, balancing fidelity and diversity. Finally, we adopt a hierarchical diffusion strategy that progressively refines triplane features from coarse to fine, stabilizing training and improving quality. Extensive experiments demonstrate that our method produces high-fidelity, structurally consistent, and diverse shapes, establishing a strong baseline for single-shape generation.
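The soft feature distribution alignment loss mentioned in the abstract is not specified on this page. The following is a minimal PyTorch sketch, assuming a per-channel moment-matching form over triplane feature maps; the function name `soft_alignment_loss`, the choice of statistics, and the softness weight `tau` are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def soft_alignment_loss(feat_gen: torch.Tensor,
                        feat_ref: torch.Tensor,
                        tau: float = 0.5) -> torch.Tensor:
    """Softly align the feature distribution of a generated shape
    with that of the input exemplar.

    feat_gen, feat_ref: (B, C, H, W) triplane feature maps.
    tau: softness weight (assumed); smaller values relax the
         alignment, trading fidelity for diversity.

    NOTE: this per-channel moment-matching form is an assumed
    stand-in, not the paper's exact loss.
    """
    # First- and second-order statistics per channel over spatial dims.
    mu_g, mu_r = feat_gen.mean(dim=(2, 3)), feat_ref.mean(dim=(2, 3))
    sd_g, sd_r = feat_gen.std(dim=(2, 3)), feat_ref.std(dim=(2, 3))
    # Penalize mismatch of the statistics rather than the raw features,
    # so generated shapes are nudged toward the exemplar's feature
    # distribution without being pinned to its exact geometry.
    return tau * (F.mse_loss(mu_g, mu_r) + F.mse_loss(sd_g, sd_r))
```

In training, a term like this would be added to the diffusion objective, e.g. loss = loss_diffusion + soft_alignment_loss(feat_gen, feat_ref), with tau controlling the fidelity/diversity trade-off.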
Source: COMPUTERS & GRAPHICS-UK
ISSN: 0097-8493
Year: 2025
Volume: 132
Impact Factor: 2.500 (JCR 2023)