3D Neural Field Generation using Triplane Diffusion | CVPR 2023

J. Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, Gordon Wetzstein

State-of-the-art 3D diffusion model for unconditional object generation.

ABSTRACT

Diffusion models have emerged as the state-of-the-art for image generation, among other tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural fields. Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields and factoring them into a set of axis-aligned triplane feature representations. Thus, our 3D training scenes are all represented by 2D feature planes, and we can directly train existing 2D diffusion models on these representations to generate 3D neural fields with high quality and diversity, outperforming alternative approaches to 3D-aware generation. Our approach requires essential modifications to existing triplane factorization pipelines to make the resulting features easy to learn for the diffusion model. We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
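As a rough illustration of the triplane representation described above, the sketch below queries occupancy at 3D points by projecting each point onto three axis-aligned feature planes, bilinearly sampling plane features, and decoding the aggregated feature with a small shared MLP. This is a minimal sketch, not the paper's code: the channel count, summation-based aggregation, and decoder depth are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneOccupancyField(nn.Module):
    """Minimal triplane neural field sketch (illustrative configuration)."""
    def __init__(self, channels=32, resolution=128, hidden=64):
        super().__init__()
        # Three axis-aligned feature planes: XY, XZ, YZ.
        self.planes = nn.Parameter(torch.randn(3, channels, resolution, resolution) * 0.01)
        # Small shared MLP decoder mapping an aggregated feature to occupancy.
        self.decoder = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points in [-1, 1]^3.
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # project onto each plane
        feats = 0
        for plane, uv in zip(self.planes, coords):
            # grid_sample expects a (B, C, H, W) input and a (B, H_out, W_out, 2) grid.
            grid = uv.view(1, -1, 1, 2)
            sampled = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)  # (1, C, N, 1)
            feats = feats + sampled.view(plane.shape[0], -1).t()  # (N, C), aggregate by summation
        return self.decoder(feats)  # (N, 1) occupancy logits

# Example: query occupancy at 4096 random points.
field = TriplaneOccupancyField()
occ = field(torch.rand(4096, 3) * 2 - 1)  # (4096, 1)
```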

CITATION

J. R. Shue, E. R. Chan, R. Po, Z. Ankner, J. Wu, G. Wetzstein, 3D Neural Field Generation using Triplane Diffusion, CVPR 2023.

@inproceedings{Shue2023triplanediffusion,
  author    = {J. R. Shue and E. R. Chan and R. Po and Z. Ankner and J. Wu and G. Wetzstein},
  title     = {3D Neural Field Generation using Triplane Diffusion},
  booktitle = {CVPR},
  year      = {2023},
}

OVERVIEW AND RESULTS

We generate 3D neural fields using a novel triplane-based diffusion model: a 2D diffusion network denoises the triplane feature planes, which are then decoded into continuous 3D occupancy fields.
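Because each training shape is stored as 2D feature planes, generating a new 3D object amounts to sampling a multi-channel 2D image with an off-the-shelf diffusion model and decoding it with a shared MLP like the one sketched above. The sketch below shows a generic DDPM-style ancestral sampling loop over a stacked triplane; the `denoiser` network, noise schedule, and tensor sizes are placeholder assumptions rather than the paper's implementation.

```python
import torch

@torch.no_grad()
def sample_triplanes(denoiser, steps=1000, channels=96, resolution=128, device="cpu"):
    """DDPM-style ancestral sampling over a triplane treated as a (3*C)-channel 2D image.
    `denoiser` is a hypothetical noise-prediction UNet; the schedule is illustrative."""
    betas = torch.linspace(1e-4, 0.02, steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, channels, resolution, resolution, device=device)  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([t], device=device))  # predicted noise at step t
        # Posterior mean: (x - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    # Reshape back into three axis-aligned planes for the occupancy decoder.
    return x.view(3, channels // 3, resolution, resolution)
```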

Please see additional results on the project website!

RELATED PROJECTS

You may also be interested in related projects on neural scene representations, such as:

  • Chan et al. EG3D. CVPR 2022 (link)
  • Chan et al. pi-GAN. CVPR 2021 (link)
  • Sitzmann et al. Implicit Neural Representations with Periodic Activation Functions. NeurIPS 2020 (link)