Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models

Alexander W. Bergman, Wang Yifan, Gordon Wetzstein

Generation of articulable head avatars from text prompts using diffusion models.

Starting from a parametric head model, we optimize its geometry and texture to fit a text prompt. The generated avatar can easily be articulated.

ABSTRACT

The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education. Recent work on text-guided 3D object generation has shown great promise in addressing these needs. These methods directly leverage pre-trained 2D text-to-image diffusion models to generate multi-view-consistent 3D radiance fields of generic objects. However, due to the lack of geometry and texture priors, these methods have limited control over the generated 3D objects, making it difficult to operate inside a specific domain, e.g., human heads. In this work, we develop a new approach to text-guided 3D head avatar generation to address this limitation. Our framework directly operates on the geometry and texture of an articulable 3D morphable model (3DMM) of a head, and introduces novel optimization procedures to update the geometry and texture while keeping the 2D and 3D facial features aligned. The result is a 3D head avatar that is consistent with the text description and can be readily articulated using the deformation model of the 3DMM. We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task. The latter are typically based on CLIP, which is known to provide limited diversity and accuracy for 3D object generation.

FILES

  • Technical paper and supplement (link to arxiv)
  • Code (coming soon)

CITATION

A.W. Bergman, W. Yifan, G. Wetzstein, Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models, arXiv preprint arXiv:2307.04859, 2023.

@article{bergman2023texttohead,
author = {Bergman, Alexander W. and Yifan, Wang and Wetzstein, Gordon},
title = {Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models},
journal = {arXiv preprint arXiv:2307.04859},
year = {2023},
}

METHOD

Overview of our method. We represent a 3D avatar using a textured 3DMM. Given a text prompt, we optimize a neural texture map, geometry (stored in a geometry network), the shape blendshape parameters of the 3DMM, and per-vertex features of the 3DMM. During each optimization step, we sample a new expression, joint location, and camera position. We then use a differentiable mesh rasterizer to produce a feature map, which is processed and passed through a latent diffusion decoder to produce an output image. Score distillation sampling provides gradients from the diffusion network that update the trainable scene parameters to match the given text prompt. A segmentation loss ensures texture-geometry alignment.
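For intuition, the following is a minimal, self-contained PyTorch sketch of a score distillation sampling (SDS) update loop of the kind described above. The differentiable rasterizer and the frozen text-conditioned diffusion model are replaced by tiny stand-in modules so the snippet runs on its own; names such as rasterize_features and frozen_unet are placeholders and are not part of our released code.

import torch
import torch.nn as nn

# Stand-in modules: in the real system these would be a differentiable mesh
# rasterizer over the 3DMM and a frozen text-conditioned diffusion U-Net.
rasterize_features = nn.Sequential(nn.Linear(64, 4 * 64 * 64))      # params -> latent "feature map"
frozen_unet = nn.Conv2d(4, 4, 3, padding=1).requires_grad_(False)   # stand-in noise predictor

# Trainable scene parameters (neural texture, geometry-network weights,
# blendshape coefficients, per-vertex features), collapsed into one vector here.
scene_params = nn.Parameter(torch.zeros(64))
optimizer = torch.optim.Adam([scene_params], lr=1e-2)

for step in range(100):
    # 1. Sample a random expression/pose/camera (abstracted away here) and
    #    render a latent feature map of the avatar.
    latents = rasterize_features(scene_params).view(1, 4, 64, 64)

    # 2. Add noise at a random diffusion timestep t (toy cosine schedule).
    t = torch.randint(20, 980, (1,))
    noise = torch.randn_like(latents)
    alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2
    noisy = alpha_bar.sqrt() * latents + (1 - alpha_bar).sqrt() * noise

    # 3. Predict the noise with the frozen diffusion model and form the
    #    SDS gradient w(t) * (eps_pred - eps).
    with torch.no_grad():
        eps_pred = frozen_unet(noisy)
    grad = (1 - alpha_bar) * (eps_pred - noise)

    # 4. Backpropagate the SDS gradient through the renderer into the
    #    trainable scene parameters (the U-Net Jacobian is skipped).
    optimizer.zero_grad()
    latents.backward(gradient=grad)
    optimizer.step()

In the full pipeline, the rendered feature map is additionally decoded by the latent diffusion decoder before supervision, and the segmentation loss mentioned above is added to keep the optimized texture and geometry aligned.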

Related Projects

You may also be interested in related projects on generative 3D models:

  • Po et al. Compositional 3D Scene Generation, arXiv 2023 (link)
  • Chan et al. GeNVS, 2023 (link)
  • Deng et al. LumiGAN, 2023 (link)
  • Bergman et al. GNARF, NeurIPS 2022 (link)

RESULTS

Novel View Synthesis



Prompts: “Shrek”, “Batman”, “Santa Claus”, “a Jedi”
The generated head avatars can be rendered from novel viewpoints.

Articulated Expressions




Prompts: “Dwayne Johnson”, “a robot”
Generated head avatars are aligned with a 3DMM and are thus articulable. We drive the generated avatars with expression parameters of the FLAME model.
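Because the generated geometry stays in correspondence with the FLAME topology, articulation reduces to the standard linear blendshape formulation. Below is a toy, self-contained illustration with placeholder template and basis tensors (not the actual FLAME assets); the real model additionally includes shape blendshapes, pose correctives, and skinned jaw, neck, and eye joints.

import torch

# Toy stand-ins for FLAME assets: a template mesh and a linear expression
# blendshape basis. FLAME uses 5023 vertices and 100 expression coefficients.
num_vertices, num_expr = 5023, 100
template = torch.zeros(num_vertices, 3)                      # placeholder neutral geometry
expr_basis = torch.randn(num_vertices, 3, num_expr) * 1e-3   # placeholder blendshape basis

def articulate(expression: torch.Tensor) -> torch.Tensor:
    """Offset the generated head geometry with expression blendshapes."""
    # vertices = template + sum_k expression[k] * expr_basis[..., k]
    return template + expr_basis @ expression

# Driving the avatar: any stream of expression coefficients (e.g. tracked
# from video) produces a corresponding deformed mesh.
expression = torch.zeros(num_expr)
expression[0] = 2.0                          # exaggerate the first expression component
deformed_vertices = articulate(expression)   # shape (5023, 3)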

Additional Examples


Prompts: “a man with a large afro”, “a zombie”

Prompts: “Shaq”, “an alien”