DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors

Thomas Zhu1,2†      Ruining Li1*      Tomas Jakab1*
1University of Oxford    2Carnegie Mellon University
arXiv preprint
†Work done while at the University of Oxford
*Equal advising

DreamHOI takes as input a skinned human model, an object mesh, and a textual description of the intended interaction between them. It then poses the human model to create a realistic interaction. Given a fixed object, the generated poses vary faithfully with the intended interaction; given the same interaction description, the generated pose naturally conforms to the geometry of the particular object being interacted with.

Abstract

We present DreamHOI, a novel method for zero-shot synthesis of human-object interactions (HOIs), enabling a 3D human model to realistically interact with any given object based on a textual description. This task is complicated by the varying categories and geometries of real-world objects and the scarcity of datasets encompassing diverse HOIs. To circumvent the need for extensive data, we leverage text-to-image diffusion models trained on billions of image-caption pairs. We optimize the articulation of a skinned human mesh using Score Distillation Sampling (SDS) gradients obtained from these models, which predict image-space edits. However, directly backpropagating image-space gradients into complex articulation parameters is ineffective due to the local nature of such gradients. To overcome this, we introduce a dual implicit-explicit representation of a skinned mesh, combining (implicit) neural radiance fields (NeRFs) with (explicit) skeleton-driven mesh articulation. During optimization, we transition between implicit and explicit forms, grounding the NeRF generation while refining the mesh articulation. We validate our approach through extensive experiments, demonstrating its effectiveness in generating realistic HOIs.
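For reference, the SDS gradients mentioned above follow the standard Score Distillation Sampling formulation (Poole et al., DreamFusion); the exact weighting used by DreamHOI is not restated here. Writing x = g(θ) for a rendering of the scene with parameters θ, the gradient is

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right],
```

where x_t is the rendering noised to timestep t, y is the text prompt, ε̂_φ is the diffusion model's noise prediction, and w(t) is a timestep-dependent weighting. Because this gradient lives in image space, backpropagating it directly into articulation parameters is ineffective, which motivates the dual representation described below.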

Method

DreamHOI takes as input a human identity (a skinned body mesh), an object mesh MObj (e.g., a 3D chair), and a textual prompt describing their intended interaction (e.g., "sit"). It first fits a NeRF fθ0 for the human using a mixture of diffusion guidance and regularizers, and then estimates the corresponding body pose ξ0. The posed human mesh Mξt is used to re-initialize and further optimize the NeRF fθt for iterations t ≤ T. The final output is the posed human MξT from the last iteration.
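The alternating implicit-explicit optimization can be sketched as follows. This is a minimal illustration of the loop structure only, not the released DreamHOI code: `fit_nerf`, `estimate_pose`, and `pose_mesh` are hypothetical placeholders standing in for SDS-guided NeRF optimization, skeleton-pose fitting, and linear blend skinning, respectively.

```python
# Sketch of DreamHOI's alternating implicit-explicit loop (hypothetical helpers).

def fit_nerf(posed_mesh, object_mesh, prompt):
    """Placeholder: optimize NeRF parameters theta with diffusion (SDS) guidance
    and regularizers, re-initialized from the posed mesh when one is given."""
    return {"theta": 0.0, "init": posed_mesh}

def estimate_pose(nerf_params, skinned_mesh):
    """Placeholder: fit skeleton pose xi so the articulated body mesh
    matches the human captured by the NeRF."""
    return {"xi": nerf_params["theta"]}

def pose_mesh(skinned_mesh, pose):
    """Placeholder: articulate the skinned body mesh with pose xi
    (e.g., via linear blend skinning)."""
    return {"mesh": skinned_mesh, "pose": pose}

def dreamhoi(skinned_mesh, object_mesh, prompt, num_iters=2):
    posed_mesh = None
    for t in range(num_iters + 1):                        # iterations t = 0..T
        theta = fit_nerf(posed_mesh, object_mesh, prompt)  # implicit stage
        pose = estimate_pose(theta, skinned_mesh)          # explicit stage
        posed_mesh = pose_mesh(skinned_mesh, pose)         # grounds the next NeRF
    return posed_mesh                                      # posed human at t = T

result = dreamhoi("body_mesh", "chair_mesh", "a person sitting on a chair")
```

The key design point this sketch captures is the round-trip: each explicit pose re-grounds the next implicit NeRF fit, so image-space diffusion gradients never have to flow directly into the articulation parameters.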

More Results

Generated human poses from DreamHOI, shown for each of five input human identities (and an object-only baseline) in the interactive gallery.

BibTeX

@article{zhu2024dreamhoi,
  title   = {{DreamHOI}: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors},
  author  = {Thomas Hanwen Zhu and Ruining Li and Tomas Jakab},
  journal = {arXiv preprint arXiv:2409.08278},
  year    = {2024}
}