MaGRITTe: Manipulative and Generative 3D Realization from Image, Topview and Text

Takayuki Hara1 and Tatsuya Harada1,2
1The University of Tokyo, 2RIKEN

MaGRITTe generates 3D scenes from an image, a top view (floor plan or terrain map), and text prompts.

[Figure: generated scene examples with labeled objects: table, sink, window, door, TV, sofa, chair]

Input images include photos taken by the author as well as images downloaded from external websites.


The generation of 3D scenes from user-specified conditions offers a promising avenue for alleviating the production burden in 3D applications. Previous studies required significant effort to realize the desired scene owing to limited control conditions. We propose a method for controlling and generating 3D scenes under multimodal conditions using partial images, layout information represented in the top view, and text prompts. Combining these conditions to generate a 3D scene involves the following significant difficulties: (1) creating large datasets, (2) reflecting the interactions of the multimodal conditions, and (3) the domain dependence of the layout conditions. We decompose the process of 3D scene generation into 2D image generation from the given conditions and 3D scene generation from 2D images. 2D image generation is achieved by fine-tuning a pretrained text-to-image model on a small artificial dataset of partial images and layouts, and 3D scene generation is achieved by layout-conditioned depth estimation and neural radiance fields (NeRF), thereby avoiding the creation of large datasets. Using 360-degree images as a common representation of spatial information allows the interactions of the multimodal conditions to be considered and reduces the domain dependence of the layout control. Experimental results demonstrate, both qualitatively and quantitatively, that the proposed method can generate 3D scenes in diverse domains, from indoor to outdoor, according to multimodal conditions.
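One ingredient of a pipeline like the one described above is relating a top-view layout to a 360-degree (panoramic) representation seen from a camera inside the scene. The sketch below illustrates this idea in a minimal form: it ray-casts a top-view occupancy grid from a camera position and records, for each azimuth, the distance to the nearest occupied cell, yielding a coarse panoramic depth strip. This is not the paper's implementation; the function name and all parameters are hypothetical, and the paper's actual layout conditioning is learned rather than hand-coded.

```python
import numpy as np

def topview_to_panorama(grid, cam, n_az=360, max_dist=50.0, step=0.1):
    """Cast one ray per azimuth from camera position `cam` (row, col)
    through a boolean top-view occupancy `grid`, returning the distance
    (in cells) to the first occupied cell, or np.inf if the ray leaves
    the grid or exceeds `max_dist` without hitting anything.

    Hypothetical illustration, not the method from the paper.
    """
    h, w = grid.shape
    dists = np.full(n_az, np.inf)
    for i in range(n_az):
        theta = 2.0 * np.pi * i / n_az          # azimuth angle
        dy, dx = np.sin(theta), np.cos(theta)   # ray direction in grid coords
        t = step
        while t < max_dist:
            r = int(round(cam[0] + t * dy))
            c = int(round(cam[1] + t * dx))
            if not (0 <= r < h and 0 <= c < w):
                break                           # ray left the layout
            if grid[r, c]:
                dists[i] = t                    # first hit: record depth
                break
            t += step
    return dists
```

A panoramic strip like this could then be compared or combined with the depth of a generated 360-degree image; the step size trades accuracy against cost, and a real implementation would use an exact grid traversal (e.g. a DDA) instead of fixed-step sampling.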


Additional examples


@article{hara2024magritte,
        title={MaGRITTe: Manipulative and Generative 3D Realization from Image, Topview and Text},
        author={Takayuki Hara and Tatsuya Harada},
        journal={arXiv preprint arXiv:2404.00345},
        year={2024}
}