SceneComposer: Any-Level Semantic Image Synthesis
CVPR 2023 (Highlight)
Yu Zeng1
Zhe Lin2
Jianming Zhang2
Qing Liu2
John Collomosse2
Jason Kuen2
Vishal M. Patel1
1Johns Hopkins University
2Adobe Research
[Paper]
[GitHub]
[Web App]
Examples of image synthesis from any-level semantic layouts. (a) The coarsest layout, i.e. at the 0-th precision level, is equivalent to a text input; (b)-(c) intermediate-level layouts (from coarse to fine), where shape control becomes tighter as the level increases; (d) the finest layout, i.e. at the highest level, is close to an accurate segmentation map. (e) We can specify different precision levels for different components, e.g. a 0-th level style indicator while the remaining regions are at higher levels.

Abstract

We propose a new framework for conditional image synthesis from semantic layouts of any precision level, ranging from pure text to a 2D semantic canvas with precise shapes. More specifically, the input layout consists of one or more semantic regions with free-form text descriptions and adjustable precision levels, which can be set according to the desired controllability. The framework naturally reduces to text-to-image (T2I) at the lowest level, with no shape information, and becomes segmentation-to-image (S2I) at the highest level. By supporting the levels in between, our framework is flexible in assisting users of different drawing expertise and at different stages of their creative workflow. We introduce several novel techniques to address the challenges arising from this new setup, including a pipeline for collecting training data; a precision-encoded mask pyramid and a text feature map representation that jointly encode precision level, semantics, and composition; and a multi-scale guided diffusion model to synthesize images. To evaluate the proposed method, we collect a test dataset containing user-drawn layouts with diverse scenes and styles. Experimental results show that the proposed method can generate high-quality images following the layout at the given precision, and compares favorably against existing methods.
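To make the input format concrete, below is a minimal sketch of what an any-level layout might look like as a data structure; the `Region` and `Layout` classes and their field names are illustrative assumptions, not the paper's released API.

```python
# Illustrative sketch of an any-level layout (names are assumptions,
# not the paper's API). Each region pairs a free-form description with
# an optional shape mask and a precision level.
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class Region:
    text: str                   # free-form text description
    mask: Optional[np.ndarray]  # binary shape mask; None when no shape is given
    level: int                  # precision level: 0 = text only, higher = tighter shape

@dataclass
class Layout:
    regions: List[Region]

# At level 0 with no mask, the layout reduces to plain text-to-image (T2I):
t2i = Layout([Region("a tornado made of bees", mask=None, level=0)])

# At the highest level with tight masks, it behaves like segmentation-to-image (S2I):
h = w = 256
sky = np.zeros((h, w), dtype=bool)
sky[: h // 2] = True
s2i = Layout([
    Region("sky", mask=sky, level=6),
    Region("meadow", mask=~sky, level=6),
])
```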


Web App

Please fill out this agreement form to request access. We will send you a unique link to access the demo.



Demo Video



Method

An overview of the proposed method. We provide an intuitive interface where users can easily define a layout using a semantic brush associated with a free-form text description and an adjustable precision level. The masks, regional descriptions, and precision levels are jointly encoded into a text feature pyramid and then translated into an image by a multi-scale guided diffusion model.
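The following sketch illustrates one plausible reading of this encoding step, reusing the `Region` class from the sketch above: each pyramid scale holds a text-embedding feature map, and a region's mask is rasterized only at scales no finer than its precision level, so low-precision regions constrain the composition coarsely while high-precision regions keep their shape at every scale. `encode_text`, the scale schedule, and the overwrite order are assumptions, not the released implementation.

```python
# Sketch of a precision-encoded text feature pyramid (assumptions throughout;
# reuses the `Region` class from the sketch above). A region's shape is kept
# only at pyramid scales no finer than its precision level; regions without
# a mask contribute their text globally. Later regions overwrite earlier
# ones, so list a global level-0 description first.
import numpy as np

def encode_text(text: str, dim: int = 8) -> np.ndarray:
    # Toy stand-in for a real text encoder (e.g. CLIP): a deterministic
    # pseudo-random embedding derived from the string.
    seed = sum(ord(c) for c in text)
    return np.random.default_rng(seed).standard_normal(dim)

def downsample(mask: np.ndarray, factor: int) -> np.ndarray:
    # Max-pool a binary mask by an integer factor.
    h, w = mask.shape
    return mask.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))

def text_feature_pyramid(regions, size=64, num_scales=4, dim=8):
    pyramid = []
    for s in range(num_scales):  # s = 0 is the coarsest scale
        res = size // 2 ** (num_scales - 1 - s)
        feat = np.zeros((dim, res, res))
        for r in regions:
            if r.mask is None:            # no shape: text applies everywhere
                m = np.ones((res, res), dtype=bool)
            elif s <= r.level:            # shape constrains this scale
                m = downsample(r.mask, size // res).astype(bool)
            else:                         # scale is finer than the precision
                continue
            feat[:, m] = encode_text(r.text, dim)[:, None]
        pyramid.append(feat)
    return pyramid
```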


Results at Different Levels

Level 0 · Level 3 · Level 4 · Level 5 · Level 6



Results with Region-Specific Levels

Numbers indicate the precision levels. We sample five images from an input layout; objects with a higher precision level have less varied shapes.
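As an illustration, region-specific levels could be expressed in the layout sketch from above as follows; the masks and prompts here are made up for the example.

```python
# Mixing precision levels in one layout (toy masks stand in for user brush
# strokes; reuses Region / text_feature_pyramid from the sketches above).
import numpy as np

h = w = 64
castle = np.zeros((h, w), dtype=bool); castle[24:56, 16:48] = True
clouds = np.zeros((h, w), dtype=bool); clouds[4:16, 8:56] = True

regions = [
    Region("a watercolor painting", mask=None, level=0),  # global style cue
    Region("a castle", mask=castle, level=5),             # tight shape control
    Region("clouds", mask=clouds, level=2),               # loose shape control
]
pyramid = text_feature_pyramid(regions)
# With num_scales=4, the clouds' mask is dropped at the finest scale
# (s=3 > level 2), so their shape varies more across samples than the castle's.
```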



Text-to-Image Generation

Prompts: "Albert Einstein in spacesuit on a horse"; "A tornado made of bees crashing into a skyscraper, painting in the style of watercolor."
Columns (per prompt): w/o Layout · Layout Guidance · with Layout



Segmentation-to-Image Generation

Columns (per example): Input · SPADE · Ours



Inpainting/Editing Results

Columns (per example): Image · Layout · Result



Concept Interpolation

"a photo of zebra" ↔ "a photo of watermelon"
"a photo of bird" ↔ "a photo of cat"
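One plausible way to realize such interpolation, sketched below, is to linearly blend the two prompts' text embeddings before placing them into the feature map; this reuses the toy `encode_text` above and is an assumption about the mechanism, not the paper's exact procedure.

```python
# Sketch: blend two concepts by interpolating their text embeddings
# (uses the toy encode_text from above; a real system would use its
# text encoder and feed each blend through the diffusion model).
import numpy as np

emb_a = encode_text("a photo of zebra")
emb_b = encode_text("a photo of watermelon")
blends = [(1 - t) * emb_a + t * emb_b for t in np.linspace(0.0, 1.0, num=5)]
```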



Paper and Supplementary Material

SceneComposer: Any-Level Semantic Image Synthesis.
CVPR, 2023.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang.