Examples of image synthesis from any-level semantic layouts. (a) The coarsest layout, i.e., at the 0-th precision level, is equivalent to a text input; (d) the finest layout, at the highest level, is close to an accurate segmentation map; (b)-(c) show intermediate-level layouts (from coarse to fine): shape control becomes tighter as the level increases. (e) We can specify different precision levels for different components, e.g., a 0-th-level style indicator while the remaining regions are at higher levels.
We propose a new framework for conditional image synthesis from semantic layouts at any precision level, ranging from pure text to a 2D semantic canvas with precise shapes. More specifically, the input layout consists of one or more semantic regions with free-form text descriptions and adjustable precision levels, which can be set based on the desired controllability. The framework naturally reduces to text-to-image (T2I) at the lowest level with no shape information, and it becomes segmentation-to-image (S2I) at the highest level. By supporting the levels in between, our framework flexibly assists users of different drawing expertise and at different stages of their creative workflow. We introduce several novel techniques to address the challenges arising from this new setup, including a pipeline for collecting training data; a precision-encoded mask pyramid and a text feature map representation to jointly encode precision level, semantics, and composition information; and a multi-scale guided diffusion model to synthesize images. To evaluate the proposed method, we collect a test dataset containing user-drawn layouts with diverse scenes and styles. Experimental results show that the proposed method can generate high-quality images following the layout at the given precision, and compares favorably against existing methods.
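To make the precision-encoded representation concrete, here is a minimal sketch of how a per-region mask pyramid could encode a precision level: the mask is max-pooled to a fixed set of scales, and all scales finer than the region's level are zeroed out, so the model only receives shape information up to that precision. The function names, the number of levels, and the pooling scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def downsample_max(mask, factor):
    # Coarsen a binary mask by factor x factor max-pooling.
    h, w = mask.shape
    m = mask[: h - h % factor, : w - w % factor]
    return m.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))

def precision_encoded_pyramid(mask, level, num_levels=5):
    # Return masks from coarsest (level 0) to finest; entries finer than
    # `level` are zeroed so only coarse shape information survives.
    # NOTE: hypothetical sketch, not the paper's implementation.
    pyramid = []
    for l in range(num_levels):
        factor = 2 ** (num_levels - 1 - l)  # level 0 = coarsest scale
        scaled = downsample_max(mask, factor)
        if l > level:
            scaled = np.zeros_like(scaled)
        pyramid.append(scaled)
    return pyramid
```

Under this sketch, a region at level 0 contributes no spatial shape beyond a single coarse cell (text-like conditioning), while a region at the highest level keeps its full-resolution mask (segmentation-like conditioning), matching the T2I-to-S2I spectrum described above.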
Please fill out this agreement form to request access. We will send you a unique link to access the demo.
An overview of the proposed method. We provide an intuitive interface where users can easily define a layout using a semantic brush associated with a free-form text description and an adjustable precision level. The masks, regional descriptions, and precision levels are jointly encoded into a text feature pyramid, and then translated into an image by a multi-scale guided diffusion model.
Panel labels: Level 0 | Level 3 | Level 4 | Level 5 | Level 6. Numbers indicate the precision levels. We sample five images from an input layout; objects with a higher precision level have less varied shapes.
Albert Einstein in spacesuit on a horse | A tornado made of bees crashing into a skyscraper. painting in the style of watercolor. |
w/o Layout | Layout Guidance | with Layout
Input | SPADE | Ours
Image | Layout | Result
a photo of zebra ➡ a photo of watermelon |
a photo of bird ➡ a photo of cat |
SceneComposer: Any-Level Semantic Image Synthesis. Preprint, 2022 (hosted on arXiv).
Acknowledgements: This template was originally made by Phillip Isola and Richard Zhang.