Abstract

Consistency Models (CMs) have shown promise in creating visual content efficiently and with high quality. However, how to add new conditional controls to pretrained CMs has not been explored. In this technical report, we consider alternative strategies for adding ControlNet-like conditional control to CMs and present three significant findings. 1) A ControlNet trained for diffusion models (DMs) can be directly applied to CMs for high-level semantic control, but it struggles with low-level detail and realism control. 2) CMs serve as an independent class of generative models, on which a ControlNet can be trained from scratch using the Consistency Training technique proposed by Song et al. 3) A lightweight adapter can be jointly optimized under multiple conditions through Consistency Training, allowing for the swift transfer of a DM-based ControlNet to CMs. We study these three solutions across various conditional controls, including edge, depth, human pose, low-resolution images, and masked images, with text-to-image latent consistency models.
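To make the first finding concrete, below is a minimal sketch of reusing a DM-trained ControlNet with a latent consistency model for few-step sampling. It is not the setup used in this report: it relies on the public diffusers library, an LCM-LoRA, and a Canny ControlNet as illustrative stand-ins, and the checkpoints, resolution, and sampler settings are assumptions.

```python
# Minimal sketch (not the authors' exact setup): plug a ControlNet that was
# trained for a diffusion model into a consistency-style sampler and run
# 4-step inference. All checkpoint names below are illustrative public models.
import torch
from diffusers import ControlNetModel, LCMScheduler, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# ControlNet trained on a diffusion model (Canny edges as the control signal).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Swap in a consistency-style scheduler plus an LCM-LoRA so the pipeline
# behaves like a latent consistency model with few-step sampling.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

canny_image = load_image("edge_map.png")  # hypothetical precomputed edge map

image = pipe(
    prompt="a photo of a cat, best quality",
    image=canny_image,
    num_inference_steps=4,  # NFEs=4, matching the visual examples below
    guidance_scale=1.5,
).images[0]
image.save("controlled_sample.png")
```

In this kind of setup the DM-trained ControlNet transfers high-level layout and semantics, as described in the first finding; the second and third findings instead train a ControlNet from scratch or a lightweight adapter with Consistency Training, which this sketch does not cover.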

Visual Examples


Images generated using our re-trained Text-to-Image CM with 4-step inference. Image Resolution: 1024x1024.


Controllable Generation using ControlNet trained on diffusion models. NFEs=4. Image Resolution: 1024x1024.


Controllable Generation using ControlNet trained with consistency training. NFEs=4. Image Resolution: 1024x1024.


Visual results of the DM-based ControlNet without and with a unified adapter. NFEs=4. Image Resolution: 1024x1024.


BibTeX

@article{xiao2023ccm,
    title={CCM: Adding Conditional Controls to Text-to-Image Consistency Models},
    author={Jie Xiao and Kai Zhu and Han Zhang and Zhiheng Liu and Yujun Shen and Yu Liu and Xueyang Fu and Zheng-Jun Zha},
    year={2023},
    eprint={2312.06971},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}