RefDrop: Controllable Consistency in Image or Video Generation via Reference Feature Guidance
Jiaojiao Fan1    Haotian Xue1    Qinsheng Zhang2    Yongxin Chen1
1Georgia Tech    2NVIDIA
TLDR: Our novel self-attention layer boosts control over feature injection from single or multiple reference images, enhancing both subject consistency in image generation and temporal consistency in video generation.
Abstract
There is rapidly growing interest in controlling consistency across multiple generated images using diffusion models. Among various methods, recent works have found that simply manipulating attention modules by concatenating features from multiple reference images provides an efficient approach to enhancing consistency without fine-tuning. Despite its popularity and success, few studies have elucidated the underlying mechanisms that contribute to its effectiveness. In this work, we reveal that the popular approach is a linear interpolation of image self-attention and cross-attention between synthesized content and reference features, with a constant rank-1 coefficient. Motivated by this observation, we find that a rank-1 coefficient is not necessary, and removing this constraint simplifies the controllable generation mechanism. The resulting algorithm, which we coin RefDrop, allows users to control the influence of reference context in a direct and precise manner. Besides further enhancing consistency in single-subject image generation, our method also enables more interesting applications, such as consistent generation of multiple subjects, suppression of specific features to encourage more diverse content, and high-quality personalized video generation with improved temporal consistency. Even compared with state-of-the-art image-prompt-based generators, such as IP-Adapter, RefDrop is competitive in terms of controllability and quality while avoiding the need to train a separate image encoder for feature injection from reference images, making it a versatile plug-and-play solution for any image or video diffusion model.
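To make this observation concrete, here is one way to write it out (our notation, intended as a sketch rather than a verbatim restatement of the paper). Concatenating reference keys and values K_r, V_r into a self-attention layer gives, for each query token q_i,

\[
\mathrm{Attn}\big(q_i, [K; K_r], [V; V_r]\big)
  = \lambda_i \,\mathrm{Attn}(q_i, K, V) + (1 - \lambda_i)\,\mathrm{Attn}(q_i, K_r, V_r),
\qquad
\lambda_i = \frac{\sum_j \exp(q_i k_j^\top / \sqrt{d})}
                 {\sum_j \exp(q_i k_j^\top / \sqrt{d}) + \sum_j \exp(q_i k_{r,j}^\top / \sqrt{d})},
\]

so the mixing weight is fixed by the softmax; stacked over tokens and broadcast over channels, these weights form a rank-1 coefficient matrix. RefDrop instead exposes the weight as a user-chosen reference strength c, written here in the simplest form consistent with the framework description below (see the paper for the exact definition):

\[
\mathrm{RFG}(q_i) = (1 - c)\,\mathrm{Attn}(q_i, K, V) + c\,\mathrm{Attn}(q_i, K_r, V_r).
\]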
We allow flexible control over the reference effect through a reference strength coefficient.
Framework Overview
Overview of RefDrop Framework. During each diffusion denoising step, we inject features from a generated reference image into the generation process of other images through RFG (Reference Feature Guidance). The RFG layer produces a linear combination of the attention outputs from the standard and referenced routes. A negative coefficient c encourages the generated image I_i to diverge from the reference I_1, while a positive coefficient fosters consistency between them.
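Below is a minimal PyTorch sketch of an RFG-style attention layer, assuming the simple affine combination described in the caption above; it is our illustration of the idea, not the authors' released code. Here ref_k and ref_v stand for keys and values cached from the reference image's pass through the same layer, and c is the reference strength coefficient.

import torch
import torch.nn.functional as F

def rfg_attention(q, k, v, ref_k, ref_v, c):
    # q, k, v:      (batch, heads, tokens, dim) features of the image being generated
    # ref_k, ref_v: (batch, heads, ref_tokens, dim) features cached from the reference image
    # c:            reference strength; c > 0 promotes consistency, c < 0 promotes divergence
    standard = F.scaled_dot_product_attention(q, k, v)            # standard self-attention route
    referenced = F.scaled_dot_product_attention(q, ref_k, ref_v)  # cross-attention to the reference
    return (1.0 - c) * standard + c * referenced                  # linear combination of both routes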
Experimental Results

Consistent image generation

By using a single generated image as a reference and applying a positive reference strength, we enhance subject consistency across multiple generated images.


Blending multiple reference images

By using multiple generated images as references and applying a positive reference strength, we can create a single cohesive object that merges features from all the references.
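One plausible way to extend the single-reference rule to several references is to give each reference its own strength and keep the remaining weight on the standard route. The sketch below is an illustration consistent with the overview above, not necessarily the paper's exact blending rule.

import torch
import torch.nn.functional as F

def rfg_attention_multi(q, k, v, refs, strengths):
    # refs:      list of (ref_k, ref_v) pairs cached from each reference image
    # strengths: one reference strength per reference; their sum is the total reference weight
    out = (1.0 - sum(strengths)) * F.scaled_dot_product_attention(q, k, v)
    for (ref_k, ref_v), c in zip(refs, strengths):
        out = out + c * F.scaled_dot_product_attention(q, ref_k, ref_v)
    return out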

Diverse image generation

By using a single generated image as a reference and applying a negative reference strength, we can enhance diversity across generated images.
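For both the consistency setting above and this diversity setting, the only change is the sign of the reference strength. A toy call with random features, reusing the helper from the framework sketch (shapes and strength values are illustrative, not taken from the paper):

import torch
import torch.nn.functional as F

def rfg_attention(q, k, v, ref_k, ref_v, c):  # same helper as in the framework sketch above
    return (1.0 - c) * F.scaled_dot_product_attention(q, k, v) \
        + c * F.scaled_dot_product_attention(q, ref_k, ref_v)

q = k = v = torch.randn(1, 8, 64, 40)      # features of the image being generated
ref_k = ref_v = torch.randn(1, 8, 64, 40)  # features cached from the reference image

consistent = rfg_attention(q, k, v, ref_k, ref_v, c=0.4)  # positive strength: pull toward the reference
diverse = rfg_attention(q, k, v, ref_k, ref_v, c=-0.3)    # negative strength: push away from the reference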


Improving temporal consistency in video generation

Please see this link for video visualization.

By using the first frame as a reference and applying a positive reference strength, we can enhance temporal consistency in video generation.

BibTeX
@article{fan2024refdrop,
    title={RefDrop: Controllable Consistency in Image or Video Generation via Reference Feature Guidance},
    author={Fan, Jiaojiao and Xue, Haotian and Zhang, Qinsheng and Chen, Yongxin},
    journal={arXiv preprint arXiv:2405.17661},
    year={2024}
}