RL for Consistency Models: Reward Guided Text-to-Image Generation with Fast Inference

By Owen Oertell, Jonathan Daniel Chang, Yiyi Zhang, Kianté Brantley, and Wen Sun

Reinforcement Learning Journal, vol. 4, 2024, pp. 1656–1673.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Reinforcement learning (RL) has improved guided image generation with diffusion models by directly optimizing rewards that capture image quality, aesthetics, and instruction-following capabilities. However, the resulting generative policies inherit the same iterative sampling process of diffusion models that causes slow generation. To overcome this limitation, consistency models were proposed as a new class of generative models that directly map noise to data, resulting in a model that can generate an image in as few as one sampling iteration. In this work, to optimize text-to-image generative models for task-specific rewards and enable fast training and inference, we propose a framework for fine-tuning consistency models via RL. Our framework, called Reinforcement Learning for Consistency Model (RLCM), frames the iterative inference process of a consistency model as an RL procedure. Compared to RL-finetuned diffusion models, RLCM trains significantly faster, improves the quality of the generation measured under the reward objectives, and speeds up the inference procedure by generating high-quality images with as few as two inference steps. Experimentally, we show that RLCM can adapt text-to-image consistency models to objectives that are challenging to express with prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Our code is available at https://rlcm.owenoertell.com.
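
To make the core idea concrete, the sketch below illustrates how multistep consistency-model inference can be framed as an RL procedure: each denoising step re-noises the consistency function's output, which makes the transition a Gaussian policy whose log-probability supports a REINFORCE-style update against a reward on the final sample. This is a minimal toy sketch, not the released RLCM implementation (see https://rlcm.owenoertell.com); ToyConsistencyModel, reward_fn, and the sigmas schedule are hypothetical stand-ins for a latent consistency model, a learned reward (e.g., aesthetic score), and the paper's noise schedule.

import torch
import torch.nn as nn

class ToyConsistencyModel(nn.Module):
    # Toy stand-in: maps (noisy sample, noise level) -> denoised sample.
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim)
        )

    def forward(self, x, sigma):
        return self.net(torch.cat([x, sigma], dim=-1))

def reward_fn(x):
    # Placeholder reward; the paper uses e.g. compressibility or aesthetic scores.
    return -x.pow(2).sum(dim=-1)

model = ToyConsistencyModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sigmas = [2.0, 1.0, 0.5]  # decreasing noise levels for multistep inference
batch, dim = 8, 16

for _ in range(100):
    x = torch.randn(batch, dim) * sigmas[0]  # initial state: pure noise
    logps = []
    for k, sigma in enumerate(sigmas):
        s = torch.full((batch, 1), sigma)
        mu = model(x, s)                     # consistency step: predict clean sample
        if k + 1 < len(sigmas):
            # Re-noising to the next level defines a Gaussian policy over states.
            dist = torch.distributions.Normal(mu, sigmas[k + 1])
            x = dist.sample()                # stochastic "action" in the MDP
            logps.append(dist.log_prob(x).sum(-1))
        else:
            x = mu                           # final step emits the clean sample
    r = reward_fn(x)
    advantage = (r - r.mean()) / (r.std() + 1e-8)  # batch baseline for variance reduction
    loss = -(advantage.detach() * torch.stack(logps).sum(0)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Because only the log-probabilities of the sampled transitions carry gradient, this is a plain policy-gradient update over the inference chain; the short horizon (here three steps) is what allows training and inference to be much faster than with RL-finetuned diffusion models.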


Citation Information:

Owen Oertell, Jonathan Daniel Chang, Yiyi Zhang, Kianté Brantley, and Wen Sun. "RL for Consistency Models: Reward Guided Text-to-Image Generation with Fast Inference." Reinforcement Learning Journal, vol. 4, 2024, pp. 1656–1673.

BibTeX:

@article{oertell2024consistency,
    title={{RL} for Consistency Models: {R}eward Guided Text-to-Image Generation with Fast Inference},
    author={Oertell, Owen and Chang, Jonathan Daniel and Zhang, Yiyi and Brantley, Kiant{\'{e}} and Sun, Wen},
    journal={Reinforcement Learning Journal},
    volume={4},
    pages={1656--1673},
    year={2024}
}