This repository provides the official PyTorch implementation for DCPO.
By: Amir Saeidi*, Yiran Luo*, Agneet Chatterjee, Shamanthak Hegde, Bimsara Pathiraja, Yezhou Yang, Chitta Baral
(* indicates equal contribution)
We ran our experiments on a node of 8 A100 (80GB) GPUs, but `dcpo_trainer.py` can also run on a single GPU with at least 40GB of VRAM.
Create a Python virtual environment with your favorite package manager. After activating the environment, install PyTorch; we recommend following the official instructions on the PyTorch website. For the remaining packages, refer to the installation guide of the diffusers library.
We performed our experiments on the Pick-Double Caption dataset and a 20,000-sample subset of the Pick-a-Pic V2 dataset.
Below is an example training command for a single-GPU run:
```bash
accelerate launch dcpo_trainer.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --caption_1="caption_1" \
  --caption_0="caption_0" \
  --train_batch_size=1 \
  --dataloader_num_workers=16 \
  --gradient_accumulation_steps=128 \
  --max_train_steps=2000 \
  --lr_scheduler="constant_with_warmup" \
  --lr_warmup_steps=500 \
  --learning_rate=1e-08 \
  --scale_lr \
  --checkpointing_steps 500 \
  --output_dir=$OUTDIR \
  --mixed_precision="fp16" \
  --beta_dpo 500
```
Note: `caption_0` and `caption_1` refer to the captions of image 0 and image 1 in the Pick-Double Caption dataset, analogous to the image fields in the Pick-a-Pic dataset.
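If it helps to sanity-check those fields, here is a minimal loading sketch. The dataset identifier below is a hypothetical placeholder; substitute the actual Pick-Double Caption dataset from our Hugging Face organization.

```python
from datasets import load_dataset

# Hypothetical dataset id -- replace with the actual Pick-Double Caption
# dataset from our Hugging Face organization.
ds = load_dataset("DualCPO/pick-double-caption", split="train")

print(ds.column_names)      # expect caption_0 and caption_1 among the columns
sample = ds[0]
print(sample["caption_0"])  # caption describing image 0
print(sample["caption_1"])  # caption describing image 1
```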
To create the Pick-Double Caption dataset, we first generated captions for the preferred and less-preferred images in the yuvalkirstain/pickapic_v2 dataset using the LLaVA-v1.6-34B and Emu2 models. Then, we used the DIPPER model to perturb the generated captions of the less-preferred images. We refer readers to Appendix D of the DCPO paper for more information about the perturbation.
*Examples from the Pick-Double Caption dataset.*
Below is an example perturbation command:
```bash
python perturb_caption.py \
  --hf_file=$DATASET_NAME \
  --out_folder=$OUTDIR
```
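For intuition, here is a minimal sketch of paraphrasing a single caption with DIPPER, following its public model card. The control codes and decoding settings below are illustrative assumptions; the exact perturbation settings we used are described in Appendix D of the paper.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# DIPPER is a T5-XXL-based paraphraser (kalpeshk2011/dipper-paraphraser-xxl).
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")
model = T5ForConditionalGeneration.from_pretrained("kalpeshk2011/dipper-paraphraser-xxl")

caption = "a photo of a cat sitting on a red couch"
# Control codes set how aggressively DIPPER rewrites the text
# (multiples of 20 from 0 to 100); 40/40 is an illustrative choice.
prompt = f"lexical = 40, order = 40 <sent> {caption} </sent>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.75, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```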
We followed the respective official codebases for evaluation with PickScore, HPSv2.1, and ImageReward; a condensed PickScore example follows the links below.
- PickScore - https://github.com/yuvalkirstain/PickScore?tab=readme-ov-file#inference-with-pickscore
- HPSv2.1 - https://github.com/tgxs002/HPSv2?tab=readme-ov-file#image-comparison
- ImageReward - https://github.com/THUDM/ImageReward
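The condensed sketch below is adapted from the inference example in the PickScore repository linked above; treat it as a starting point rather than our exact evaluation script.

```python
import torch
from transformers import AutoModel, AutoProcessor

device = "cuda"
processor = AutoProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
model = AutoModel.from_pretrained("yuvalkirstain/PickScore_v1").eval().to(device)

@torch.no_grad()
def pickscore(prompt, pil_images):
    # Preprocess the candidate images and the prompt with the CLIP processor.
    image_inputs = processor(images=pil_images, return_tensors="pt").to(device)
    text_inputs = processor(text=prompt, padding=True, truncation=True,
                            max_length=77, return_tensors="pt").to(device)
    # Embed, normalize, and score; a higher score means more preferred.
    image_embs = model.get_image_features(**image_inputs)
    image_embs = image_embs / image_embs.norm(dim=-1, keepdim=True)
    text_embs = model.get_text_features(**text_inputs)
    text_embs = text_embs / text_embs.norm(dim=-1, keepdim=True)
    return (model.logit_scale.exp() * text_embs @ image_embs.T)[0]
```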
We calculate CLIPScore using TorchMetrics.
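A minimal sketch of the TorchMetrics call; the backbone shown is the TorchMetrics default, so swap in whichever CLIP variant you want to report.

```python
import torch
from torchmetrics.multimodal.clip_score import CLIPScore

# TorchMetrics' default CLIP backbone; pass a different model_name_or_path
# to score with another variant.
metric = CLIPScore(model_name_or_path="openai/clip-vit-large-patch14")

# CLIPScore accepts uint8 image tensors of shape (N, C, H, W) in [0, 255];
# the random tensors here stand in for generated images.
images = torch.randint(0, 256, (2, 3, 512, 512), dtype=torch.uint8)
captions = ["a photo of a cat", "a photo of a dog"]
print(metric(images, captions))  # mean CLIPScore over the batch
```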
We evaluate our model with GenEval, following the official repository:
- GenEval - https://github.com/djghosh13/geneval
Our work can be found via our Hugging Face Hub organization: https://huggingface.co/DualCPO.
We thank the Research Computing (RC) at Arizona State University (ASU) and cr8dl.ai for their generous support in providing computing resources. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
```bibtex
@misc{saeidi2025dualcaptionpreferenceoptimization,
  title={Dual Caption Preference Optimization for Diffusion Models},
  author={Amir Saeidi and Yiran Luo and Agneet Chatterjee and Shamanthak Hegde and Bimsara Pathiraja and Yezhou Yang and Chitta Baral},
  year={2025},
  eprint={2502.06023},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2502.06023},
}
```