
CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation (ICML 2025)

πŸ“ Paper β€’ πŸ’‘ ηŸ₯乎 β€’ πŸ’» Github

CHATS is a next-generation framework that unifies human preference alignment with classifier-free guidance (CFG). It models both the preferred and dispreferred distributions and uses a proxy-prompt-based sampling strategy at test time, yielding superior text–image alignment, fidelity, and aesthetic consistency. See the images below for examples.

[Figure: generation examples using CHATS (cf. Fig. 1 in our paper).]

🚀 Key Features

  • Human-Aligned Fine-Tuning with CFG Integration
    Integrates human preference alignment and classifier-free guidance sampling into a unified framework.

  • Proxy-Prompt Sampling
    Leverages useful signals from both the preferred and dispreferred distributions at test time (see the sketch after this list).

  • Data Efficiency
    Achieves state-of-the-art results across benchmarks with minimal fine-tuning on a small, high-quality dataset.

  • Plug-and-Play
    Compatible with any diffusion backbone and existing guidance methods.
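
For intuition, here is a minimal sketch of the test-time combination: two denoiser branches, trained toward the preferred and dispreferred distributions respectively, are mixed in a CFG-style update. The function and variable names (proxy_prompt_guidance, unet_pref, unet_dispref) are illustrative assumptions, not the exact formulation from the paper.

import torch

def proxy_prompt_guidance(eps_pref: torch.Tensor,
                          eps_dispref: torch.Tensor,
                          guidance_scale: float) -> torch.Tensor:
    # CFG-style mixing, with the dispreferred branch assumed to play
    # the role that the unconditional branch plays in vanilla CFG.
    return eps_dispref + guidance_scale * (eps_pref - eps_dispref)

# Hypothetical use inside a sampling loop:
#   eps_pref    = unet_pref(latents, t, prompt_embeds)     # preferred branch
#   eps_dispref = unet_dispref(latents, t, prompt_embeds)  # dispreferred branch
#   noise_pred  = proxy_prompt_guidance(eps_pref, eps_dispref, guidance_scale=5.0)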


📦 Installation

git clone https://github.com/AIDC-AI/CHATS.git
cd CHATS
pip install -r requirements.txt

📂 Model Checkpoints

We provide pretrained CHATS checkpoints on SDXL for easy download and evaluation:

  • Model Repository: AIDC-AI/CHATS on Hugging Face (https://huggingface.co/AIDC-AI/CHATS)
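
To cache the weights locally before loading (e.g. on a machine without interactive access), the standard huggingface_hub downloader works; the repo id comes from the model page, while local_dir is an illustrative choice:

from huggingface_hub import snapshot_download

# Download every file in the CHATS model repo; local_dir is optional
# and shown here only as an example target directory.
local_path = snapshot_download("AIDC-AI/CHATS", local_dir="./chats-sdxl")
print(f"Checkpoint files downloaded to: {local_path}")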

πŸ› οΈ Quick Start

import torch
from pipeline import ChatsSDXLPipeline

# Load CHATS-SDXL pipeline
pipe = ChatsSDXLPipeline.from_pretrained(
    "AIDC-AI/CHATS",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Generate images
images = pipe(
    prompt=["A serene mountain lake at sunset"],
    num_inference_steps=50,
    guidance_scale=5,
    seed=0
)

# Save outputs
for i, img in enumerate(images):
    img.save(f"output_{i}.png")
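
On smaller GPUs, the usual Diffusers memory helpers should apply, assuming ChatsSDXLPipeline inherits from diffusers' DiffusionPipeline (an assumption about the custom class, not something documented above):

# Assumption: ChatsSDXLPipeline inherits diffusers' DiffusionPipeline,
# so the standard memory helpers are available.
pipe.enable_model_cpu_offload()  # stream submodules to the GPU on demand
pipe.enable_vae_slicing()        # decode latents in slices to reduce peak VRAM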

πŸ‹οΈ Training

To train CHATS from scratch or fine-tune on your own data, run:

accelerate launch --config_file=config/ac_ds_8gpu_zero0.yaml train.py \
        --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
        --pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \
        --resolution=1024 \
        --dataloader_num_workers=16 \
        --train_batch_size=1 \
        --gradient_accumulation_steps=16 \
        --max_train_steps=6000 \
        --learning_rate=3e-09 --scale_lr --lr_scheduler=constant_with_warmup --lr_warmup_steps=100 \
        --mixed_precision=bf16 \
        --allow_tf32 \
        --checkpointing_steps=100 \
        --output_dir=output \
        --resume_from_checkpoint=latest \
        --use_adafactor \
        --gradient_checkpointing \
        --dataset_name=data-is-better-together/open-image-preferences-v1-binarized

Args:

  • config_file: the Accelerate/DeepSpeed configuration file. To change the number of GPUs used for training, set num_processes in the ac_ds_xgpu_zero0.yaml file to the desired GPU count.
  • pretrained_model_name_or_path: name or path of the UNet model to load
  • pretrained_vae_model_name_or_path: name or path of the VAE model to load
  • max_train_steps: maximum number of training steps
  • output_dir: output directory for checkpoints and logs
  • dataset_name: the Hugging Face repo id of the training dataset (e.g. OIP)
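
With the settings above, the effective batch size is train_batch_size × gradient_accumulation_steps × num_processes. A quick sanity check, assuming the 8-GPU count from ac_ds_8gpu_zero0.yaml:

# Effective batch size under the example configuration above
train_batch_size = 1             # per-GPU micro-batch
gradient_accumulation_steps = 16
num_processes = 8                # GPU count from ac_ds_8gpu_zero0.yaml

effective_batch_size = train_batch_size * gradient_accumulation_steps * num_processes
print(effective_batch_size)      # 128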

📚 Citation

If you use CHATS, please cite our ICML 2025 paper:

@inproceedings{fu2025chats,
  title={CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation},
  author={Fu, Minghao and Wang, Guo-Hua and Cao, Liangfu and Chen, Qing-Guo and Xu, Zhao and Luo, Weihua and Zhang, Kaifu},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2025}
}

πŸ™ Acknowledgments

The code is built upon DiffusionDPO, Diffusers, and Transformers.

📄 License

The project is released under the Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0, SPDX-License-Identifier: Apache-2.0).

🚨 Disclaimer

We used compliance-checking algorithms during training to ensure, to the best of our ability, that the trained model is compliant. Due to the complexity of the data and the diversity of model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.
