---
license: cc-by-4.0
task_categories:
  - image-classification
pretty_name: Community Forensics
configs:
  - config_name: default
    data_files:
      - split: Systematic
        path:
          - data/systematic/*.parquet
      - split: Manual
        path:
          - data/manual/*.parquet
      - split: PublicEval
        path:
          - data/publicEval/*.parquet
      - split: Commercial
        path:
          - data/commercial/*.parquet
tags:
  - image
size_categories:
  - 1M<n<10M
language:
  - en
---

Community Forensics: Using Thousands of Generators to Train Fake Image Detectors (CVPR 2025)

Paper/Project Page

We are currently working on releasing a smaller version of our dataset paired with redistributable "real" data for easier prototyping.

Changes:
04/09/25: Initial version released.

Dataset Summary

  • The Community Forensics dataset is intended for developing and benchmarking forensics methods that detect or analyze AI-generated images. It contains 2.7M generated images collected from 4803 generator models.

Supported Tasks

  • Image Classification: identify whether the given image is AI-generated. We mainly study this task in our paper, but other tasks may be possible with our dataset.

Dataset Structure

Data Instances

Our dataset is stored as Parquet files in which each row has the following structure:

{
  "image_name": "00000162.png",
  "format": "PNG",
  "resolution": "[512, 512]",
  "mode": "RGB",
  "image_data": "b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\...",
  "model_name": "stabilityai/stable-diffusion-2", 
  "nsfw_flag": False,
  "prompt": "montreal grand prix 2018 von icrdesigns",
  "real_source": "LAION",
  "subset": "Systematic",
  "split": "train",
  "label": "1"
}

Data Fields

image_name: Filename of an image.
format: PIL image format.
resolution: Image resolution.
mode: PIL image mode (e.g., RGB).
image_data: Image data in byte format; it can be read using Python's BytesIO (see the sketch after this list).
model_name: Name of the model used to sample this image. Has format {author_name}/{model_name} for Systematic subset, and {model_name} for other subsets.
nsfw_flag: NSFW flag determined using Stable Diffusion Safety Checker.
prompt: Input prompt (if one exists).
real_source: Paired real dataset(s) that were used to source the prompts or to train the generators.
subset: Denotes which subset the image belongs to (Systematic: Hugging Face models, Manual: manually downloaded models, Commercial: commercial models).
split: Train/test split.
label: Fake/Real label. (1: Fake, 0: Real)
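
For reference, here is a minimal sketch of decoding the fields of a single record. The decode_record helper is hypothetical and only illustrates the field semantics described above; it assumes a record loaded as shown in the usage examples below.

import io
import PIL.Image as Image

def decode_record(data):
    # `image_data` holds the raw encoded image bytes; decode them with PIL.
    img = Image.open(io.BytesIO(data["image_data"]))
    # `label` is 1 for generated ("fake") images and 0 for real images.
    is_fake = int(data["label"]) == 1
    # `model_name` and `prompt` describe how the image was sampled.
    return img, is_fake, data["model_name"], data["prompt"]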

Data splits

Systematic: Systematically collected subset of the data (images sampled from models automatically downloaded from Hugging Face)
Manual: Manually downloaded subset of the data
Commercial: Commercial models subset
PublicEval: Evaluation set where generated images are paired with COCO or FFHQ for license-compliant redistribution. Note that these are not the "source" datasets used to sample the generated images.

Usage examples

Default train/eval settings:

import datasets as ds
import PIL.Image as Image
import io

# default training set
commfor_train = ds.load_dataset("OwensLab/CommunityForensics", split="Systematic+Manual", cache_dir="~/.cache/huggingface/datasets")
commfor_eval = ds.load_dataset("OwensLab/CommunityForensics", split="PublicEval", cache_dir="~/.cache/huggingface/datasets")

# optionally shuffle the dataset
commfor_train = commfor_train.shuffle(seed=123, writer_batch_size=3000)

for i, data in enumerate(commfor_train):
  img, label = Image.open(io.BytesIO(data['image_data'])), data['label']
  ## Your operations here ##
  # e.g., img_torch = torchvision.transforms.functional.pil_to_tensor(img)

Note:

  • Downloading and indexing the data can take some time, but only on the first run. Downloading may use up to 2.2 TB of disk space (1.1 TB of data plus 1.1 TB of re-indexed Arrow files).
  • It is possible to randomly access data by passing an index (e.g., commfor_train[10], commfor_train[247]); see the short sketch after these notes.
  • It may be wise to set cache_dir to another directory if space in your home directory is limited. By default, the data is downloaded to ~/.cache/huggingface/datasets.
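
As a quick illustration of random access (assuming commfor_train has been loaded as in the example above; the index is arbitrary):

sample = commfor_train[247]                         # random access by index
img = Image.open(io.BytesIO(sample["image_data"]))  # decode the image bytes
print(sample["model_name"], sample["label"], img.size)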

It is also possible to use streaming for some use cases (e.g., downloading only a certain subset or a small portion of data).

import datasets as ds
import PIL.Image as Image
import io

# full data streaming
commfor_train_stream = ds.load_dataset("OwensLab/CommunityForensics", split='Systematic+Manual', streaming=True)

# streaming only the evaluation set
commfor_eval_stream = ds.load_dataset("OwensLab/CommunityForensics", split='PublicEval', streaming=True)

# streaming only 10% of training data. Note that this does not contain the full set of models!
commfor_train_stream_10p = ds.load_dataset("OwensLab/CommunityForensics", split='Systematic[:10%]+Manual[:10%]', streaming=True)

# optionally shuffle the streaming dataset
commfor_train_stream_10p = commfor_train_stream_10p.shuffle(seed=123, buffer_size=3000)

# usage example
for i, data in enumerate(commfor_train_stream_10p):
  img, label = Image.open(io.BytesIO(data['image_data'])), data['label']
  ## Your operations here ##
  # e.g., img_torch = torchvision.transforms.functional.pil_to_tensor(img)
  

Please check the Hugging Face documentation for more usage examples.

Training fake image classifiers

For training a fake image classifier, it is necessary to pair the generated images with "real" images (here, "real" refers to images that are not generated by AI). In our paper, we used 11 different image datasets (LAION, ImageNet, COCO, FFHQ, CelebA, MetFaces, AFHQ-v2, Forchheim, IMD2020, Landscapes HQ, and VISION) to sample the generators and to train the classifiers. To accurately reproduce our training settings, it is necessary to download all of these datasets and pair them with the generated images. We understand that this may be inconvenient for simple prototyping, and we are therefore also working on releasing a smaller subset of our dataset paired with datasets whose licenses allow redistribution (e.g., COCO, FFHQ).
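
As a rough sketch of such a pairing (not the exact pipeline from the paper), the example below interleaves the generated images with a local folder of real images to form a balanced binary training set. The real-image directory, the 50/50 interleaving, and the RealFakeDataset wrapper are illustrative assumptions.

import io
import os

import datasets as ds
import PIL.Image as Image
import torchvision.transforms as T
from torch.utils.data import DataLoader, Dataset

REAL_DIR = "/path/to/real_images"  # hypothetical local folder of non-AI-generated images

fake_ds = ds.load_dataset("OwensLab/CommunityForensics", split="Systematic+Manual")

class RealFakeDataset(Dataset):
    """Interleaves generated images (label 1) with real images (label 0)."""
    def __init__(self, fake_ds, real_dir, transform):
        self.fake_ds = fake_ds
        self.real_paths = sorted(os.path.join(real_dir, f) for f in os.listdir(real_dir))
        self.transform = transform

    def __len__(self):
        # One real image for every generated image, capped by the smaller collection.
        return 2 * min(len(self.fake_ds), len(self.real_paths))

    def __getitem__(self, idx):
        if idx % 2 == 0:  # even indices: generated image from Community Forensics
            record = self.fake_ds[idx // 2]
            img, label = Image.open(io.BytesIO(record["image_data"])), 1
        else:             # odd indices: real image from the local folder
            img, label = Image.open(self.real_paths[idx // 2]), 0
        return self.transform(img.convert("RGB")), label

transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
loader = DataLoader(RealFakeDataset(fake_ds, REAL_DIR, transform), batch_size=32, shuffle=True)

for images, labels in loader:
    pass  # train your classifier on (images, labels) here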

Dataset Creation

Curation Rationale

This dataset was created to address the limited model diversity of existing datasets for generated-image detection. While some existing datasets contain millions of images, they are typically sampled from only a handful of generator models. We instead sample 2.7M images from 4803 generator models, approximately 34 times more generators than the most extensive previous dataset that we are aware of.

Collection Methodology

We collect generators in three different subgroups. (1) We systematically download and sample open source latent diffusion models from Hugging Face. (2) We manually sample open source generators with various architectures and training procedures. (3) We sample from both open and closed commercially available generators.

Personal and Sensitive Information

The dataset does not contain any sensitive identifying information (i.e., it does not contain data that reveals information such as racial or ethnic origin, sexual orientation, or religious or political beliefs).

Considerations of Using the Data

Social Impact of Dataset

This dataset may be useful to researchers developing and benchmarking forensics methods. Such methods may help users better understand a given image. However, we believe the classifiers, at least the ones that we have trained or benchmarked, still show error rates that are far too high to be used directly in the wild, which could lead to unwanted consequences (e.g., falsely accusing an author of creating fake images, or allowing generated content to be certified as real).

Discussion of Biases

The dataset was primarily sampled using LAION captions. This may introduce biases present in web-scale data (e.g., favoring photos of humans over other categories of photos). In addition, the vast majority of the generators we collect are derivatives of Stable Diffusion, which may bias detectors toward certain types of generators.

Other Known Limitations

The generative models are sourced from the community and may produce inappropriate content. While in many contexts it is important to detect such images, the generated images may require further scrutiny before being used in other downstream applications.

Additional Information

Acknowledgement

We thank the creators of the many open source models that we used to collect the Community Forensics dataset. We thank Chenhao Zheng, Cameron Johnson, Matthias Kirchner, Daniel Geng, Ziyang Chen, Ayush Shrivastava, Yiming Dou, Chao Feng, Xuanchen Lu, Zihao Wei, Zixuan Pan, Inbum Park, Rohit Banerjee, and Ang Cao for the valuable discussions and feedback. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0123.

Licensing Information

We release the dataset under a CC BY 4.0 license for research purposes only. In addition, we note that each image in this dataset was generated by a model with its own respective license. We therefore provide metadata for all models present in our dataset along with their license information. The vast majority of the generators use the CreativeML OpenRAIL-M license. Please refer to the metadata for detailed licensing information relevant to your specific application.

Citation Information

@misc{park2024communityforensics,
    title={Community Forensics: Using Thousands of Generators to Train Fake Image Detectors}, 
    author={Jeongsoo Park and Andrew Owens},
    year={2024},
    eprint={2411.04125},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2411.04125}, 
}