Here are my personal standards on the segments that I am doing:
* No pedophilic or X-rated content
* No text, no signatures, no watermarks
* No borders, frames, multi-panel images, or character sheets
* No amateur hour: no blurry images, no bad digitization
* No chibi, no "sticker" style, no thick black outlines
* No pure CGI renders. Just because it "looks good" doesn't mean it belongs

What is this?

This hosts a collaborative effort to clean up the mess that is the Danbooru dataset. The Danbooru dataset is a wealth of (mostly) free anime-style images that are already individually tagged! The problem is that images are included indiscriminately. Some are explicitly copyrighted and shouldn't be in it. Some have watermarks. Some have legally questionable subject matter. Some just frankly aren't good. But the rest... are really good.

So, this is a public effort to pull out just the clean, AI-training-usable images. Note that we do not remove signatures or watermarks from an image; we exclude that image entirely from the set.

We are working from https://huggingface.co/datasets/animelover/danbooru2022/

which has the advantage of containing images that have already been somewhat appropriately resized. So we copy a block of images and delete the "bad" ones, one zip file at a time. (See the "STANDARDS.txt" file, excerpted above, for suggested criteria for deleting images.)

There are literally hundreds of blocks of data, each block holding perhaps 4000-5000 images, so this is an organized, crowd-sourced volunteer effort.
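If you want to claim a block, a minimal sketch of fetching one archive from the source dataset with huggingface_hub is below. The archive name "data-0000.zip" is an assumption for illustration; check the source repo's file listing for how the blocks are actually named.

```python
# A minimal sketch, assuming the source repo stores blocks as zip archives.
# The filename "data-0000.zip" is a placeholder; check the repo's file list.
import zipfile
from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="animelover/danbooru2022",
    repo_type="dataset",
    filename="data-0000.zip",  # hypothetical block name
)

# Unpack locally so the images can be reviewed and culled by hand.
with zipfile.ZipFile(archive) as zf:
    zf.extractall("data-0000")
```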

Training tips:

While the images should be "usable" for AI training, and they are all individually tagged with .txt files, if you just dumped them all into a trainer, the results would probably not be coherent. You will probably want to use an image directory browser, select a group of them by either subject matter or style, and pull those specific ones out for training. (A scripted version of that selection is sketched below.)
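As a scripted alternative to browsing by hand, here is a minimal sketch that selects images whose sidecar .txt file contains a set of wanted tags. It assumes the common layout where each image has a same-named .txt file holding comma-separated tags; adjust the parsing if the tag files are formatted differently.

```python
# A minimal sketch, assuming each image has a same-named sidecar .txt file
# containing comma-separated tags (a common danbooru training layout).
from pathlib import Path

def collect_by_tags(root: str, wanted: set[str]) -> list[Path]:
    """Return image paths whose tag file contains every tag in `wanted`."""
    keep = []
    for txt in Path(root).rglob("*.txt"):
        tags = {t.strip() for t in txt.read_text(encoding="utf-8").split(",")}
        img = txt.with_suffix(".jpg")
        if wanted <= tags and img.exists():
            keep.append(img)
    return keep

# Example: pull out everything carrying both of two example tags.
subset = collect_by_tags("data-0000", {"1girl", "watercolor (medium)"})
print(f"{len(subset)} images matched")
```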

You may also want to add your own custom tags on top of the ones that are already there, as in the sketch below.
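For instance, a minimal sketch of appending one custom trigger tag to every tag file, again assuming comma-separated sidecar .txt files; the tag name "mystyle" is a hypothetical example.

```python
# A minimal sketch: append a custom tag to every sidecar .txt file,
# again assuming comma-separated tag lists.
from pathlib import Path

def add_custom_tag(root: str, tag: str) -> None:
    for txt in Path(root).rglob("*.txt"):
        tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
        if tag not in tags:
            tags.append(tag)
            txt.write_text(", ".join(tags), encoding="utf-8")

add_custom_tag("data-0000", "mystyle")  # "mystyle" is a hypothetical trigger tag
```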

Currently spoken-for segments:

  • 0000-0010: ppbrown (data-0001 - data-0003 complete)
  • 0010: complete
  • 0040: complete