---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---

# Dataset Card for Describable Textures Dataset (DTD)

<!-- Provide a quick summary of the dataset. -->

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The Describable Textures Dataset (DTD) is a texture classification dataset consisting of 5,640 images categorized into 47 texture classes based on human perception. Each image is labeled with a primary texture category (key attribute) and may have additional joint attributes representing secondary textures. The dataset is divided into three equal splits (train, validation, test) with 40 images per class per split.

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/dtd/
- **Paper:** Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., & Vedaldi, A. (2014). Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3606-3613).

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each sample in the dataset contains:

- **image:** a variable-sized RGB image
- **label:** a categorical label representing the texture class

**Total images:** 5,640

**Classes:** 47 (e.g., banded, blotchy, chequered, cracked, dotted, grid, lined, marbled, porous, striped)

**Splits:**

- **Train:** 1,880 images (40 per class)
- **Validation:** 1,880 images (40 per class)
- **Test:** 1,880 images (40 per class)

**Image specs:** variable sizes (300×300 to 640×640 pixels), RGB
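As a quick sanity check, the split sizes above follow directly from the class count: 47 classes × 40 images per class gives 1,880 images per split, and three equal splits give 5,640 images in total. A minimal sketch:

```python
# Sanity-check the split arithmetic stated above.
num_classes = 47
images_per_class_per_split = 40
num_splits = 3

images_per_split = num_classes * images_per_class_per_split
total_images = images_per_split * num_splits

print(images_per_split)  # 1880
print(total_images)      # 5640
```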
## Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library.

```python
from datasets import load_dataset

# Load the dataset (pick one split)
dataset = load_dataset("../../aidatasets/images/dtd.py", split="train", trust_remote_code=True)
# dataset = load_dataset("../../aidatasets/images/dtd.py", split="validation", trust_remote_code=True)
# dataset = load_dataset("../../aidatasets/images/dtd.py", split="test", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]

image.show()  # Display the image (PIL)
print(f"Label: {label}")
```

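The `label` field in the example above is an integer index. Assuming the loading script exposes it as a `ClassLabel` feature, the index can be mapped back to a texture name with `dataset.features["label"].int2str(label)`. The lookup itself is just list indexing; here is a minimal stand-alone sketch using a hypothetical, truncated name list:

```python
# Hypothetical, truncated stand-in for dataset.features["label"].names;
# the real list holds all 47 DTD texture names in the script's order.
texture_names = ["banded", "blotchy", "chequered", "cracked", "dotted"]

def id_to_name(label_id: int, names=texture_names) -> str:
    """Map an integer label id to its human-readable texture name."""
    return names[label_id]

print(id_to_name(2))  # chequered
```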
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@inproceedings{cimpoi2014describing,
  title={Describing textures in the wild},
  author={Cimpoi, Mircea and Maji, Subhransu and Kokkinos, Iasonas and Mohamed, Sammy and Vedaldi, Andrea},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3606--3613},
  year={2014}
}
```