Vittorio Pippi committed
Commit · 036a76e
Parent(s): 839c964
Initial commit

README.md CHANGED
  - split: train
    path: tars/train/*.tar
  - split: fine_tune
    path: tars/fine_tune/*.tar
language:
- en
---

# Accessing the `font-square-v2` Dataset on Hugging Face

The `font-square-v2` dataset is hosted on Hugging Face at [blowing-up-groundhogs/font-square-v2](https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2). It is stored in WebDataset format, with its tar files split into two folders:

- **tars/train/**: Contains tar shards named `{000..499}.tar` for the main training split.
- **tars/fine_tune/**: Contains tar shards named `{000..049}.tar` for optional fine-tuning.

Each tar file includes multiple samples; each sample consists of:

- An **RGB** image file (`.rgb.png`)
- A **black-and-white** image file (`.bw.png`)
- A **JSON** file (`.json`) with metadata (such as the text and writer ID)

You can access the dataset either by downloading it locally or by streaming it directly over HTTP.
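
WebDataset groups consecutive tar members that share a basename into one sample, keyed by the remaining extension — which is why samples are later indexed as `sample["rgb.png"]`, `sample["bw.png"]`, and `sample["json"]`. A minimal standard-library sketch of that grouping (the file names here are illustrative, not actual member names from the dataset):

```python
from itertools import groupby

# Hypothetical tar member names: three files per sample, sharing a basename.
names = ["0001.rgb.png", "0001.bw.png", "0001.json",
         "0002.rgb.png", "0002.bw.png", "0002.json"]

def basename(name):
    # Everything before the first dot identifies the sample.
    return name.split(".", 1)[0]

samples = [
    {n.split(".", 1)[1]: n for n in group}  # key by the remaining extension
    for _, group in groupby(names, key=basename)
]

print(sorted(samples[0]))  # ['bw.png', 'json', 'rgb.png']
```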

---

## Table of Contents

1. [Dataset Creation](#dataset-creation)
2. [Downloading the Dataset Locally](#1-downloading-the-dataset-locally)
   - [Using Git LFS](#using-git-lfs)
   - [Using the huggingface_hub Python Library](#using-the-huggingface_hub-python-library)
   - [Using WebDataset with the Local Files](#using-webdataset-with-the-local-files)
3. [Streaming the Dataset Directly Over HTTP](#2-streaming-the-dataset-directly-over-http)
4. [Additional Considerations](#additional-considerations)

---

## Dataset Creation

### Text Content Sampling

This synthetic dataset comprises images of text lines superimposed on diverse backgrounds. First, text lines are generated by sampling sentences from multiple English corpora available via the [NLTK](https://www.nltk.org/) library (for example, `abc`, `brown`, `genesis`, `inaugural`, `state_union`, and `webtext`).

To better represent both common and rare characters, a **rarity-based weighting** strategy is used. Each word is assigned a weight based on the frequency of its individual characters (unigrams) and pairs of characters (bigrams). Words containing less frequent character patterns receive higher weights, so that they are sampled more often during dataset creation.

Below is an example Python function used to compute the weight of a word:

```python
from itertools import pairwise

def word_weight(word):
    # u_counts and b_counts are dictionaries storing
    # unigram and bigram counts over the entire corpus.

    # Compute the unigram score
    u_score = 0
    for c in word:
        u_score += u_counts[c]
    u_score /= len(word)

    # Compute the bigram score (with boundary spaces around the word)
    bigrams = list(pairwise(' ' + word + ' '))
    b_score = 0
    for b in bigrams:
        b_score += b_counts[''.join(b)]
    b_score /= len(bigrams)

    # Return the average of the two scores
    return (u_score + b_score) / 2
```

By sampling words using these computed weights, the resulting text exhibits a balanced distribution of characters, allowing the model to learn from both common and rare patterns.
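
Per-word weights like these can drive sentence construction with weighted sampling from the standard library. A minimal sketch — the word list and weight values below are made up for illustration, with rarer character patterns given the higher weights as described above:

```python
import random

# Illustrative words and precomputed weights (not real corpus values):
# rarer character patterns receive higher weights.
words = ["the", "quick", "jackdaw", "vex"]
weights = [0.1, 0.4, 0.8, 0.9]

rng = random.Random(0)  # seeded for reproducibility
sampled = rng.choices(words, weights=weights, k=5)

print(sampled)
```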

### Text Line Rendering

After a sentence is sampled, the image is created in two main steps:

1. **Text Image Generation**:
   - A font is selected from a list of 100,000 fonts.
   - The text is rendered on a white background to produce an initial grayscale text image.

2. **Background and Final Image Creation**:
   - A background image is chosen from a collection of realistic textures (such as paper, wood, or walls).
   - A transparency value is randomly selected between 0.5 and 1.
   - Random transformations (including rotation, warping, Gaussian blur, dilation, and color jitter) are applied to the text image.
   - Minor transformations (such as dilation, color jitter, and random inversion) are applied to the background image.
   - Finally, the processed text image is superimposed onto the background image using the chosen transparency, resulting in the final synthetic RGB image.

In total, this process yields approximately 2.2 million text images. Each image has a fixed height of 64 pixels, while the width varies in proportion to the length of the text. Metadata for each image is stored in the accompanying JSON file.
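
The superimposition step amounts to per-pixel alpha blending. A minimal numeric sketch, assuming grayscale pixel values in [0, 255] and a transparency value in the [0.5, 1] range described above (the function name and values are illustrative):

```python
def superimpose(text_px, bg_px, alpha):
    # Blend a text pixel over a background pixel; higher alpha makes
    # the rendered text more opaque relative to the background.
    return round(alpha * text_px + (1 - alpha) * bg_px)

# Dark ink (0) over light paper (200) at alpha = 0.75:
print(superimpose(0, 200, 0.75))  # 50
```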

---

## 1. Downloading the Dataset Locally

You can download the dataset locally using either **Git LFS** or the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub) Python library.

### Using Git LFS

Clone the repository (ensure you have [Git LFS](https://git-lfs.github.com/) installed):

```bash
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
```

This will create a local directory named `font-square-v2` that contains the `tars/` folder with the subdirectories `train/` and `fine_tune/`.

### Using the huggingface_hub Python Library

Alternatively, download a snapshot of the dataset with:

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")

print("Dataset downloaded to:", local_dir)
```

After downloading, you will find the tar shards inside:

- `local_dir/tars/train/{000..499}.tar`
- `local_dir/tars/fine_tune/{000..049}.tar`
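
As a quick sanity check after downloading, you can verify that a split contains the expected number of shards. A standard-library sketch — run here against a temporary directory standing in for `local_dir/tars/train`, so it is self-contained:

```python
import os
import tempfile

def expected_shards(count):
    # Shard names follow the zero-padded {000..N}.tar convention above.
    return [f"{i:03d}.tar" for i in range(count)]

with tempfile.TemporaryDirectory() as d:
    # Simulate a downloaded train split with 500 empty shard files.
    for name in expected_shards(500):
        open(os.path.join(d, name), "w").close()

    found = sorted(n for n in os.listdir(d) if n.endswith(".tar"))
    print(len(found), found[0], found[-1])  # 500 000.tar 499.tar
```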

### Using WebDataset with the Local Files

Once downloaded, you can load the dataset with [WebDataset](https://github.com/webdataset/webdataset). For example, to load the **training split**:

```python
import webdataset as wds
import os

local_dir = "path/to/font-square-v2"  # Update the path if necessary

# Load all training shards (000-499)
train_pattern = os.path.join(local_dir, "tars", "train", "{000..499}.tar")
train_dataset = wds.WebDataset(train_pattern).decode("pil")

for sample in train_dataset:
    rgb_image = sample["rgb.png"]  # PIL image
    bw_image = sample["bw.png"]    # PIL image
    metadata = sample["json"]

    print("Training sample metadata:", metadata)
    break
```

Similarly, load the **fine-tune** split with:

```python
fine_tune_pattern = os.path.join(local_dir, "tars", "fine_tune", "{000..049}.tar")
fine_tune_dataset = wds.WebDataset(fine_tune_pattern).decode("pil")
```
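
WebDataset expands the `{000..499}` range notation itself (via the `braceexpand` package), so you never need to list shards by hand. As an illustration, the equivalent expansion in plain Python (the helper name is ours, not part of any library):

```python
def expand_shards(prefix, start, end, width=3):
    # Mimics what {000..499}-style brace expansion produces:
    # one zero-padded shard name per index in the range.
    return [f"{prefix}{i:0{width}d}.tar" for i in range(start, end + 1)]

shards = expand_shards("tars/train/", 0, 499)
print(len(shards), shards[0], shards[-1])
```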

---

## 2. Streaming the Dataset Directly Over HTTP

If you prefer not to download all shards, you can stream them directly over HTTP from the Hugging Face CDN, provided the tar files are public.

For example, if the training tar shards are available at:

```
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/train/000.tar
...
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/train/499.tar
```

you can stream them as follows:

```python
import webdataset as wds

url_pattern = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main"
    "/tars/train/{000..499}.tar"
)

dataset = wds.WebDataset(url_pattern).decode("pil")

for sample in dataset:
    rgb_image = sample["rgb.png"]
    bw_image = sample["bw.png"]
    metadata = sample["json"]

    print("Sample metadata:", metadata)
    break
```

(Adjust the shard range accordingly for the fine-tune split.)

---

## Additional Considerations

- **Decoding**
  The `.decode("pil")` method in WebDataset converts image bytes into PIL images. If you prefer PyTorch tensors, you can add a transformation step:

  ```python
  import webdataset as wds
  import torchvision.transforms as transforms

  transform = transforms.ToTensor()

  dataset = (
      wds.WebDataset(train_pattern)
      .decode("pil")
      .map(lambda sample: {
          "rgb": transform(sample["rgb.png"]),
          "bw": transform(sample["bw.png"]),
          "metadata": sample["json"]
      })
  )
  ```

- **Shard Naming**
  The naming convention is:

  ```
  tars/
  ├── train/
  │   └── {000..499}.tar
  └── fine_tune/
      └── {000..049}.tar
  ```

  Ensure that your WebDataset pattern matches this folder structure and tar file naming.

By following these instructions, you can easily integrate the `font-square-v2` dataset into your project for training and fine-tuning with a diverse set of synthetic text-line images.