Vittorio Pippi committed · Commit 831cd58 · Parent(s): fcf1697

Initial commit

README.md CHANGED
# Accessing the `font-square-v2` Dataset on Hugging Face

The `font-square-v2` dataset is hosted on Hugging Face at [blowing-up-groundhogs/font-square-v2](https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2). It is stored in WebDataset format, with tar files organized as follows:

- **tars/train/**: Contains `{000..499}.tar` shards for the main training split.
- **tars/fine_tune/**: Contains `{000..049}.tar` shards for fine-tuning.

Each tar file contains multiple samples, where each sample includes:

- An RGB image (`.rgb.png`)
- A black-and-white image (`.bw.png`)
- A JSON file (`.json`) with metadata (e.g. text and writer ID)

For details on how the synthetic dataset was generated, please refer to our paper: [Synthetic Dataset Generation](https://example.com/your-paper).

You can access the dataset either by downloading it locally or by streaming it directly over HTTP.
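Because all three files of a sample share a basename, a WebDataset-style loader can group them back into one record. The sketch below shows that grouping with the standard library only; the member names and the `group_samples` helper are illustrative, not part of the WebDataset API.

```python
from collections import defaultdict

# Hypothetical member names as they might appear inside one shard
members = [
    "00001.rgb.png", "00001.bw.png", "00001.json",
    "00002.rgb.png", "00002.bw.png", "00002.json",
]

def group_samples(names):
    """Group tar member names by shared basename, keyed by extension."""
    samples = defaultdict(dict)
    for name in names:
        stem, ext = name.split(".", 1)  # "00001", "rgb.png"
        samples[stem][ext] = name
    return dict(samples)
```

Iterating a `wds.WebDataset` yields dictionaries of the same shape, one per sample.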
## Table of Contents

1. [Dataset Creation](#dataset-creation)
2. [Downloading the Dataset Locally](#1-downloading-the-dataset-locally)
   - [Using Git LFS](#using-git-lfs)
   - [Using the huggingface_hub Python Library](#using-the-huggingface_hub-python-library)
   - [Using WebDataset with the Local Files](#using-webdataset-with-the-local-files)
3. [Streaming the Dataset Directly Over HTTP](#2-streaming-the-dataset-directly-over-http)
4. [Additional Considerations](#additional-considerations)
## Dataset Creation

### Text Content Sampling

This synthetic dataset comprises images of text lines superimposed on diverse backgrounds. First, text lines are generated by sampling sentences from multiple English corpora available via the [NLTK](https://www.nltk.org/) library (for example, `abc`, `brown`, `genesis`, `inaugural`, `state_union`, and `webtext`).

To better represent both common and rare characters, a **rarity-based weighting** strategy is used. Each word is assigned a weight based on the frequency of its individual characters (unigrams) and pairs of characters (bigrams). Words containing less frequent character patterns receive higher weights, so that they are sampled more often during dataset creation.

Below is an example Python function used to compute the weight of a word:
```python
from itertools import pairwise

def word_weight(word):
    # u_counts and b_counts are dictionaries storing
    # unigram and bigram counts over the entire corpus.

    # Compute the unigram score
    u_score = 0
    for c in word:
        u_score += u_counts[c]
    u_score /= len(word)

    # Compute the bigram score (pad with spaces to capture word boundaries)
    bigrams = list(pairwise(' ' + word + ' '))
    b_score = 0
    for b in bigrams:
        b_score += b_counts[''.join(b)]
    b_score /= len(bigrams)

    # Return the average of the two scores
    return (u_score + b_score) / 2
```

By sampling words using these computed weights, the resulting text exhibits a balanced distribution of characters, allowing the model to learn from both common and rare patterns.

### Text Line Rendering

After a sentence is sampled, the image is created in two main steps:

1. **Text Image Generation**:
   - A font is selected from a list of 100,000 fonts.
   - The text is rendered on a white background to produce an initial grayscale text image.

2. **Background and Final Image Creation**:
   - A background image is chosen from a collection of realistic textures (such as paper, wood, or walls).
   - A transparency value is randomly selected between 0.5 and 1.
   - Random transformations (including rotation, warping, Gaussian blur, dilation, and color jitter) are applied to the text image.
   - Minor transformations (such as dilation, color jitter, and random inversion) are applied to the background image.
   - Finally, the processed text image is superimposed onto the background image using the chosen transparency, resulting in the final synthetic RGB image.
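The final superimposition step can be sketched as a per-pixel alpha blend. This is a minimal illustration under the assumption of straight alpha compositing; the actual pipeline operates on whole images and applies the transformations listed above first.

```python
import random

def superimpose(text_px, bg_px, alpha):
    """Blend one grayscale text pixel over one background pixel.

    alpha is the transparency drawn from [0.5, 1]: at alpha=1 the text
    fully covers the background, at lower values the texture shows through.
    """
    return round(alpha * text_px + (1 - alpha) * bg_px)

alpha = random.uniform(0.5, 1.0)
# Dark text pixel (0) over a light paper-texture pixel (220):
blended = superimpose(0, 220, alpha)
```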
---

## 1. Downloading the Dataset Locally

You can download the dataset locally using either **Git LFS** or the `huggingface_hub` Python library.
### Using Git LFS

Clone the repository (ensure [Git LFS](https://git-lfs.github.com/) is installed):

```bash
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
```

This creates a local directory `font-square-v2` containing the `tars/` folder with the subdirectories `train/` and `fine_tune/`.
### Using the huggingface_hub Python Library

Alternatively, download a snapshot of the dataset:

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")

print("Dataset downloaded to:", local_dir)
```

After downloading, the tar shards are located in:

- `local_dir/tars/train/{000..499}.tar`
- `local_dir/tars/fine_tune/{000..049}.tar`
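The shard filenames are zero-padded to three digits, so the expected file list can be generated directly; this is a quick sanity-check sketch (the variable names are ours, not part of any API):

```python
# Expected shard filenames for each split, matching {000..499} and {000..049}
train_shards = [f"{i:03d}.tar" for i in range(500)]
fine_tune_shards = [f"{i:03d}.tar" for i in range(50)]
```

Comparing these lists against `os.listdir(...)` after a download is an easy way to confirm no shard is missing.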
### Using WebDataset with the Local Files

Once downloaded, you can load the dataset using [WebDataset](https://github.com/webdataset/webdataset). For example, to load the training split:

```python
import webdataset as wds
import os

local_dir = "path/to/font-square-v2"  # Update as needed

# Load training shards
train_pattern = os.path.join(local_dir, "tars", "train", "{000..499}.tar")
train_dataset = wds.WebDataset(train_pattern).decode("pil")

for sample in train_dataset:
    # ...
    break
```

And similarly for the fine-tune split:

```python
fine_tune_pattern = os.path.join(local_dir, "tars", "fine_tune", "{000..049}.tar")
fine_tune_dataset = wds.WebDataset(fine_tune_pattern).decode("pil")
```

## 2. Streaming the Dataset Directly Over HTTP

If you prefer not to download the shards, you can stream them directly from Hugging Face over HTTP (provided the tar files are public). For example, the training shards can be streamed as follows:

```python
import webdataset as wds

url = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2"
    "/resolve/main/tars/train/{000..499}.tar"
)
dataset = wds.WebDataset(url).decode("pil")

for sample in dataset:
    # ...
    break
```
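WebDataset expands the `{000..499}` brace range itself when given a pattern like the one above. For illustration only, the same expansion can be sketched with the standard library (the `expand_braces` helper is ours, not a WebDataset function):

```python
import re

def expand_braces(pattern):
    """Expand a single {AAA..BBB} zero-padded numeric range in a shard pattern."""
    m = re.search(r"\{(\d+)\.\.(\d+)\}", pattern)
    if not m:
        return [pattern]
    lo, hi = m.group(1), m.group(2)
    width = len(lo)  # preserve the zero-padding width
    return [
        pattern[:m.start()] + str(i).zfill(width) + pattern[m.end():]
        for i in range(int(lo), int(hi) + 1)
    ]

urls = expand_braces("tars/train/{000..499}.tar")
```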
## Additional Considerations

- **Decoding:**
  The `.decode("pil")` method in WebDataset converts image bytes into PIL images. To use PyTorch tensors instead, add a transform step:

```python
import torchvision.transforms as transforms

# ... (the remainder of this example is elided in this excerpt)
```
- **Shard Naming:**
  Ensure that your WebDataset pattern matches this folder structure and tar file naming:

```
tars/
├── train/
│   └── {000..499}.tar
└── fine_tune/
    └── {000..049}.tar
```

By following these instructions, you can easily integrate the `font-square-v2` dataset into your project for training and fine-tuning.