Datasets: blowing-up-groundhogs/font-square-v2
Vittorio Pippi committed · Commit 313e268 · Parent: f074040
Initial commit

Files changed:
- README.md +142 -0
- tars/000001.tar +2 -2
- tars/000002.tar +3 -0
- tars/000003.tar +3 -0
- tars/000004.tar +3 -0
- tars/000005.tar +3 -0
- tars/000006.tar +3 -0
- tars/000007.tar +3 -0
- tars/000008.tar +3 -0
- tars/000009.tar +3 -0
README.md CHANGED
@@ -1,3 +1,145 @@
---
license: mit
---

# Accessing the `font-square-v2` Dataset on Hugging Face

The `font-square-v2` dataset is hosted on Hugging Face at [blowing-up-groundhogs/font-square-v2](https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2). The dataset is stored in WebDataset format, with all tar files located in the `tars/` folder of the repository. Each tar file contains multiple samples; each sample includes:

- An RGB image file (with the extension `.rgb.png`)
- A black-and-white image file (with the extension `.bw.png`)
- A JSON file (`.json`) with metadata (e.g. text and writer ID)
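If you want to see this layout for yourself, a minimal sketch (assuming you have already cloned or downloaded the repository, so that a shard such as `tars/000001.tar` exists locally) is:

```python
import tarfile

# Peek at the first few members of one shard; the printed names are examples of the
# <key>.rgb.png / <key>.bw.png / <key>.json grouping described in the list above.
with tarfile.open("font-square-v2/tars/000001.tar") as tar:
    for i, member in enumerate(tar):
        print(member.name)
        if i >= 5:
            break
```

Files that share the same key prefix belong to the same sample; WebDataset groups them back together when iterating.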
You can access the dataset in one of two ways: by downloading it locally or by streaming it directly over HTTP.

---

## 1. Downloading the Dataset Locally

You can download the dataset locally using either Git LFS or the [huggingface_hub](https://huggingface.co/docs/huggingface_hub) Python library.

### **Using Git LFS**

Clone the repository (make sure [Git LFS](https://git-lfs.github.com/) is installed):

```bash
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
```

This creates a local directory named `font-square-v2` containing the `tars/` folder with all the tar shards.

### **Using the huggingface_hub Python Library**

Alternatively, you can download a snapshot of the dataset in your Python code:

```python
from huggingface_hub import snapshot_download

# Download the dataset repository; the local path is returned
local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
```

After downloading, the tar shards are available in the `tars/` subdirectory of `local_dir`.
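If you only need part of the data, `snapshot_download` also accepts file patterns. The snippet below is a sketch, not something prescribed by the dataset; the pattern shown is just an example:

```python
from huggingface_hub import snapshot_download

# Download only selected shards by filtering on their paths
local_dir = snapshot_download(
    repo_id="blowing-up-groundhogs/font-square-v2",
    repo_type="dataset",
    allow_patterns=["tars/00000*.tar"],  # hypothetical subset; adjust to the shards you need
)
```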
### **Using WebDataset with the Local Files**

Once the dataset is downloaded locally, you can load it with [WebDataset](https://github.com/webdataset/webdataset). For example:

```python
import json
import os

import webdataset as wds

# Assuming the dataset was downloaded to `local_dir` and the tar shards are in the 'tars' folder
local_dir = "path/to/font-square-v2"  # Update this path if necessary
tar_pattern = os.path.join(local_dir, "tars", "{000001..000009}.tar")  # Adjust the range to the shards actually present

# Create a WebDataset
dataset = wds.WebDataset(tar_pattern).decode("pil")

# Example: iterate over a few samples
for sample in dataset:
    # Access sample items
    rgb_image = sample["rgb.png"]  # RGB image (PIL image)
    bw_image = sample["bw.png"]    # black-and-white image (PIL image)
    metadata = sample["json"]      # metadata; most webdataset versions decode .json entries to a dict
    if isinstance(metadata, bytes):
        metadata = json.loads(metadata)  # fall back to manual decoding if raw bytes are returned
    print("Sample metadata:", metadata)

    # Process the images as needed...
    break
```
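For training, you will usually want to shuffle the data rather than read it in storage order. A minimal sketch, assuming the same `tar_pattern` as above (the buffer size is an arbitrary example, and the exact `shardshuffle` argument may vary slightly between webdataset versions):

```python
import webdataset as wds

dataset = (
    wds.WebDataset(tar_pattern, shardshuffle=True)  # shuffle the order of the tar shards
    .shuffle(1000)                                  # keep a 1000-sample buffer for in-memory shuffling
    .decode("pil")
)
```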
---

## 2. Streaming the Dataset Directly Over HTTP

If you prefer to stream the dataset directly from Hugging Face (without downloading the entire dataset), you can use the HTTP URLs provided by the Hugging Face CDN. Make sure the tar files are public and accessible.

For example, if the tar shards are available at:

```
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/000000.tar
https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/000001.tar
...
```

you can set up your WebDataset as follows:

```python
import json

import webdataset as wds

# Define the URL pattern to stream the tar shards directly from Hugging Face
url_pattern = "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/tars/{000000..000010}.tar"
# Adjust the shard range (here 000000 to 000010) to cover all your tar shards.

# Create a WebDataset that streams data
dataset = wds.WebDataset(url_pattern).decode("pil")

# Iterate over a few samples from the streamed dataset
for sample in dataset:
    rgb_image = sample["rgb.png"]
    bw_image = sample["bw.png"]
    metadata = sample["json"]  # metadata; most webdataset versions decode .json entries to a dict
    if isinstance(metadata, bytes):
        metadata = json.loads(metadata)
    print("Sample metadata:", metadata)

    # Process the sample as needed...
    break
```

**Note:** Streaming performance depends on your network connection and the Hugging Face CDN. If you experience slowdowns, consider downloading the dataset locally instead.
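If you would rather not hard-code the brace range, you can build the list of shard URLs from the repository listing instead. This is a sketch using `huggingface_hub`, not part of the original instructions:

```python
import webdataset as wds
from huggingface_hub import HfApi

# List the files in the dataset repository and keep only the tar shards
files = HfApi().list_repo_files("blowing-up-groundhogs/font-square-v2", repo_type="dataset")
urls = [
    f"https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main/{name}"
    for name in sorted(files)
    if name.startswith("tars/") and name.endswith(".tar")
]

# WebDataset also accepts an explicit list of URLs
dataset = wds.WebDataset(urls).decode("pil")
```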
---

## Additional Considerations

- **Decoding:**
  The `.decode("pil")` method in the WebDataset pipeline converts image bytes into PIL images. If you prefer PyTorch tensors, you can add a transformation (see also the batching sketch after this list):

  ```python
  import torchvision.transforms as transforms

  transform = transforms.ToTensor()

  dataset = (
      wds.WebDataset(url_pattern)
      .decode("pil")
      .map(lambda sample: {
          "rgb": transform(sample["rgb.png"]),
          "bw": transform(sample["bw.png"]),
          "metadata": sample["json"],
      })
  )
  ```

- **Shard Naming:**
  Ensure that the naming convention in your `tars/` folder matches the URL pattern used above. Adjust the pattern `{000000..000010}` if your tar files follow a different naming scheme or if there are more shards.
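For completeness, here is a sketch of how the pieces above fit into a standard PyTorch `DataLoader`. It assumes the same `url_pattern` as above and that all images in a batch share the same size (batching requires equal shapes); the batch size and worker count are arbitrary examples:

```python
import torch
import torchvision.transforms as transforms
import webdataset as wds

transform = transforms.ToTensor()

dataset = (
    wds.WebDataset(url_pattern)
    .decode("pil")
    .to_tuple("rgb.png", "bw.png")    # pick the two images in a fixed order
    .map_tuple(transform, transform)  # convert both PIL images to tensors
)

loader = torch.utils.data.DataLoader(dataset, batch_size=16, num_workers=2)

for rgb_batch, bw_batch in loader:
    print(rgb_batch.shape, bw_batch.shape)
    break
```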
---

By following these instructions, you can easily load and work with the `font-square-v2` dataset from Hugging Face in your Python projects using WebDataset.
tars/000001.tar CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5c1e7931a0d408c5e039f4a4cfe0a7a89cd3288169d7c4b6c2cf6bcec5fca34f
+size 259133440

tars/000002.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2d59cea096c53f94bcee78087b9ef17aba46b07a65440e29b66c6901e4df43a
+size 255344640

tars/000003.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15ea27332693d625d47a96254c71ac865e0cf9639bfe4404edd8ab1c97a9c316
+size 245145600

tars/000004.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e21acb30d2c2d0245601fe98857abb4d8c6de2358346360daa13c7ac4cad361
+size 260771840

tars/000005.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8144aafcbc3b70fcb4c5d186d6720e03806ab15e8de54f160be7427b35b0384a
+size 266577920

tars/000006.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acb9e0ba3141642367ac3159166aab950894bdf0c3e398a2b2b16b7a608d1914
+size 254863360

tars/000007.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de2a8fd055bfdfd3e16431714300b01215d9ae39e1c8bc201612f7f271a7b8da
+size 250019840

tars/000008.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0835e6efafced2c6bff96a7e909ce3e5c226a8e2a10e7fd7cf9b9426525af0e8
+size 251371520

tars/000009.tar ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7fb15f6eaa804c32c9f38436d476d881ab1171a417801671c02c5dab38bf2f07
+size 256402528