Commit cb5bdd5
Parent(s): 1809a39
Add PDF to dataset conversion script with examples and documentation
- Add pdf-to-dataset.py script for converting PDF directories to HF datasets
- Include example PDFs for testing
- Add comprehensive README with usage instructions
- Add CLAUDE.md for development notes
- Configure dataset viewer to be disabled
- Set up Git LFS for PDF files
- .gitattributes +1 -0
- .gitignore +6 -0
- CLAUDE.md +59 -0
- README.md +108 -0
- pdf-examples/10.1177_1941738110375910.pdf +3 -0
- pdf-examples/2025.06.11.659105v1.full.pdf +3 -0
- pdf-to-dataset.py +138 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+*.pdf filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,6 @@
+.DS_Store
+__pycache__/
+*.pyc
+.ruff_cache/
+test-dataset/
+test-*-dataset/
CLAUDE.md ADDED
@@ -0,0 +1,59 @@
+# Dataset Creation Scripts - Development Notes
+
+This repository contains UV scripts for creating Hugging Face datasets from local files.
+
+## Important Configuration
+
+### Dataset Viewer
+
+Since these are script repositories (not actual datasets), we should **disable the dataset viewer** to avoid confusion. Add the following to the dataset card YAML header:
+
+```yaml
+---
+viewer: false
+---
+```
+
+This prevents Hugging Face from trying to display the scripts as data, which would be misleading since users are meant to download and run these scripts, not view them as datasets.
+
+Reference: https://huggingface.co/docs/hub/datasets-viewer-configure#disable-the-viewer
+
+## Repository Structure
+
+```
+dataset-creation/
+├── README.md            # User-facing documentation
+├── CLAUDE.md            # Development notes (this file)
+├── pdf-to-dataset.py    # PDF processing script
+├── pdf-examples/        # Test PDFs for development
+└── .gitignore           # Ignore test outputs
+```
+
+## Testing
+
+Test locally with:
+```bash
+uv run pdf-to-dataset.py pdf-examples test-dataset --private
+```
+
+## Future Scripts
+
+Potential additions (when needed):
+- `images-to-dataset.py` - Process image directories
+- `text-to-dataset.py` - Convert text files
+- `audio-to-dataset.py` - Process audio files
+- `json-to-dataset.py` - Structure JSON data
+
+## Design Decisions
+
+1. **Simple is better**: Scripts use built-in dataset loaders where possible
+2. **No GPU required**: These are data preparation scripts, not inference
+3. **Direct upload**: Use `push_to_hub` for simplicity
+4. **Flexible output**: Upload raw objects (PDFs, images) for user processing
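
[Editor's note: taken together, decisions 1 and 3 amount to a two-line core flow. A minimal sketch, using the `pdffolder` loader and `push_to_hub` call that appear in pdf-to-dataset.py below; the repo ID is a placeholder.]

```python
from datasets import load_dataset

# "pdffolder" is the built-in loader the script relies on: it scans a
# directory tree for PDFs and derives labels from subdirectory names.
dataset = load_dataset("pdffolder", data_dir="pdf-examples")

# Direct upload: push the prepared dataset straight to the Hub.
dataset.push_to_hub("username/my-dataset", private=True)
```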
+
+## Maintenance Notes
+
+- Always test scripts with local examples before pushing
+- Keep dependencies minimal
+- Follow UV script best practices from main CLAUDE.md
+- Ensure ruff formatting and linting passes
README.md ADDED
@@ -0,0 +1,108 @@
+---
+viewer: false
+tags: [uv-script, dataset-creation, pdf-processing, document-processing, tool]
+task: other
+language: en
+---
+
+# Dataset Creation Scripts
+
+Ready-to-run scripts for creating Hugging Face datasets from local files.
+
+## Available Scripts
+
+### 📄 pdf-to-dataset.py
+
+Convert directories of PDF files into Hugging Face datasets.
+
+**Features:**
+- 📁 Uploads PDFs as dataset objects for flexible processing
+- 🏷️ Automatic labeling from folder structure
+- 🚀 Zero configuration - just point at your PDFs
+- 📤 Direct upload to Hugging Face Hub
+
+**Usage:**
+```bash
+# Basic usage
+uv run pdf-to-dataset.py /path/to/pdfs username/my-dataset
+
+# Create private dataset
+uv run pdf-to-dataset.py /path/to/pdfs username/my-dataset --private
+
+# Organized by categories (folder structure creates labels)
+# /pdfs/invoice/doc1.pdf → label: "invoice"
+# /pdfs/receipt/doc2.pdf → label: "receipt"
+uv run pdf-to-dataset.py /path/to/organized-pdfs username/categorized-docs
+```
+
+**Output Format:**
+The script creates a dataset where each example contains a `pdf` object that can be processed using the datasets library. Users can then extract text, convert to images, or perform other operations as needed.
+
+```python
+from datasets import load_dataset
+
+# Load your uploaded dataset
+dataset = load_dataset("username/my-dataset")
+
+# Access PDF objects
+pdf = dataset["train"][0]["pdf"]
+```
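
[Editor's note: a sketch of that downstream processing, assuming the decoded `pdf` object behaves like a `pdfplumber` PDF, which the script's `pdfplumber` dependency suggests.]

```python
# Full text, page by page (extract_text can return None for image-only pages)
text = "\n".join(page.extract_text() or "" for page in pdf.pages)

# First page rendered to a PNG via pdfplumber's to_image helper
pdf.pages[0].to_image(resolution=150).save("page1.png")
```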
+
+**Requirements:**
+- Directory containing PDF files
+- Hugging Face account (for uploading)
+- No GPU needed - runs on CPU
+
+## Installation
+
+No installation needed! Just run with `uv`:
+
+```bash
+# Run directly from the Hub
+uv run https://huggingface.co/datasets/uv-scripts/dataset-creation/resolve/main/pdf-to-dataset.py --help
+
+# Or clone and run locally
+git clone https://huggingface.co/datasets/uv-scripts/dataset-creation
+cd dataset-creation
+uv run pdf-to-dataset.py /path/to/pdfs my-dataset
+```
+
+## Authentication
+
+Scripts use Hugging Face authentication, in order of precedence:
+1. Pass a token via the `--hf-token` argument
+2. Set the `HF_TOKEN` environment variable
+3. Use cached credentials from `huggingface-cli login`
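
[Editor's note: the three options in practice; the token value is a placeholder.]

```bash
# 1. Explicit flag
uv run pdf-to-dataset.py ./pdfs username/my-dataset --hf-token hf_xxx

# 2. Environment variable
HF_TOKEN=hf_xxx uv run pdf-to-dataset.py ./pdfs username/my-dataset

# 3. One-time interactive login; credentials are cached afterwards
huggingface-cli login
```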
+
+## Examples
+
+### Create a Dataset from Research Papers
+```bash
+uv run pdf-to-dataset.py ~/Documents/papers username/research-papers
+```
+
+### Organize Documents by Type
+```bash
+# Directory structure:
+# documents/
+# ├── invoices/
+# │   ├── invoice1.pdf
+# │   └── invoice2.pdf
+# └── receipts/
+#     ├── receipt1.pdf
+#     └── receipt2.pdf
+
+uv run pdf-to-dataset.py documents/ username/financial-docs
+# Creates dataset with labels: "invoices" and "receipts"
+```
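
[Editor's note: if the `pdffolder` loader follows the same conventions as the other folder-based builders (e.g. `imagefolder`), the labels land in a `ClassLabel` feature you can inspect after loading; a hedged sketch.]

```python
from datasets import load_dataset

ds = load_dataset("username/financial-docs", split="train")
print(ds.features["label"].names)   # expected: ['invoices', 'receipts']
print(ds[0]["label"])               # integer index into that list
```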
+
+## Tips
+
+- **Large PDFs**: The script handles large PDFs efficiently by uploading them as objects
+- **Organization**: Use subdirectories to automatically create labeled datasets
+- **Privacy**: Use `--private` flag for sensitive documents
+- **Processing**: After upload, use the datasets library to extract text, images, or metadata as needed
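
[Editor's note: for bulk post-processing, `Dataset.map` is the natural fit; a sketch under the same pdfplumber assumption as above. Dropping the raw `pdf` column sidesteps re-encoding it.]

```python
def extract_text(example):
    # Concatenate the text of every page; image-only pages yield None
    pages = example["pdf"].pages
    return {"text": "\n".join(page.extract_text() or "" for page in pages)}

text_ds = dataset["train"].map(extract_text, remove_columns=["pdf"])
```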
+
+## License
+
+MIT
pdf-examples/10.1177_1941738110375910.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bed92d69c0c9e3b7d7573e35d390691e52c6234c906ec1ba2e7d3b1370a1c22e
+size 284205
pdf-examples/2025.06.11.659105v1.full.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adc2af4c6957d05c19abdebc67127868f3c73bf3d6f56ee28e6a658598bcad1f
+size 5193285
pdf-to-dataset.py ADDED
@@ -0,0 +1,138 @@
+# /// script
+# requires-python = ">=3.11"
+# dependencies = [
+#     "datasets",
+#     "huggingface-hub[hf_transfer]",
+#     "pdfplumber",
+# ]
+# ///
+"""
+Convert a directory of PDF files to a Hugging Face dataset.
+
+This script uses the built-in PDF support in the datasets library to create
+a dataset from PDF files. Each PDF is stored as a PDF object that downstream
+users can process as needed (extract text, render pages to images, etc.).
+
+Example usage:
+    # Basic usage - convert PDFs in a directory
+    uv run pdf-to-dataset.py /path/to/pdfs username/my-dataset
+
+    # Create a private dataset
+    uv run pdf-to-dataset.py /path/to/pdfs username/my-dataset --private
+
+    # Organize by subdirectories (creates labels)
+    # folder/invoice/doc1.pdf -> label: invoice
+    # folder/receipt/doc2.pdf -> label: receipt
+    uv run pdf-to-dataset.py /path/to/organized-pdfs username/categorized-pdfs
+"""
+
+import logging
+import os
+import sys
+from argparse import ArgumentParser, RawDescriptionHelpFormatter
+from pathlib import Path
+
+from datasets import load_dataset
+from huggingface_hub import login
+
+logging.basicConfig(
+    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
+)
+logger = logging.getLogger(__name__)
+
+
+def validate_directory(directory: Path) -> int:
+    """Validate directory and count PDF files."""
+    if not directory.exists():
+        raise ValueError(f"Directory does not exist: {directory}")
+
+    if not directory.is_dir():
+        raise ValueError(f"Path is not a directory: {directory}")
+
+    # Count PDFs (including in subdirectories)
+    pdf_count = len(list(directory.rglob("*.pdf")))
+
+    if pdf_count == 0:
+        raise ValueError(f"No PDF files found in directory: {directory}")
+
+    return pdf_count
+
+
+def main():
+    parser = ArgumentParser(
+        description="Convert PDF files to Hugging Face datasets",
+        formatter_class=RawDescriptionHelpFormatter,
+        epilog=__doc__,
+    )
+
+    parser.add_argument("directory", type=Path, help="Directory containing PDF files")
+    parser.add_argument(
+        "repo_id",
+        type=str,
+        help="Hugging Face dataset repository ID (e.g., 'username/dataset-name')",
+    )
+    parser.add_argument(
+        "--private", action="store_true", help="Create a private dataset repository"
+    )
+    parser.add_argument(
+        "--hf-token",
+        type=str,
+        default=None,
+        help="Hugging Face API token (can also use HF_TOKEN environment variable)",
+    )
+
+    args = parser.parse_args()
+
+    # Handle authentication
+    hf_token = args.hf_token or os.environ.get("HF_TOKEN")
+    if hf_token:
+        login(token=hf_token)
+    else:
+        logger.info("No HF token provided. Will attempt to use cached credentials.")
+
+    try:
+        # Validate directory
+        pdf_count = validate_directory(args.directory)
+        logger.info(f"Found {pdf_count} PDF files to process")
+
+        # Load dataset using built-in PDF support
+        logger.info("Loading PDFs as dataset (this may take a while for large PDFs)...")
+        dataset = load_dataset("pdffolder", data_dir=str(args.directory))
+
+        # Log dataset info
+        logger.info("\nDataset created successfully!")
+        logger.info(f"Structure: {dataset}")
+
+        if "train" in dataset:
+            train_size = len(dataset["train"])
+            logger.info(f"Training examples: {train_size}")
+
+            # Show sample if available
+            if train_size > 0:
+                sample = dataset["train"][0]
+                logger.info(f"\nSample structure: {list(sample.keys())}")
+                if "label" in sample:
+                    logger.info("Labels found - PDFs are organized by category")
+
+        # Push to Hub
+        logger.info(f"\nPushing to Hugging Face Hub: {args.repo_id}")
+        dataset.push_to_hub(args.repo_id, private=args.private)
+
+        logger.info("✅ Dataset uploaded successfully!")
+        logger.info(f"🔗 Available at: https://huggingface.co/datasets/{args.repo_id}")
+
+        # Provide next steps
+        logger.info("\nTo use your dataset:")
+        logger.info(f'    dataset = load_dataset("{args.repo_id}")')
+
+    except Exception as e:
+        logger.error(f"Failed to create dataset: {e}")
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    if len(sys.argv) == 1:
+        # Show help if no arguments provided
+        print(__doc__)
+        sys.exit(0)
+
+    main()