---
license: other
license_name: idl-train
 
### Dataset Summary

Industry Documents Library (IDL) is a document dataset filtered from the [UCSF documents library](https://www.industrydocuments.ucsf.edu/), with 19 million pages kept as valid samples.
Each document exists as a collection of a pdf, a tiff image rendering the same contents, a json file containing extensive Textract OCR annotations from the [idl_data](https://github.com/furkanbiten/idl_data) project, and a .ocr file with the original, older OCR annotation.

# TODO add image

 
### Usage

For faster downloads, you can use the `huggingface_hub` library directly. Make sure `hf_transfer` is installed (`pip install hf_transfer`) before downloading, and check that you have enough local disk space.

```python
import os

# Enable the accelerated hf_transfer backend before importing huggingface_hub
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi, logging

# logging.set_verbosity_debug()
hf = HfApi()
hf.snapshot_download("pixparse/IDL-wds", repo_type="dataset", local_dir_use_symlinks=False)
```
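
Once downloaded, the shards can also be read directly with the `webdataset` library. A minimal sketch, assuming the shards sit in the current directory; the brace-expanded shard range below is illustrative, not the real shard count:

```python
import json
import webdataset as wds

# hypothetical local shard pattern; adjust the path and range to the downloaded files
urls = "idl-train-{000000..000099}.tar"

dataset = wds.WebDataset(urls).to_tuple("json", "pdf")

for raw_annotation, pdf_bytes in dataset:
    annotation = json.loads(raw_annotation)  # Textract OCR metadata, described below
    print(annotation["pages"][0]["text"][:3])  # first lines of the first page
    break
```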

# TODO The following example uses fitz which is AGPL. We should also recommend the same with pypdf.

```python
from chug import create_wds_loader, create_doc_anno_pipe

# TODO import image transforms and text transforms from pixparse.data

decoder = create_doc_anno_pipe(
    image_preprocess=image_preprocess,  # callable of transforms applied to image tensors
    anno_preprocess=anno_preprocess,  # callable of transforms of text into tensors of labels and targets
    image_key="pdf",
    image_fmt="RGB",
)

loader = create_wds_loader(
    "/my_data/idl-train-*.tar",
    decoder,
    is_train=True,
    resampled=False,
    start_interval=0,
    num_samples=2159432,
    workers=8,
    batch_size=32,  # adjust to your architecture capacity
    seed=seed,  # set a seed
    world_size=world_size,  # get world_size from your training environment
)
```

Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard, with the same basename and `.json` or `.pdf` extension, as well as the count of files per shard.
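
A minimal sketch for inspecting that file; the exact schema is not documented here, so the code only loads it and prints the top-level structure:

```python
import json

# load the per-shard metadata file described above
with open("_pdfa-english-train-info-minimal.json") as f:
    info = json.load(f)

# print the top-level structure without assuming its schema
print(type(info).__name__)
print(list(info)[:3] if isinstance(info, dict) else info[:1])
```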

#### Words and lines document metadata

Initially, we obtained the raw data from the IDL API and combined it with the `idl_data` annotation. This information is then reshaped into lines organized in reading order, under the key `lines`. We keep the non-reshaped word and bounding box information under the `word` key, should users want to apply their own heuristic.

We obtain an approximate reading order by looking at the frequency peaks of the leftmost x-coordinate of words. A frequency peak means that a large number of lines start from the same point. We then keep track of the x-coordinate of each identified column. If no peaks are found, the document is assumed to read as plain, single-column text.
The code used to detect columns is shown below.
```python
import numpy as np
import scipy.ndimage
import scipy.signal


def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=1):
    """
    Identifies the x-coordinates that best separate columns by analyzing the derivative of a histogram
    of the 'left' values (xmin) of bounding boxes.

    Args:
        page (dict): Page data with 'bbox' containing bounding boxes of words.
        min_prominence (float): The required prominence of peaks in the histogram.
        num_bins (int): Number of bins to use for the histogram.
        kernel_width (int): The width of the Gaussian kernel used for smoothing the histogram.

    Returns:
        separators (list): The x-coordinates that separate the columns, if any.
    """
    try:
        left_values = [b[0] for b in page['bbox']]
        hist, bin_edges = np.histogram(left_values, bins=num_bins)
        hist = scipy.ndimage.gaussian_filter1d(hist, kernel_width)
        min_val = min(hist)
        hist = np.insert(hist, [0, len(hist)], min_val)
        bin_width = bin_edges[1] - bin_edges[0]
        bin_edges = np.insert(bin_edges, [0, len(bin_edges)], [bin_edges[0] - bin_width, bin_edges[-1] + bin_width])

        peaks, _ = scipy.signal.find_peaks(hist, prominence=min_prominence * np.max(hist))
        derivatives = np.diff(hist)

        separators = []
        if len(peaks) > 1:
            # Find the index of the maximum derivative value between two peaks,
            # i.e. the steepest rise after a trough, which marks a column start
            for i in range(len(peaks) - 1):
                peak_left = peaks[i]
                peak_right = peaks[i + 1]
                max_deriv_index = np.argmax(derivatives[peak_left:peak_right]) + peak_left
                separator_x = bin_edges[max_deriv_index + 1]
                separators.append(separator_x)
    except Exception:
        separators = []
    return separators
```
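
For instance, on a synthetic page with two columns of words starting around x = 0.1 and x = 0.55 (a toy illustration, not dataset data), the function returns one separator between the columns:

```python
# two clusters of leftmost x-coordinates, mimicking a two-column layout
page = {"bbox": [[0.10 + 0.002 * i, 0.05 * i, 0.30, 0.01] for i in range(10)]
               + [[0.55 + 0.002 * i, 0.05 * i, 0.30, 0.01] for i in range(10)]}
print(get_columnar_separators(page))  # one x-coordinate in the gap between the two columns
```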

For each pdf document, we store statistics on the number of pages per shard and the number of valid samples per shard. A valid sample is one that can be encoded, then decoded, which we verified for each sample.
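
As an illustration, such a check can be written with `fitz` (PyMuPDF, AGPL-licensed as noted above); this is a sketch of the idea, not the exact validation code used:

```python
import fitz  # PyMuPDF

def is_valid_pdf(pdf_bytes):
    """Return True if the pdf bytes can be opened and contain at least one page."""
    try:
        with fitz.open(stream=pdf_bytes, filetype="pdf") as doc:
            return doc.page_count > 0
    except Exception:
        return False
```
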
This instance of IDL is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format. It can be used with the webdataset library or current releases of Hugging Face `datasets`.
Here is an example using the "streaming" parameter. We do recommend downloading the dataset to save bandwidth.

```python
from datasets import load_dataset

dataset = load_dataset('pixparse/IDL-wds', streaming=True)
print(next(iter(dataset['train'])).keys())
>> dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```
### Data, metadata and statistics

Add example image here

The metadata for each document has been formatted in this way: each `pdf` is paired with a `json` file with the following structure. Entries have been shortened for readability.

```json
{
    "pages": [
        {
            "text": [
                "COVIDIEN",
                "Mallinckrodt",
                "Addendum",
                "This Addendum to the Consulting Agreement (the \"Agreement\") of July 28, 2010 (\"Effective Date\") by",
                "and between David Brushwod, R.Ph., J.D., with an address at P.O. Box 100496, Gainesville, FL 32610-"
            ],
            "bbox": [
                [0.185964, 0.058857, 0.092199, 0.011457],
                [0.186465, 0.079529, 0.087209, 0.009247],
                [0.459241, 0.117854, 0.080015, 0.011332],
                [0.117109, 0.13346, 0.751004, 0.014365],
                [0.117527, 0.150306, 0.750509, 0.012954]
            ],
            "poly": [
                [
                    {"X": 0.185964, "Y": 0.058857}, {"X": 0.278163, "Y": 0.058857}, {"X": 0.278163, "Y": 0.070315}, {"X": 0.185964, "Y": 0.070315}
                ],
                [
                    {"X": 0.186465, "Y": 0.079529}, {"X": 0.273673, "Y": 0.079529}, {"X": 0.273673, "Y": 0.088777}, {"X": 0.186465, "Y": 0.088777}
                ],
                [
                    {"X": 0.459241, "Y": 0.117854}, {"X": 0.539256, "Y": 0.117854}, {"X": 0.539256, "Y": 0.129186}, {"X": 0.459241, "Y": 0.129186}
                ],
                [
                    {"X": 0.117109, "Y": 0.13346}, {"X": 0.868113, "Y": 0.13346}, {"X": 0.868113, "Y": 0.147825}, {"X": 0.117109, "Y": 0.147825}
                ],
                [
                    {"X": 0.117527, "Y": 0.150306}, {"X": 0.868036, "Y": 0.150306}, {"X": 0.868036, "Y": 0.163261}, {"X": 0.117527, "Y": 0.163261}
                ]
            ],
            "score": [
                0.9939, 0.5704, 0.9961, 0.9898, 0.9935
            ]
        }
    ]
}
```

The top-level key, `pages`, is a list of every page in the document. The example above shows only one page. `text` is a list of lines in the document, each with its associated bounding box at the same index of the next entry. `bbox` contains the bounding box coordinates in `left, top, width, height` format, with coordinates relative to the page size. `poly` is the corresponding polygon.

`score` is the confidence score for each line, obtained with Textract.
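
For instance, converting a relative `bbox` entry to absolute pixel coordinates only requires the rendered page size (the 1700x2200 page size below is hypothetical):

```python
def bbox_to_pixels(bbox, page_width, page_height):
    """Convert a relative [left, top, width, height] box to absolute pixel coordinates."""
    left, top, width, height = bbox
    return (left * page_width, top * page_height, width * page_width, height * page_height)

# first box of the example above, on a hypothetical 1700x2200 px page
print(bbox_to_pixels([0.185964, 0.058857, 0.092199, 0.011457], 1700, 2200))
```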

### Data Splits

#### Train
* `idl-train-*.tar`
 

Each user of this website is responsible for ensuring compliance with applicable copyright laws. Persons obtaining, or later using, a copy of copyrighted material in excess of “fair use” may become liable for copyright infringement. By accessing this website, the user agrees to hold harmless the University of California, its affiliates and their directors, officers, employees and agents from all claims and expenses, including attorneys’ fees, arising out of the use of this website by the user.

For more in-depth information on copyright and fair use, visit the [Stanford University Libraries’ Copyright and Fair Use website](https://fairuse.stanford.edu/).

If you hold copyright to a document or documents in our collections and have concerns about our inclusion of this material, please see the IDL Take-Down Policy or contact us with any questions.

According to the Industry Documents Library API, the dataset holds the following permission counts per file, showing that all documents are now public (none are currently "confidential" or "privileged", only formerly), for a total of 3,301,240 files:

```json
{
  "public/no restrictions": 3005133,
  "public/formerly confidential": 264978,
  "public/formerly privileged": 30063,
  "public/formerly privileged/formerly confidential": 669,
  "public/formerly confidential/formerly privileged": 397
}
```

### Citation Information