storytracer committed · verified
Commit 79f42fb · 1 Parent(s): 49296b3

Update README.md

Files changed (1): README.md (+7 −7)

README.md CHANGED
@@ -20,6 +20,13 @@ This dataset contains more than 650,000 English public domain books which were d
 The dataset contains 653,983 OCR texts (~200 million pages) from various collections of the Internet Archive (IA). Books in the IA can be filtered out from other types of documents by checking whether an IA item is linked to an Open Library (OL) record. Only texts with an OL record have been included in this dataset in order to restrict the dataset as much as possible to books.
 
+## Curation method
+
+In order to reliably find public domain books among the IA collections, the dataset was curated by combining three approaches:
+1. Manually identifying IA collections which explicitly state that they exclusively contain public domain materials, e.g. the [Cornell University Library collection](https://archive.org/details/cornell/about?tab=about), and downloading them in bulk.
+2. Using the [possible-copyright-status](https://archive.org/developers/metadata-schema/index.html#possible-copyright-status) query parameter to search for items with the status `NOT_IN_COPYRIGHT` across all IA collections using the [IA Search API](https://archive.org/help/aboutsearch.htm).
+3. Restricting all searches with the query parameter `openlibrary_edition:*` to ensure that all returned items possess an Open Library record, i.e. to ensure that they are books and not some other form of text.
+
 ## Size
 
 The size of the full uncompressed dataset is ~400GB and compressed as Parquet files ~220GB. Each Parquet file contains a maximum of 2000 books.
@@ -38,13 +45,6 @@ In the future, more datasets will be compiled for other languages using the same
 The OCR for most of the books was produced by the IA. You can learn more about the details of the IA OCR process here: https://archive.org/developers/ocr.html. The OCR quality varies from book to book. Future versions of this dataset might include OCR quality scores or
 
-## Curation method
-
-In order to reliably find public domain books among the IA collections, the dataset was curated by combining three approaches:
-1. Manually identifying IA collections which explicitly state that they exclusively contain public domain materials, e.g. the [Cornell University Library collection](https://archive.org/details/cornell/about?tab=about), and downloading them in bulk.
-2. Using the [possible-copyright-status](https://archive.org/developers/metadata-schema/index.html#possible-copyright-status) query parameter to search for items with the status `NOT_IN_COPYRIGHT` across all IA collections using the [IA Search API](https://archive.org/help/aboutsearch.htm).
-3. Restricting all searches with the query parameter `openlibrary_edition:*` to ensure that all returned items possess an Open Library record, i.e. to ensure that they are books and not some other form of text.
-
 ## Data fields
 
 | Field | Data Type | Description |
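As a side note on the curation method being moved in this commit: approaches 2 and 3 combine into a single IA search query. A minimal sketch of building such a query is shown below, assuming the public IA advanced search endpoint (`https://archive.org/advancedsearch.php`); the helper name and default parameters are illustrative, not part of the dataset's actual pipeline.

```python
from urllib.parse import urlencode

# Hypothetical sketch: build an IA advanced search URL combining the
# possible-copyright-status filter (approach 2) with the requirement
# that items have an Open Library edition record (approach 3).
def build_curation_query(rows: int = 100, page: int = 1) -> str:
    query = "possible-copyright-status:NOT_IN_COPYRIGHT AND openlibrary_edition:*"
    params = {
        "q": query,
        "fl[]": "identifier",  # only return the item identifier field
        "rows": rows,
        "page": page,
        "output": "json",
    }
    return "https://archive.org/advancedsearch.php?" + urlencode(params)

print(build_curation_query())
```

Paging through the JSON results of such a query would yield the IA identifiers of candidate public domain books, which could then be downloaded in bulk as in approach 1.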