system (HF staff) committed
Commit 0088ca3 · 1 Parent(s): 526e6ec

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +162 -0
README.md ADDED

---
---

# Dataset Card for "conll2000"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://www.clips.uantwerpen.be/conll2000/chunking/](https://www.clips.uantwerpen.be/conll2000/chunking/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.32 MB
- **Size of the generated dataset:** 6.25 MB
- **Total amount of disk used:** 9.57 MB

### [Dataset Summary](#dataset-summary)

Text chunking consists of dividing a text into syntactically correlated parts of words. For example, the sentence
"He reckons the current account deficit will narrow to only # 1.8 billion in September ." can be divided as follows:

[NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ]
[PP in ] [NP September ] .

Text chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test
data for this task are available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ)
as the widely used data for noun phrase chunking: sections 15-18 as training data (211,727 tokens) and section 20 as
test data (47,377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by
Sabine Buchholz from Tilburg University, The Netherlands.

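For a quick first look, the sketch below loads the dataset with the 🤗 `datasets` library and prints the tokens of the first training sentence. This is a minimal sketch, assuming `datasets` is installed (`pip install datasets`) and the Hub dataset id `conll2000` that this card describes.

```python
# Minimal sketch (assumes `pip install datasets`); the dataset id "conll2000"
# matches this card.
from datasets import load_dataset

dataset = load_dataset("conll2000")

# Tokens of the first training sentence, e.g. "Confidence", "in", "the", ...
print(dataset["train"][0]["tokens"])
```
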
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### conll2000

- **Size of downloaded dataset files:** 3.32 MB
- **Size of the generated dataset:** 6.25 MB
- **Total amount of disk used:** 9.57 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "chunk_tags": [11, 13, 11, 12, 21, 22, 22, 22, 22, 11, 12, 12, 17, 11, 12, 13, 11, 0, 1, 13, 11, 11, 0, 21, 22, 22, 11, 12, 12, 13, 11, 12, 12, 11, 12, 12, 0],
    "id": "0",
    "pos_tags": [19, 14, 11, 19, 39, 27, 37, 32, 34, 11, 15, 19, 14, 19, 22, 14, 20, 5, 15, 14, 19, 19, 5, 34, 32, 34, 11, 15, 19, 14, 20, 9, 20, 24, 15, 22, 6],
    "tokens": "[\"Confidence\", \"in\", \"the\", \"pound\", \"is\", \"widely\", \"expected\", \"to\", \"take\", \"another\", \"sharp\", \"dive\", \"if\", \"trade\", \"figur..."
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### conll2000

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels, with possible values including `''` (0), `#` (1), `$` (2), `(` (3), `)` (4).
- `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).

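Because `pos_tags` and `chunk_tags` are stored as integer class ids, a common pattern is to recover the string labels from the feature metadata. The following is a hedged sketch, assuming these columns use the library's standard `Sequence(ClassLabel)` layout:

```python
# Sketch: map integer tag ids back to label strings via the feature metadata.
# Assumes pos_tags/chunk_tags are Sequence(ClassLabel) columns.
from datasets import load_dataset

train = load_dataset("conll2000", split="train")
example = train[0]

pos_labels = train.features["pos_tags"].feature      # inner ClassLabel
chunk_labels = train.features["chunk_tags"].feature  # inner ClassLabel

for token, pos_id, chunk_id in zip(example["tokens"],
                                   example["pos_tags"],
                                   example["chunk_tags"]):
    print(f"{token}\t{pos_labels.int2str(pos_id)}\t{chunk_labels.int2str(chunk_id)}")
```
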
### [Data Splits Sample Size](#data-splits-sample-size)

|   name    | train | test |
|-----------|------:|-----:|
| conll2000 |  8937 | 2013 |

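The split sizes in the table can be reproduced directly; a small sketch, under the same `datasets` assumptions as above:

```python
# Sketch: confirm the row counts shown in the table above.
from datasets import load_dataset

dataset = load_dataset("conll2000")
for name, split in dataset.items():
    print(name, split.num_rows)  # expected: train 8937, test 2013
```
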
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{tksbuchholz2000conll,
    author    = "Tjong Kim Sang, Erik F. and Sabine Buchholz",
    title     = "Introduction to the CoNLL-2000 Shared Task: Chunking",
    editor    = "Claire Cardie and Walter Daelemans and Claire Nedellec and Tjong Kim Sang, Erik",
    booktitle = "Proceedings of CoNLL-2000 and LLL-2000",
    address   = "Lisbon, Portugal",
    pages     = "127--132",
    year      = "2000"
}
```

### [Contributions](#contributions)

Thanks to [@vblagoje](https://github.com/vblagoje) and [@jplu](https://github.com/jplu) for adding this dataset.