Dataset: ronec
Tasks: Token Classification
Modalities: Text
Formats: parquet
Sub-tasks: named-entity-recognition
Languages: Romanian
Size: 10K - 100K
ArXiv: 1909.01247
License: MIT
Update files from the datasets library (from 1.15.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.15.0
- README.md +72 -21
- dataset_infos.json +1 -1
- dummy/ronec/2.0.0/dummy_data.zip +3 -0
- ronec.py +41 -67
README.md CHANGED

````diff
@@ -19,6 +19,7 @@ task_categories:
 task_ids:
 - named-entity-recognition
 paperswithcode_id: ronec
+pretty_name: RONEC
 ---

 # Dataset Card for RONEC
@@ -52,57 +53,87 @@ paperswithcode_id: ronec
 - **Homepage:** https://github.com/dumitrescustefan/ronec
 - **Repository:** https://github.com/dumitrescustefan/ronec
 - **Paper:** https://arxiv.org/abs/1909.01247
-- **Leaderboard:**
+- **Leaderboard:** https://lirobenchmark.github.io/
 - **Point of Contact:** [email protected], [email protected]

 ### Dataset Summary

+RONEC, at version 2.0, holds 12330 sentences with over 0.5M tokens, annotated with 15 classes, for a total of 80.283 distinctly annotated entities.
+
+The corpus has the following classes and distribution in the train/valid/test splits:
+
+| Classes     | Total     | Train |       | Valid |       | Test  |       |
+|-------------|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
+|             | #         | #     | %     | #     | %     | #     | %     |
+| PERSON      | **26130** | 19167 | 73.35 | 2733  | 10.46 | 4230  | 16.19 |
+| GPE         | **11103** | 8193  | 73.79 | 1182  | 10.65 | 1728  | 15.56 |
+| LOC         | **2467**  | 1824  | 73.94 | 270   | 10.94 | 373   | 15.12 |
+| ORG         | **7880**  | 5688  | 72.18 | 880   | 11.17 | 1312  | 16.65 |
+| LANGUAGE    | **467**   | 342   | 73.23 | 52    | 11.13 | 73    | 15.63 |
+| NAT_REL_POL | **4970**  | 3673  | 73.90 | 516   | 10.38 | 781   | 15.71 |
+| DATETIME    | **9614**  | 6960  | 72.39 | 1029  | 10.7  | 1625  | 16.9  |
+| PERIOD      | **1188**  | 862   | 72.56 | 129   | 10.86 | 197   | 16.58 |
+| QUANTITY    | **1588**  | 1161  | 73.11 | 181   | 11.4  | 246   | 15.49 |
+| MONEY       | **1424**  | 1041  | 73.10 | 159   | 11.17 | 224   | 15.73 |
+| NUMERIC     | **7735**  | 5734  | 74.13 | 814   | 10.52 | 1187  | 15.35 |
+| ORDINAL     | **1893**  | 1377  | 72.74 | 212   | 11.2  | 304   | 16.06 |
+| FACILITY    | **1126**  | 840   | 74.6  | 113   | 10.04 | 173   | 15.36 |
+| WORK_OF_ART | **1596**  | 1157  | 72.49 | 176   | 11.03 | 263   | 16.48 |
+| EVENT       | **1102**  | 826   | 74.95 | 107   | 9.71  | 169   | 15.34 |
+

 ### Supported Tasks and Leaderboards

+The corpus is meant to train Named Entity Recognition models for the Romanian language.
+
+Please see the leaderboard here: [https://lirobenchmark.github.io/](https://lirobenchmark.github.io/)

 ### Languages

+RONEC is in Romanian (`ro`).

 ## Dataset Structure

 ### Data Instances

+The dataset is a list of instances. For example, an instance looks like:

-```
-{
+```json
+{
+"id": 10454,
+"tokens": ["Pentru", "a", "vizita", "locația", "care", "va", "fi", "pusă", "la", "dispoziția", "reprezentanților", "consiliilor", "județene", ",", "o", "delegație", "a", "U.N.C.J.R.", ",", "din", "care", "a", "făcut", "parte", "și", "dl", "Constantin", "Ostaficiuc", ",", "președintele", "C.J.T.", ",", "a", "fost", "prezentă", "la", "Bruxelles", ",", "între", "1-3", "martie", "."],
+"ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "B-ORG", "O", "O", "O", "O", "O", "B-GPE", "O", "B-PERIOD", "I-PERIOD", "I-PERIOD", "O"],
+"ner_ids": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 0, 5, 0, 19, 20, 20, 0],
+"space_after": [true, true, true, true, true, true, true, true, true, true, true, true, false, true, true, true, true, false, true, true, true, true, true, true, true, true, true, false, true, true, false, true, true, true, true, true, false, true, true, true, false, false]
 }
 ```

 ### Data Fields

+The fields of each example are:
+
+- ``tokens`` are the words of the sentence.
+- ``ner_tags`` are the string tags assigned to each token, following the BIO2 format. For example, the span ``"între", "1-3", "martie"`` has three tokens, but is a single class ``PERIOD``, marked as ``"B-PERIOD", "I-PERIOD", "I-PERIOD"``.
+- ``ner_ids`` are the integer encoding of each tag, to be compatible with the standard format and quickly usable for model training. Note that each ``B``-starting tag is odd, and each ``I``-starting tag is even.
+- ``space_after`` helps if there is a need to detokenize the dataset. A ``true`` value means that there is a space after the token in that position.

 ### Data Splits

-The dataset
+The dataset is split into train: 9000 sentences, dev: 1330 sentences and test: 2000 sentences.

 ## Dataset Creation

 ### Curation Rationale

-*The corpus, at its current version 1.0 is composed of 5127 sentences, annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL. It is based on copyright-free text extracted from Southeast European Times (SETimes) (Tyers and Alperen, 2010). The news portal has published "news and views from Southeast Europe" in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories*
+[Needs More Information]

 ### Source Data

+*The corpus data source represents sentences that are free of copyright, taken from older datasets like the freely available SETimes and more recent data sources like the Romanian Wikipedia or the Common Crawl.*
+
 #### Initial Data Collection and Normalization

+[Needs More Information]

 #### Who are the source language producers?

@@ -110,17 +141,35 @@ From the original paper:

 ### Annotations

+The corpus was annotated with the following classes:
+
+1. PERSON - proper nouns, including common nouns or pronouns if they refer to a person. (e.g. 'sister')
+2. GPE - geo-political entity, like a city or a country; has to have a governance form
+3. LOC - location, like a sea, continent, region, road, address, etc.
+4. ORG - organization
+5. LANGUAGE - language (e.g. Romanian, French, etc.)
+6. NAT_REL_POL - national, religious or political organizations
+7. DATETIME - a time and date in any format, including references to time (e.g. 'yesterday')
+8. PERIOD - a period that is precisely bounded by two date times
+9. QUANTITY - a quantity that is not numerical; it has a unit of measure
+10. MONEY - a monetary value, numeric or otherwise
+11. NUMERIC - a simple numeric value, represented as digits or words
+12. ORDINAL - an ordinal value like 'first', 'third', etc.
+13. FACILITY - a named place that is easily recognizable
+14. WORK_OF_ART - a work of art like a named TV show, painting, etc.
+15. EVENT - a named recognizable or periodic major event
+
 #### Annotation process

+The corpus was annotated by 3 language experts, and was cross-checked for annotation consistency. The annotation took several months to complete, but the result is a high-quality dataset.

 #### Who are the annotators?

-Stefan Dumitrescu
+Stefan Dumitrescu (lead).

 ### Personal and Sensitive Information

+All the source data is already freely downloadable and usable online, so there are no privacy concerns.

 ## Considerations for Using the Data

@@ -148,13 +197,15 @@ MIT License

 ### Citation Information

+```bibtex
 @article{dumitrescu2019introducing,
   title={Introducing RONEC--the Romanian Named Entity Corpus},
   author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
   journal={arXiv preprint arXiv:1909.01247},
   year={2019}
 }
+```

 ### Contributions

-Thanks to [@iliemihai](https://github.com/iliemihai) for adding
+Thanks to [@iliemihai](https://github.com/iliemihai) for adding v1.0 of the dataset.
````
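The updated card above documents the `tokens`, `ner_tags`, `ner_ids`, and `space_after` fields. As an illustration (not part of the commit), here is a minimal sketch of loading the corpus with the `datasets` library and mapping `ner_ids` back to tag names, assuming the Hub id `ronec`:

```python
# Minimal sketch, assuming the dataset is available on the Hub under the id "ronec".
from datasets import load_dataset

dataset = load_dataset("ronec")  # splits: train / validation / test
example = dataset["train"][0]

# The Sequence(ClassLabel) feature stores the tag vocabulary, so the integer
# ner_ids can be mapped back to the BIO2 string tags described in "Data Fields".
tag_names = dataset["train"].features["ner_tags"].feature.names

for token, tag_id in zip(example["tokens"], example["ner_ids"]):
    # 0 is "O"; B-* tags have odd ids and I-* tags have even ids, as the card notes.
    print(token, tag_names[tag_id])
```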
dataset_infos.json CHANGED

```diff
@@ -1 +1 @@
-{"ronec": {"description": "
+{"ronec": {"description": "RONEC - the Romanian Named Entity Corpus, at version 2.0, holds 12330 sentences with over 0.5M tokens, annotated with 15 classes, to a total of 80.283 distinctly annotated entities. It is used for named entity recognition and represents the largest Romanian NER corpus to date.\n", "citation": "@article{dumitrescu2019introducing,\n  title={Introducing RONEC--the Romanian Named Entity Corpus},\n  author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},\n  journal={arXiv preprint arXiv:1909.01247},\n  year={2019}\n}\n", "homepage": "https://github.com/dumitrescustefan/ronec", "license": "MIT License", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_ids": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "space_after": {"feature": {"dtype": "bool", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"num_classes": 31, "names": ["O", "B-PERSON", "I-PERSON", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-NAT_REL_POL", "I-NAT_REL_POL", "B-EVENT", "I-EVENT", "B-LANGUAGE", "I-LANGUAGE", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-DATETIME", "I-DATETIME", "B-PERIOD", "I-PERIOD", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-NUMERIC", "I-NUMERIC", "B-ORDINAL", "I-ORDINAL", "B-FACILITY", "I-FACILITY"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ronec", "config_name": "ronec", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8701577, "num_examples": 9000, "dataset_name": "ronec"}, "validation": {"name": "validation", "num_bytes": 1266490, "num_examples": 1330, "dataset_name": "ronec"}, "test": {"name": "test", "num_bytes": 1902224, "num_examples": 2000, "dataset_name": "ronec"}}, "download_checksums": {"https://raw.githubusercontent.com/dumitrescustefan/ronec/master/data/train.json": {"num_bytes": 10753146, "checksum": "349d8632f3f416cdefa7709c759d6fba458d0cdc0a59c546af5b5d4167096e48"}, "https://raw.githubusercontent.com/dumitrescustefan/ronec/master/data/valid.json": {"num_bytes": 1567702, "checksum": "0c3e08c8fb7058c96d91c30cc3fe12b684b682c1c481e84a9d05aac14b59fe04"}, "https://raw.githubusercontent.com/dumitrescustefan/ronec/master/data/test.json": {"num_bytes": 2355095, "checksum": "7532619fd6680adb4c3926f5e185f3c1705caeb2d8466e9fe2a4428e81660704"}}, "download_size": 14675943, "post_processing_size": null, "dataset_size": 11870291, "size_in_bytes": 26546234}}
```
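The regenerated `dataset_infos.json` above records a 31-name `ClassLabel` (`O` plus B-/I- pairs for the 15 classes) and the 9000/1330/2000 split sizes. A small, hypothetical sanity check (not part of the commit) of that label ordering with `datasets.ClassLabel`:

```python
# Hypothetical check of the label ordering recorded in dataset_infos.json.
from datasets import ClassLabel

names = ["O", "B-PERSON", "I-PERSON", "B-ORG", "I-ORG", "B-GPE", "I-GPE",
         "B-LOC", "I-LOC", "B-NAT_REL_POL", "I-NAT_REL_POL", "B-EVENT", "I-EVENT",
         "B-LANGUAGE", "I-LANGUAGE", "B-WORK_OF_ART", "I-WORK_OF_ART",
         "B-DATETIME", "I-DATETIME", "B-PERIOD", "I-PERIOD", "B-MONEY", "I-MONEY",
         "B-QUANTITY", "I-QUANTITY", "B-NUMERIC", "I-NUMERIC",
         "B-ORDINAL", "I-ORDINAL", "B-FACILITY", "I-FACILITY"]

labels = ClassLabel(names=names)
assert labels.num_classes == 31
# B-* tags map to odd ids, I-* tags to even ids, as the card states.
assert labels.str2int("B-PERSON") == 1 and labels.str2int("I-PERSON") == 2
assert labels.int2str(19) == "B-PERIOD"  # matches ner_ids in the README example
```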
dummy/ronec/2.0.0/dummy_data.zip ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dccefc79b6e137f97cd4c671c0fc66e21100d4ff6a654cadb59ab5b9ba09a670
+size 4916
```
ronec.py CHANGED

```diff
@@ -12,15 +12,14 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Introduction in RONEC: Named Entity Corpus for ROmanian language"""

+import json

 import datasets


 logger = datasets.logging.get_logger(__name__)

-
 # Find for instance the citation on arxiv or on the dataset repo/website
 _CITATION = """\
 @article{dumitrescu2019introducing,
@@ -33,8 +32,7 @@ _CITATION = """\

 # You can copy an official description
 _DESCRIPTION = """\
-belonging to 16 distinct classes. It represents the first initiative in the Romanian language space specifically targeted for named entity recognition
+RONEC - the Romanian Named Entity Corpus, at version 2.0, holds 12330 sentences with over 0.5M tokens, annotated with 15 classes, to a total of 80.283 distinctly annotated entities. It is used for named entity recognition and represents the largest Romanian NER corpus to date.
 """

 _HOMEPAGE = "https://github.com/dumitrescustefan/ronec"
@@ -43,10 +41,10 @@ _LICENSE = "MIT License"

 # The HuggingFace dataset library don't host the datasets but only point to the original files
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-_URL = "https://raw.githubusercontent.com/dumitrescustefan/ronec/master/
-_TRAINING_FILE = "train.
+_URL = "https://raw.githubusercontent.com/dumitrescustefan/ronec/master/data/"
+_TRAINING_FILE = "train.json"
+_DEV_FILE = "valid.json"
+_TEST_FILE = "test.json"


 class RONECConfig(datasets.BuilderConfig):
@@ -59,53 +57,52 @@ class RONECConfig(datasets.BuilderConfig):
 class RONEC(datasets.GeneratorBasedBuilder):
     """RONEC dataset"""

+    VERSION = datasets.Version("2.0.0")
     BUILDER_CONFIGS = [
         RONECConfig(name="ronec", version=VERSION, description="RONEC dataset"),
     ]

     def _info(self):
         features = datasets.Features(
             {
+                "id": datasets.Value("int32"),
                 "tokens": datasets.Sequence(datasets.Value("string")),
+                "ner_ids": datasets.Sequence(datasets.Value("int32")),
+                "space_after": datasets.Sequence(datasets.Value("bool")),
                 "ner_tags": datasets.Sequence(
                     datasets.features.ClassLabel(
                         names=[
                             "O",
+                            "B-PERSON",
+                            "I-PERSON",
+                            "B-ORG",
+                            "I-ORG",
                             "B-GPE",
+                            "I-GPE",
                             "B-LOC",
+                            "I-LOC",
                             "B-NAT_REL_POL",
-                            "B-ORGANIZATION",
-                            "B-PERIOD",
-                            "B-PERSON",
-                            "B-PRODUCT",
-                            "B-QUANTITY",
-                            "B-WORK_OF_ART",
-                            "I-DATETIME",
+                            "I-NAT_REL_POL",
+                            "B-EVENT",
                             "I-EVENT",
-                            "I-GPE",
+                            "B-LANGUAGE",
                             "I-LANGUAGE",
-                            "I-ORGANIZATION",
+                            "B-WORK_OF_ART",
+                            "I-WORK_OF_ART",
+                            "B-DATETIME",
+                            "I-DATETIME",
+                            "B-PERIOD",
                             "I-PERIOD",
+                            "B-MONEY",
+                            "I-MONEY",
+                            "B-QUANTITY",
                             "I-QUANTITY",
+                            "B-NUMERIC",
+                            "I-NUMERIC",
+                            "B-ORDINAL",
+                            "I-ORDINAL",
+                            "B-FACILITY",
+                            "I-FACILITY",
                         ]
                     )
                 ),
@@ -143,14 +140,14 @@ class RONEC(datasets.GeneratorBasedBuilder):
                 gen_kwargs={"filepath": downloaded_files["train"]},
             ),
             datasets.SplitGenerator(
+                name=datasets.Split.VALIDATION,
                 # These kwargs will be passed to _generate_examples
+                gen_kwargs={"filepath": downloaded_files["dev"]},
             ),
             datasets.SplitGenerator(
+                name=datasets.Split.TEST,
                 # These kwargs will be passed to _generate_examples
+                gen_kwargs={"filepath": downloaded_files["test"]},
             ),
         ]

@@ -158,30 +155,7 @@
         """Yields examples."""

         logger.info("⏳ Generating examples from = %s", filepath)
-        with open(filepath, encoding="utf-8") as f:
-            for line in f:
-                if "#" in line or line == "\n":
-                    if tokens:
-                        yield guid, {
-                            "id": str(guid),
-                            "tokens": tokens,
-                            "ner_tags": ner_tags,
-                        }
-                        guid += 1
-                        tokens = []
-                        ner_tags = []
-                else:
-                    # ronec tokens are tab separated
-                    splits = line.split("\t")
-                    tokens.append(splits[1])
-                    ner_tags.append(splits[10].rstrip())
-            # last example
-            if tokens:
-                yield guid, {
-                    "id": str(guid),
-                    "tokens": tokens,
-                    "ner_tags": ner_tags,
-                }
+        with open(filepath, "r", encoding="utf-8") as f:
+            data = json.load(f)
+            for instance in data:
+                yield instance["id"], instance
```
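The rewritten `_generate_examples` simply yields each JSON instance, so the `space_after` field described in the card reaches the user unchanged. As an illustration of how it can be consumed downstream, here is a hypothetical detokenization helper (not part of `ronec.py`):

```python
# Illustrative helper, not part of ronec.py: rebuild surface text from "tokens" and
# "space_after" (a False value means no space after that token, e.g. before a comma).
def detokenize(tokens, space_after):
    pieces = []
    for token, has_space in zip(tokens, space_after):
        pieces.append(token)
        if has_space:
            pieces.append(" ")
    return "".join(pieces).rstrip()

print(detokenize(["consiliilor", "județene", ",", "o", "delegație"],
                 [True, False, True, True, False]))
# consiliilor județene, o delegație
```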