Dataset: tner/tweebank_ner
Modalities: Text · Languages: English · Libraries: Datasets
Commit e922580 (1 parent: cb0fecb)
parquet-converter committed: Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,86 +0,0 @@
- ---
- language:
- - en
- license:
- - other
- multilinguality:
- - monolingual
- size_categories:
- - 1k<10K
- task_categories:
- - token-classification
- task_ids:
- - named-entity-recognition
- pretty_name: TweeBank NER
- ---
-
- # Dataset Card for "tner/tweebank_ner"
-
- ## Dataset Description
-
- - **Repository:** [T-NER](https://github.com/asahi417/tner)
- - **Paper:** [https://arxiv.org/abs/2201.07281](https://arxiv.org/abs/2201.07281)
- - **Dataset:** TweeBank NER
- - **Domain:** Twitter
- - **Number of Entities:** 4
-
-
- ### Dataset Summary
- TweeBank NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- - Entity Types: `LOC`, `MISC`, `PER`, `ORG`
-
- ## Dataset Structure
-
- ### Data Instances
- An example of `train` looks as follows.
-
- ```
- {
-     'tokens': ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays', 'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion', 'URL1087', '#Holiday', '#Gifts'],
-     'tags': [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
- }
- ```
-
- ### Label ID
- The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweebank_ner/raw/main/dataset/label.json).
- ```python
- {
-     "B-LOC": 0,
-     "B-MISC": 1,
-     "B-ORG": 2,
-     "B-PER": 3,
-     "I-LOC": 4,
-     "I-MISC": 5,
-     "I-ORG": 6,
-     "I-PER": 7,
-     "O": 8
- }
- ```
-
- ### Data Splits
-
- | name         | train | validation | test |
- |--------------|------:|-----------:|-----:|
- | tweebank_ner |  1639 |        710 | 1201 |
-
- ### Citation Information
-
- ```
- @article{DBLP:journals/corr/abs-2201-07281,
-   author     = {Hang Jiang and
-                 Yining Hua and
-                 Doug Beeferman and
-                 Deb Roy},
-   title      = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
-                 {NLP} Models for Social Media Analysis},
-   journal    = {CoRR},
-   volume     = {abs/2201.07281},
-   year       = {2022},
-   url        = {https://arxiv.org/abs/2201.07281},
-   eprinttype = {arXiv},
-   eprint     = {2201.07281},
-   timestamp  = {Fri, 21 Jan 2022 13:57:15 +0100},
-   biburl     = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
-   bibsource  = {dblp computer science bibliography, https://dblp.org}
- }
- ```
 
dataset/label.json DELETED
@@ -1 +0,0 @@
- {"B-LOC": 0, "B-MISC": 1, "B-ORG": 2, "B-PER": 3, "I-LOC": 4, "I-MISC": 5, "I-ORG": 6, "I-PER": 7, "O": 8}
 
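The deleted `dataset/label.json` above defines the tag-to-id mapping used by the `tags` field. As a minimal sketch (the mapping is copied from the diff; the tag sequence is illustrative), inverting it recovers readable IOB2 labels from the integer tags:

```python
# label2id as defined in dataset/label.json (copied from the diff above).
label2id = {"B-LOC": 0, "B-MISC": 1, "B-ORG": 2, "B-PER": 3,
            "I-LOC": 4, "I-MISC": 5, "I-ORG": 6, "I-PER": 7, "O": 8}

# Invert the mapping to decode integer tags back into IOB2 label strings.
id2label = {v: k for k, v in label2id.items()}

# Illustrative tag sequence, same shape as the dataset's 'tags' field.
tags = [8, 8, 8, 2, 8]
labels = [id2label[t] for t in tags]
print(labels)  # ['O', 'O', 'O', 'B-ORG', 'O']
```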
 
dataset/test.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/train.json DELETED
The diff for this file is too large to render. See raw diff
 
dataset/valid.json DELETED
The diff for this file is too large to render. See raw diff
 
tweebank_ner.py DELETED
@@ -1,87 +0,0 @@
- """ NER dataset compiled by T-NER library https://github.com/asahi417/tner/tree/master/tner """
- import json
- from itertools import chain
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
- _DESCRIPTION = """[Tweebank NER](https://arxiv.org/abs/2201.07281)"""
- _NAME = "tweebank_ner"
- _VERSION = "1.0.1"
- _CITATION = """
- @article{DBLP:journals/corr/abs-2201-07281,
-   author     = {Hang Jiang and
-                 Yining Hua and
-                 Doug Beeferman and
-                 Deb Roy},
-   title      = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
-                 {NLP} Models for Social Media Analysis},
-   journal    = {CoRR},
-   volume     = {abs/2201.07281},
-   year       = {2022},
-   url        = {https://arxiv.org/abs/2201.07281},
-   eprinttype = {arXiv},
-   eprint     = {2201.07281},
-   timestamp  = {Fri, 21 Jan 2022 13:57:15 +0100},
-   biburl     = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
-   bibsource  = {dblp computer science bibliography, https://dblp.org}
- }
- """
-
- _HOME_PAGE = "https://github.com/asahi417/tner"
- _URL = f'https://huggingface.co/datasets/tner/{_NAME}/raw/main/dataset'
- _URLS = {
-     str(datasets.Split.TEST): [f'{_URL}/test.json'],
-     str(datasets.Split.TRAIN): [f'{_URL}/train.json'],
-     str(datasets.Split.VALIDATION): [f'{_URL}/valid.json'],
- }
-
-
- class TweebankNERConfig(datasets.BuilderConfig):
-     """BuilderConfig"""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(TweebankNERConfig, self).__init__(**kwargs)
-
-
- class TweebankNER(datasets.GeneratorBasedBuilder):
-     """Dataset."""
-
-     BUILDER_CONFIGS = [
-         TweebankNERConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
-     ]
-
-     def _split_generators(self, dl_manager):
-         downloaded_file = dl_manager.download_and_extract(_URLS)
-         return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
-                 for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]
-
-     def _generate_examples(self, filepaths):
-         _key = 0
-         for filepath in filepaths:
-             logger.info(f"generating examples from = {filepath}")
-             with open(filepath, encoding="utf-8") as f:
-                 _list = [i for i in f.read().split('\n') if len(i) > 0]
-                 for i in _list:
-                     data = json.loads(i)
-                     yield _key, data
-                     _key += 1
-
-     def _info(self):
-         names = ["B-LOC", "B-MISC", "B-ORG", "B-PER", "I-LOC", "I-MISC", "I-ORG", "I-PER", "O"]
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "tokens": datasets.Sequence(datasets.Value("string")),
-                     "tags": datasets.Sequence(datasets.features.ClassLabel(names=names)),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOME_PAGE,
-             citation=_CITATION,
-         )
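The deleted loader's `_generate_examples` reads each split as JSON Lines: one `{"tokens": ..., "tags": ...}` object per line. A self-contained sketch of the same parsing logic on an inline string (the two records below are made up for illustration):

```python
import json

# Two made-up records in the shape the loader expects, joined as JSON Lines.
raw = "\n".join([
    json.dumps({"tokens": ["Hello", "NYC"], "tags": [8, 0]}),
    json.dumps({"tokens": ["RT", "@USER1"], "tags": [8, 8]}),
])

# Mirror _generate_examples: split on newlines, drop empties, parse each line.
records = [json.loads(line) for line in raw.split("\n") if len(line) > 0]
examples = list(enumerate(records))  # (_key, data) pairs, as the loader yields
print(examples[0])
```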
 
tweebank_ner/tweebank_ner-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6cd444db8386bb40abb6dc24ec058db3ad256e650ceec67be93351c8b05213d1
+ size 94671
tweebank_ner/tweebank_ner-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16f88f969646f099ded1b0959029a07421c02126a4cbca1f31a7dbbfddd648eb
+ size 121650
tweebank_ner/tweebank_ner-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01be2731a9a13d1cff60fe5b02d1d366ba8bace45ebbff648ef7fa25ba1f9b89
+ size 58518
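Each added `.parquet` entry above is stored as a Git LFS pointer file: three text lines recording the spec version, the SHA-256 object id, and the byte size of the actual file. A minimal sketch parsing one such pointer (the content is copied from the test-split pointer in this commit):

```python
# Git LFS pointer for tweebank_ner-test.parquet, as added in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6cd444db8386bb40abb6dc24ec058db3ad256e650ceec67be93351c8b05213d1
size 94671
"""

# Each line is "<key> <value>"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().split("\n"))
print(fields["size"])  # 94671
```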