magdyks committed on
Commit 062b525 · verified · Parent: 5dd6417

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +159 -0
  2. added_tokens.json +1 -0
  3. config.json +43 -0
  4. get_files.sh +8 -0
  5. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,159 @@
+ ---
+ language: en
+ datasets:
+ - conll2003
+ license: mit
+ model-index:
+ - name: dslim/bert-base-NER
+   results:
+   - task:
+       type: token-classification
+       name: Token Classification
+     dataset:
+       name: conll2003
+       type: conll2003
+       config: conll2003
+       split: test
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.9118041001560013
+       verified: true
+     - name: Precision
+       type: precision
+       value: 0.9211550382257732
+       verified: true
+     - name: Recall
+       type: recall
+       value: 0.9306415698281261
+       verified: true
+     - name: F1
+       type: f1
+       value: 0.9258740048459675
+       verified: true
+     - name: loss
+       type: loss
+       value: 0.48325642943382263
+       verified: true
+ ---
+ # bert-base-NER
+
+ ## Model description
+
+ **bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).
+
+ Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
+
+ If you'd like to use a larger model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available.
+
+ ### Available NER models
+ | Model Name | Description | Parameters |
+ |-------------------|-------------|------------------|
+ | [distilbert-NER](https://huggingface.co/dslim/distilbert-NER) **(NEW!)** | Fine-tuned DistilBERT - a smaller, faster, lighter version of BERT | 66M |
+ | [bert-large-NER](https://huggingface.co/dslim/bert-large-NER/) | Fine-tuned bert-large-cased - larger model with slightly better performance | 340M |
+ | [bert-base-NER](https://huggingface.co/dslim/bert-base-NER)-([uncased](https://huggingface.co/dslim/bert-base-NER-uncased)) | Fine-tuned bert-base, available in both cased and uncased versions | 110M |
+
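All of these checkpoints expose the same token-classification interface, so switching between them is a one-line change. A quick comparison sketch (model ids taken from the table above; each checkpoint is downloaded on first use):

```python
from transformers import pipeline

sentence = "Angela Merkel visited Paris."

# Swap in any model id from the table above; the pipeline
# loads the matching tokenizer automatically.
for model_id in ["dslim/distilbert-NER", "dslim/bert-base-NER", "dslim/bert-large-NER"]:
    nlp = pipeline("ner", model=model_id)
    print(model_id, nlp(sentence))
```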
+
+ ## Intended uses & limitations
+
+ #### How to use
+
+ You can use this model with the Transformers *pipeline* for NER.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForTokenClassification
+ from transformers import pipeline
+
+ tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
+ model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
+
+ nlp = pipeline("ner", model=model, tokenizer=tokenizer)
+ example = "My name is Wolfgang and I live in Berlin"
+
+ ner_results = nlp(example)
+ print(ner_results)
+ ```
+
+ #### Limitations and bias
+
+ This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of results may be necessary to handle those cases.
+
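The pipeline's built-in aggregation is one way to handle the subword issue noted above. A minimal sketch using `aggregation_strategy` (a standard Transformers pipeline option; `"simple"`, `"first"`, `"average"`, and `"max"` differ in how word-piece scores are merged):

```python
from transformers import pipeline

# Merge B-/I- word pieces into whole entity spans instead of raw tokens.
nlp = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

print(nlp("My name is Wolfgang and I live in Berlin"))
# Results now carry an 'entity_group' key, e.g. PER for "Wolfgang" and LOC for "Berlin".
```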
+ ## Training data
+
+ This model was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
+
+ The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token is classified as one of the following classes:
+
+ Abbreviation|Description
+ -|-
+ O|Outside of a named entity
+ B-MISC|Beginning of a miscellaneous entity right after another miscellaneous entity
+ I-MISC|Miscellaneous entity
+ B-PER|Beginning of a person's name right after another person's name
+ I-PER|Person's name
+ B-ORG|Beginning of an organization right after another organization
+ I-ORG|Organization
+ B-LOC|Beginning of a location right after another location
+ I-LOC|Location
+
+
+ ### CoNLL-2003 English Dataset Statistics
101
+ This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
102
+ #### # of training examples per entity type
103
+ Dataset|LOC|MISC|ORG|PER
104
+ -|-|-|-|-
105
+ Train|7140|3438|6321|6600
106
+ Dev|1837|922|1341|1842
107
+ Test|1668|702|1661|1617
108
+ #### # of articles/sentences/tokens per dataset
109
+ Dataset |Articles |Sentences |Tokens
110
+ -|-|-|-
111
+ Train |946 |14,987 |203,621
112
+ Dev |216 |3,466 |51,362
113
+ Test |231 |3,684 |46,435
114
+
+ ## Training procedure
+
+ This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained and evaluated the model on the CoNLL-2003 NER task.
+
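The card doesn't state the exact values, but the BERT paper's recommended fine-tuning ranges are small: batch size 16 or 32, learning rate 5e-5, 3e-5, or 2e-5, and 2-4 epochs. A hypothetical `TrainingArguments` sketch with illustrative picks from those ranges (not the author's confirmed settings):

```python
from transformers import TrainingArguments

# Illustrative values drawn from the BERT paper's recommended ranges;
# the actual hyperparameters used for this checkpoint are not documented.
args = TrainingArguments(
    output_dir="bert-base-NER",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
)
```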
+ ## Eval results
+ metric|dev|test
+ -|-|-
+ f1|95.1|91.3
+ precision|95.0|90.7
+ recall|95.3|91.9
+
+ The test metrics are slightly lower than the official Google BERT results, which encoded document context and experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
+
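These are entity-level scores in the usual CoNLL convention. To compute the same kind of metrics on your own predictions, the `seqeval` package (a common choice for BIO-tagged NER, assumed installed here) works directly on tag sequences:

```python
from seqeval.metrics import classification_report, f1_score

# Toy gold/predicted tag sequences for illustration only.
y_true = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

print(f1_score(y_true, y_pred))              # entity-level F1
print(classification_report(y_true, y_pred)) # per-type precision/recall/F1
```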
+ ### BibTeX entry and citation info
+
+ ```
+ @article{DBLP:journals/corr/abs-1810-04805,
+   author    = {Jacob Devlin and
+                Ming{-}Wei Chang and
+                Kenton Lee and
+                Kristina Toutanova},
+   title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
+                Understanding},
+   journal   = {CoRR},
+   volume    = {abs/1810.04805},
+   year      = {2018},
+   url       = {http://arxiv.org/abs/1810.04805},
+   archivePrefix = {arXiv},
+   eprint    = {1810.04805},
+   timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+ ```
+ @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
+     title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
+     author = "Tjong Kim Sang, Erik F. and
+       De Meulder, Fien",
+     booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
+     year = "2003",
+     url = "https://www.aclweb.org/anthology/W03-0419",
+     pages = "142--147",
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {}
config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "_num_labels": 9,
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "O",
+     "1": "B-MISC",
+     "2": "I-MISC",
+     "3": "B-PER",
+     "4": "I-PER",
+     "5": "B-ORG",
+     "6": "I-ORG",
+     "7": "B-LOC",
+     "8": "I-LOC"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "B-LOC": 7,
+     "B-MISC": 1,
+     "B-ORG": 5,
+     "B-PER": 3,
+     "I-LOC": 8,
+     "I-MISC": 2,
+     "I-ORG": 6,
+     "I-PER": 4,
+     "O": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "type_vocab_size": 2,
+   "vocab_size": 28996
+ }
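These are the standard bert-base dimensions (12 layers, 12 attention heads, hidden size 768), which imply roughly 110M parameters. A quick sanity check, assuming `transformers` is installed and the checkpoint has been downloaded:

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")  # ~108M
```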
get_files.sh ADDED
@@ -0,0 +1,8 @@
+ rm -f 'added_tokens.json?download=true'
+ rm -f 'config.json?download=true'
+ rm -f 'model.safetensors?download=true'
+
+ wget https://huggingface.co/dslim/bert-base-NER/resolve/main/README.md
+ wget https://huggingface.co/dslim/bert-base-NER/resolve/main/added_tokens.json
+ wget https://huggingface.co/dslim/bert-base-NER/resolve/main/config.json
+ wget https://huggingface.co/dslim/bert-base-NER/resolve/main/model.safetensors
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b04492186cfb45a64908487a17a9f8d6ddec3a403ef39db5bca688f0fa702a34
+ size 433292294
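This is a Git LFS pointer, not the weights themselves; the actual ~433 MB file is fetched by `git lfs pull` or the `wget` script above. Once downloaded, the tensors can be inspected directly, a minimal sketch assuming the `safetensors` package:

```python
from safetensors.torch import load_file

# Loads the raw tensors from the downloaded file (fails on a bare LFS pointer).
state_dict = load_file("model.safetensors")
print(len(state_dict), "tensors")
# Key names follow the BertForTokenClassification layout, e.g.:
print(state_dict["bert.embeddings.word_embeddings.weight"].shape)
```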