Column schema:

| Column | Type | Range / values |
|---------------|------------------------|----------------------------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-16 00:42:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 522 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-16 00:42:16 |
| card | string | length 11 to 1.01M |
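The rows below follow this schema, one model per row, with the full README stored in the `card` column. As a minimal sketch of how such a dump could be inspected, assuming the rows are available locally as a Parquet file named `models.parquet` (a hypothetical filename, not given in this dump):

```python
# Minimal sketch: inspect the model-metadata dump described by the schema above.
# "models.parquet" is a hypothetical local filename; adjust to wherever the dump lives.
import pandas as pd

df = pd.read_parquet("models.parquet")

# Columns expected from the schema: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card
print(df.dtypes)

# Example query: Flair token-classification models, sorted by downloads.
flair_ner = df[(df["library_name"] == "flair") &
               (df["pipeline_tag"] == "token-classification")]
print(flair_ner.sort_values("downloads", ascending=False)
               [["modelId", "downloads", "likes"]].head())
```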
modelId: stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
author: stefan-it
last_modified: 2023-10-17T23:20:47Z
downloads: 6
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T01:27:32Z
card:
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop . ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre . --- # Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 | | bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 | | bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 | | bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 | [1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
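The card above documents a Flair sequence tagger for historic French NER. As a minimal usage sketch (not part of the original card), assuming `flair` is installed and that the tagger's tag type is `ner`, the model from this row could be loaded and applied like this:

```python
# Sketch: load the Flair sequence tagger listed in the row above and run NER.
# The model id comes from the row; the example sentence is illustrative only.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence("Des avions ennemis lancent dix-sept bombes sur Dunkerque .")
tagger.predict(sentence)

# Print the recognized PER / LOC / ORG spans (assuming the tag type is "ner").
for entity in sentence.get_spans("ner"):
    print(entity)
```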
modelId: stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
author: stefan-it
last_modified: 2023-10-17T23:20:46Z
downloads: 2
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T00:33:04Z
card:
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop . ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre . --- # Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 | | bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 | | bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 | | bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 | [1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
author: stefan-it
last_modified: 2023-10-17T23:20:45Z
downloads: 2
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-13T23:38:29Z
card:
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop . ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre . --- # Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 | | bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 | | bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 | | bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 | [1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
author: stefan-it
last_modified: 2023-10-17T23:20:43Z
downloads: 6
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-13T22:43:14Z
card:
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop . ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre . --- # Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 | | bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 | | bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 | | bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 | [1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
author: stefan-it
last_modified: 2023-10-17T23:20:40Z
downloads: 2
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T00:22:03Z
card:
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop . ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre . --- # Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 | | bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 | | bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 | | bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 | [1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
author: stefan-it
last_modified: 2023-10-17T23:20:39Z
downloads: 3
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-13T23:27:22Z
card:
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop . ( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre . --- # Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 | | bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 | | bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 | | bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 | [1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
author: stefan-it
last_modified: 2023-10-17T23:20:20Z
downloads: 6
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T10:56:58Z
card:
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
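The `Avg.` column in the results tables above appears to be the mean ± standard deviation of the five seed runs, expressed in percent. As a small sketch, assuming a population standard deviation (NumPy's default, `ddof=0`), the `bs8-e10-lr5e-05` row of the Dutch table can be reproduced as follows:

```python
# Sketch: reproduce the "Avg." value of the Dutch bs8-e10-lr5e-05 row above.
# Assumes the reported figure is mean ± population standard deviation, in percent.
import numpy as np

dev_f1 = np.array([0.8191, 0.8086, 0.8237, 0.8318, 0.8235]) * 100

print(f"{dev_f1.mean():.2f} ± {dev_f1.std():.2f}")  # -> 82.13 ± 0.76
```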
modelId: stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
author: stefan-it
last_modified: 2023-10-17T23:20:18Z
downloads: 7
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T09:18:03Z
card:
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
author: stefan-it
last_modified: 2023-10-17T23:20:17Z
downloads: 0
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T08:28:20Z
card:
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
author: stefan-it
last_modified: 2023-10-17T23:20:16Z
downloads: 4
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T11:33:03Z
card:
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
author: stefan-it
last_modified: 2023-10-17T23:20:12Z
downloads: 3
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T09:04:25Z
card:
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
modelId: stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
author: stefan-it
last_modified: 2023-10-17T23:20:05Z
downloads: 6
likes: 0
library_name: flair
tags: [ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
pipeline_tag: token-classification
createdAt: 2023-10-14T09:40:32Z
card:
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
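For reference, a minimal inference sketch with the Flair library is shown below. It is only a sketch: it assumes the standard Flair `SequenceTagger` API, assumes the label type is `ner`, and uses run 1 of the `bs8-e10-lr5e-05` configuration linked in the table above as an example checkpoint; any of the other run repositories can be substituted.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Example checkpoint: run 1 of the bs8-e10-lr5e-05 configuration from the results table above.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# The widget sentence from the model card (historic Dutch newspaper text, OCR noise included).
sentence = Sentence(
    "Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , "
    "Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , "
    "Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor "
    "Mond en Tanden reeds jaren bakend is ."
)

tagger.predict(sentence)

# Print the predicted PER / LOC / ORG spans ("ner" is the assumed label type).
for entity in sentence.get_spans("ner"):
    print(entity)
```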
stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:20:00Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-14T09:29:24Z
--- language: nl license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Professoren der Geneeskun dige Faculteit te Groningen alsook van de HH , Doctoren en Chirurgijns van Groningen , Friesland , Noordholland , Overijssel , Gelderland , Drenthe , in welke Provinciën dit Elixir als Medicament voor Mond en Tanden reeds jaren bakend is . --- # Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset This Flair model was fine-tuned on the [Dutch ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) NER Dataset using hmBERT as backbone LM. The ICDAR-Europeana NER Dataset is a preprocessed variant of the [Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French. The following NEs were annotated: `PER`, `LOC` and `ORG`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.8191][1] | [0.8086][2] | [0.8237][3] | [0.8318][4] | [0.8235][5] | 82.13 ± 0.76 | | bs8-e10-lr3e-05 | [0.8056][6] | [0.8183][7] | [0.8241][8] | [0.8431][9] | [0.8155][10] | 82.13 ± 1.24 | | bs4-e10-lr5e-05 | [0.8055][11] | [0.822][12] | [0.8243][13] | [0.8093][14] | [0.8144][15] | 81.51 ± 0.72 | | bs4-e10-lr3e-05 | [0.8039][16] | [0.8122][17] | [0.8073][18] | [0.8246][19] | [0.8132][20] | 81.22 ± 0.7 | [1]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-icdar-nl-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
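The averaged scores in the results table can be reproduced directly from the five per-run development F1-scores. The sketch below assumes the ± value is the population standard deviation (`ddof=0`), which matches the reported 82.13 ± 0.76 for the `bs8-e10-lr5e-05` configuration.

```python
import numpy as np

# Development F1-scores of the five seeds for bs8-e10-lr5e-05 (taken from the table above).
scores = np.array([0.8191, 0.8086, 0.8237, 0.8318, 0.8235])

mean = scores.mean() * 100        # 82.13
std = scores.std(ddof=0) * 100    # 0.76 (population standard deviation)

print(f"{mean:.2f} ± {std:.2f}")  # -> 82.13 ± 0.76
```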
stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:19:20Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T20:14:43Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
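The configuration labels in the results table follow the repository naming scheme; a small helper for decoding them is sketched below. Only `bs` (batch size) and `lr` (learning rate) are stated explicitly in the card's search space; reading `e10` as 10 fine-tuning epochs is an assumption.

```python
import re

def parse_config(label: str) -> dict:
    """Decode a configuration label such as 'bs4-e10-lr5e-05'.

    bs = batch size, lr = learning rate (both listed in the search space above);
    e = number of fine-tuning epochs (assumed).
    """
    match = re.fullmatch(r"bs(\d+)-e(\d+)-lr(\S+)", label)
    if match is None:
        raise ValueError(f"Unexpected configuration label: {label}")
    return {
        "batch_size": int(match.group(1)),
        "epochs": int(match.group(2)),
        "learning_rate": float(match.group(3)),
    }

print(parse_config("bs4-e10-lr5e-05"))
# {'batch_size': 4, 'epochs': 10, 'learning_rate': 5e-05}
```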
hmbert/flair-hipe-2022-newseye-sv
hmbert
2023-10-17T23:19:17Z
14
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T19:34:49Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
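Since this card accompanies the aggregated `hmbert/flair-hipe-2022-newseye-sv` repository, a short usage sketch is given below, under the same assumptions as the Dutch example earlier (standard Flair `SequenceTagger` API and a `ner` label type covering `PER`, `LOC`, `ORG` and `HumanProd`).

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("hmbert/flair-hipe-2022-newseye-sv")

# Widget sentence from the card: historic Swedish newspaper text with hyphenation artifacts.
sentence = Sentence(
    "Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , "
    "hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . "
    "förordade ut - skottets formulering af § 11 ."
)

tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```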
stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:19:16Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T20:24:38Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:19:15Z
0
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T20:11:18Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:19:13Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T19:58:06Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:19:12Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T19:44:43Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:19:10Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T20:34:28Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:19:09Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T20:21:11Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:19:07Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sv", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T19:54:30Z
--- language: sv license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , Eklund m . fl . förordade ut - skottets formulering af § 11 . --- # Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8433][1] | [0.829][2] | [0.8207][3] | [0.8286][4] | [0.8318][5] | 83.07 ± 0.73 | | bs4-e10-lr3e-05 | [0.819][6] | [0.8296][7] | [0.8439][8] | [0.8202][9] | [0.8327][10] | 82.91 ± 0.91 | | bs8-e10-lr5e-05 | [0.8214][11] | [0.8073][12] | [0.8309][13] | [0.8324][14] | [0.8324][15] | 82.49 ± 0.97 | | bs8-e10-lr3e-05 | [0.817][16] | [0.8155][17] | [0.8187][18] | [0.8259][19] | [0.8088][20] | 81.72 ± 0.55 | [1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: 
https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:18:39Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T18:54:10Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
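For larger amounts of historic Finnish text, prediction can be batched. The sketch below follows the same assumptions as the earlier examples (standard Flair API, `ner` label type, this record's checkpoint); the `mini_batch_size` argument of `predict` is assumed to be available in the installed Flair version.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

# Historic Finnish newspaper sentences; the first one is the widget example from the card.
sentences = [
    Sentence(
        "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi "
        "kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon "
        "näyttelyn puolesta ."
    ),
    # ... further Sentence objects from your own corpus ...
]

# Batched prediction; mini_batch_size trades speed against memory.
tagger.predict(sentences, mini_batch_size=8)

for sentence in sentences:
    for entity in sentence.get_spans("ner"):
        print(entity)
```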
stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:18:38Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T18:40:34Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
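The model can be used directly with Flair. A minimal sketch, assuming the `stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4` checkpoint listed above loads from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` tag type:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned historic NER tagger from the Hugging Face Hub
# (model ID taken from the record above; assumed to be a Flair checkpoint).
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# The Finnish widget sentence from the model card, kept with its OCR noise.
sentence = Sentence(
    "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia "
    "olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa "
    "San Franciscon näyttelyn puolesta ."
)

# Tag the sentence and print every recognized entity span.
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```

Each printed span carries the entity text, its predicted label (`PER`, `LOC`, `ORG` or `HumanProd`) and a confidence score.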
stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:18:33Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T18:37:01Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
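The `Avg.` column of the results table can be reproduced from the five per-seed scores. A short sketch, assuming the reported value is the population mean ± standard deviation scaled to percent (shown here for the `bs4-e10-lr3e-05` row above):

```python
import numpy as np

# Development-set F1 scores of the five bs4-e10-lr3e-05 seeds (runs 1-5).
scores = np.array([0.7669, 0.8009, 0.7722, 0.7653, 0.7579])

# np.std defaults to ddof=0 (population standard deviation), which matches
# the aggregate reported in the card.
mean = 100 * scores.mean()
std = 100 * scores.std()
print(f"{mean:.2f} ± {std:.2f}")  # -> 77.26 ± 1.49
```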
stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:18:28Z
1
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T19:00:35Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:18:24Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T18:06:18Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:18:19Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T18:16:37Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:18:18Z
2
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fi", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T18:03:05Z
--- language: fi license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon näyttelyn puolesta . --- # Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.7669][1] | [0.8009][2] | [0.7722][3] | [0.7653][4] | [0.7579][5] | 77.26 ± 1.49 | | bs8-e10-lr5e-05 | [0.7837][6] | [0.7447][7] | [0.778][8] | [0.7702][9] | [0.7666][10] | 76.86 ± 1.34 | | bs4-e10-lr5e-05 | [0.7856][11] | [0.7722][12] | [0.7484][13] | [0.7619][14] | [0.7556][15] | 76.47 ± 1.3 | | bs8-e10-lr3e-05 | [0.7669][16] | [0.7436][17] | [0.766][18] | [0.7716][19] | [0.7328][20] | 75.62 ± 1.52 | [1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:17:29Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T13:43:18Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
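For the French checkpoints the usage is analogous; a sketch assuming the `stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4` model listed above loads via `SequenceTagger.load`, here with batched prediction over several sentences:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the French NewsEye tagger (model ID from the record above; assumed to
# be a Flair-compatible checkpoint on the Hugging Face Hub).
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# Historic French input, e.g. the widget sentence from the model card.
sentences = [
    Sentence(
        "Le Moniteur universel fait ressortir les avantages de la situation de "
        "l ' Allemagne , sa force militaire , le peu d ' intérêts personnels "
        "qu ' elle peut avoir dans la question d ' Orient ."
    ),
]

# Predict in mini-batches and print the detected entity spans per sentence.
tagger.predict(sentences, mini_batch_size=8)
for sentence in sentences:
    for entity in sentence.get_spans("ner"):
        print(entity)
```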
stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:17:28Z
2
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T11:11:44Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:17:25Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T09:15:07Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:17:22Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T10:55:19Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
hmbert/flair-hipe-2022-newseye-fr
hmbert
2023-10-17T23:17:21Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T09:57:10Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. This research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:17:19Z
1
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T08:58:59Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
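The checkpoints listed in the card above are standard Flair sequence taggers, so any of them can be loaded straight from the Hub. A minimal usage sketch (assuming the repository ships a regular Flair checkpoint and that predictions are exposed under the default `ner` label type):

```python
# Minimal sketch: load one of the checkpoints listed above and tag a sentence.
# Assumes the repository contains a regular Flair checkpoint and that the
# tagger uses the default "ner" label type.
from flair.data import Sentence
from flair.models import SequenceTagger

# Seed 2 of the best-scoring configuration (bs4-e10-lr3e-05); any of the
# twenty linked repositories is loaded the same way.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2"
)

# The widget text from the model card, shortened.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

# Each predicted span carries its label (PER, LOC, ORG or HumanProd) and a confidence score.
for span in sentence.get_spans("ner"):
    print(span.text, span.tag, span.score)
```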
stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:17:17Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T13:11:12Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:17:15Z
2
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T09:40:56Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:17:12Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T13:56:10Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:17:11Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T12:58:17Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:17:09Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T09:27:58Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la question d ' Orient . --- # Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr3e-05 | [0.8103][1] | [0.8199][2] | [0.8181][3] | [0.8091][4] | [0.8102][5] | 81.35 ± 0.45 | | bs8-e10-lr5e-05 | [0.8038][6] | [0.81][7] | [0.8029][8] | [0.82][9] | [0.8214][10] | 81.16 ± 0.78 | | bs8-e10-lr3e-05 | [0.8109][11] | [0.8102][12] | [0.8032][13] | [0.8038][14] | [0.8204][15] | 80.97 ± 0.62 | | bs4-e10-lr5e-05 | [0.8054][16] | [0.8078][17] | [0.8048][18] | [0.8099][19] | [0.8076][20] | 80.71 ± 0.18 | [1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:16:38Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-16T00:29:31Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
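The `Avg.` column in these result tables can be reproduced from the five per-run development scores. A short sketch, assuming the reported numbers are the run scores scaled to percent with a population standard deviation (this convention matches, e.g., the 39.45 ± 0.82 of the bs8-e10-lr5e-05 row):

```python
# Sketch: reproduce the "Avg." column of the German NewsEye table from the five run scores.
# Assumes scores are scaled by 100 and the spread is the population standard deviation (ddof=0).
import numpy as np

runs = {
    "bs8-e10-lr5e-05": [0.3974, 0.3847, 0.3941, 0.4084, 0.3881],
    "bs8-e10-lr3e-05": [0.3759, 0.3900, 0.4087, 0.3916, 0.3884],
    "bs4-e10-lr3e-05": [0.3732, 0.4014, 0.3915, 0.3741, 0.3805],
    "bs4-e10-lr5e-05": [0.3654, 0.3756, 0.3845, 0.3947, 0.3933],
}

for config, scores in runs.items():
    scores = np.array(scores) * 100  # convert to percent
    print(f"{config}: {scores.mean():.2f} ± {scores.std():.2f}")
    # e.g. bs8-e10-lr5e-05: 39.45 ± 0.82
```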
stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:16:34Z
10
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T14:00:42Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
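Because every entry in this listing is a Hub repository following the same naming scheme, the whole grid of checkpoints can also be enumerated programmatically. A small sketch using the `huggingface_hub` client (the `author`/`search` filters are standard API; the exact result set depends on the Hub at query time):

```python
# Sketch: enumerate the hmbench NewsEye checkpoints listed above via the Hub API.
from huggingface_hub import list_models

for model in list_models(author="stefan-it", search="hmbench-newseye"):
    # model.id follows the pattern
    # stefan-it/hmbench-newseye-<lang>-hmbert-bs<batch>-wsFalse-e10-lr<lr>-poolingfirst-layers-1-crfFalse-<seed>
    print(model.id)
```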
stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:16:31Z
8
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T21:08:30Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:16:30Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T18:30:22Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
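The repository names themselves encode the fine-tuning configuration (batch size, the `ws` flag, epochs, learning rate, pooling strategy, layer selection, CRF flag and seed). A hypothetical parsing sketch based purely on the visible naming convention:

```python
# Sketch: recover the hyper-parameters encoded in a checkpoint name.
# The regex is an assumption derived from the naming convention used throughout this listing.
import re

PATTERN = re.compile(
    r"bs(?P<bs>\d+)-ws(?P<ws>True|False)-e(?P<e>\d+)"
    r"-lr(?P<lr>\d+e-\d+)-pooling(?P<pooling>[a-z]+)-layers(?P<layers>-?\d+)"
    r"-crf(?P<crf>True|False)-(?P<seed>\d+)$"
)

name = "stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3"
match = PATTERN.search(name)
print(match.groupdict())
# {'bs': '4', 'ws': 'False', 'e': '10', 'lr': '3e-05',
#  'pooling': 'first', 'layers': '-1', 'crf': 'False', 'seed': '3'}
```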
stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:16:29Z
9
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T15:53:11Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:16:24Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T17:45:44Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:16:22Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T15:08:37Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
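As a usage note for the NewsEye German taggers above: a minimal inference sketch with the standard Flair API is shown below. The model id is taken from run 1 of the bs8-e10-lr5e-05 configuration linked in the card and is only an illustrative choice; any of the linked runs can be substituted.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load one of the fine-tuned NewsEye German runs listed above
# (run 1 of bs8-e10-lr5e-05, chosen purely as an example).
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# Historical German sentence taken from the widget example of the model card.
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen "
    "Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)

# Run NER prediction and print the recognized spans (PER, LOC, ORG, HumanProd).
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```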
stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:16:20Z
2
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T22:26:45Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:16:17Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T17:11:37Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:16:15Z
8
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-15T14:34:45Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen . --- # Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) NER Dataset using hmBERT as backbone LM. The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255). The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr5e-05 | [0.3974][1] | [0.3847][2] | [0.3941][3] | [0.4084][4] | [0.3881][5] | 39.45 ± 0.82 | | bs8-e10-lr3e-05 | [0.3759][6] | [0.39][7] | [0.4087][8] | [0.3916][9] | [0.3884][10] | 39.09 ± 1.05 | | bs4-e10-lr3e-05 | [0.3732][11] | [0.4014][12] | [0.3915][13] | [0.3741][14] | [0.3805][15] | 38.41 ± 1.08 | | bs4-e10-lr5e-05 | [0.3654][16] | [0.3756][17] | [0.3845][18] | [0.3947][19] | [0.3933][20] | 38.27 ± 1.1 | [1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
hmbert/flair-hipe-2022-ajmc-fr
hmbert
2023-10-17T23:15:01Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T11:09:55Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
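For the AjMC French models, the same Flair inference pattern applies; a short sketch follows, again assuming one of the runs linked in the card above (the specific run is illustrative, not prescriptive).

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load an AjMC French run from the results table above
# (run 1 of bs4-e10-lr5e-05, chosen only for illustration).
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# Sentence adapted from the widget example of the model card.
sentence = Sentence(
    "Les tribraques formés par un seul mot sont rares chez les tragiques ."
)

# Predict and print entities (pers, work, loc, object, date, scope).
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```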
stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:14:59Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:52:28Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:14:58Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:43:46Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:14:56Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T11:07:27Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:14:54Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:58:44Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:14:53Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:49:58Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:14:50Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T11:13:40Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:14:49Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T11:04:59Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:14:48Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:56:15Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:14:47Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:47:30Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
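The cards above describe the fine-tuned taggers but do not show inference. Below is a minimal sketch of how such a checkpoint could be loaded with the standard Flair API; the model ID is copied from the record above, the example sentence is a shortened version of that card's French widget text, and the sketch assumes the checkpoint's weights are downloadable from the Hub.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load one of the AjMC French checkpoints listed above (assumes Flair can fetch it from the Hub).
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

# Shortened widget text from the model card: noisy OCR of a historical Classics commentary.
sentence = Sentence(
    "— 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques ."
)

# Run NER prediction and print the detected spans (labels: pers, work, loc, object, date, scope).
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span.text, span.tag, round(span.score, 3))
```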
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:14:42Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:54:22Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:14:41Z
0
0
flair
[ "flair", "token-classification", "sequence-tagger-model", "fr", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T10:45:38Z
--- language: fr license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 . --- # Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8436][1] | [0.8287][2] | [0.8475][3] | [0.8455][4] | [0.8553][5] | 84.41 ± 0.87 | | bs8-e10-lr3e-05 | [0.8228][6] | [0.8407][7] | [0.8557][8] | [0.8532][9] | [0.8385][10] | 84.22 ± 1.18 | | bs4-e10-lr3e-05 | [0.8202][11] | [0.8519][12] | [0.8434][13] | [0.8418][14] | [0.8436][15] | 84.02 ± 1.06 | | bs8-e10-lr5e-05 | [0.8333][16] | [0.8338][17] | [0.8394][18] | [0.8409][19] | [0.8504][20] | 83.96 ± 0.62 | [1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:14:12Z
8
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:48:53Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
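The "Avg." column in the result tables is the mean of the five seed runs plus a spread estimate. As a quick plausibility check, the sketch below recomputes the bs8-e10-lr3e-05 row of the English AjMC table; using the population standard deviation (NumPy's default, ddof=0) reproduces the reported 85.29 ± 0.39, while a sample standard deviation would give roughly 0.44, so the exact aggregation used by hmBench is an assumption here.

```python
import numpy as np

# Development-set micro F1 scores of the five seeds for bs8-e10-lr3e-05 (English AjMC).
runs = [0.8473, 0.8494, 0.8558, 0.8578, 0.8541]

mean = 100 * np.mean(runs)    # 85.29
spread = 100 * np.std(runs)   # 0.39 (population std, ddof=0)

print(f"{mean:.2f} ± {spread:.2f}")
```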
stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:14:06Z
1
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:45:43Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:14:03Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:14:11Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:14:01Z
10
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:53:20Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:14:00Z
9
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:42:39Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
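The repository names above encode the training configuration (dataset, language, backbone, batch size, warmup, epochs, learning rate, subtoken pooling, layers, CRF usage, seed). The small sketch below parses these fields out of one model ID with a regular expression; the field meanings are inferred from the naming scheme and should be treated as an assumption rather than an official spec.

```python
import re

MODEL_ID = "stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"

# Pattern inferred from the hmbench naming scheme:
# hmbench-<dataset>-<lang>-<backbone>-bs<batch>-ws<warmup>-e<epochs>-lr<lr>-pooling<pool>-layers<layers>-crf<crf>-<seed>
pattern = re.compile(
    r"hmbench-(?P<dataset>[a-z]+)-(?P<lang>[a-z]+)-(?P<backbone>[a-z0-9]+)"
    r"-bs(?P<batch_size>\d+)-ws(?P<warmup>True|False)-e(?P<epochs>\d+)"
    r"-lr(?P<lr>[0-9e.-]+)-pooling(?P<pooling>[a-z]+)-layers(?P<layers>-?\d+)"
    r"-crf(?P<crf>True|False)-(?P<seed>\d+)"
)

match = pattern.search(MODEL_ID)
if match:
    config = match.groupdict()
    # e.g. {'dataset': 'ajmc', 'lang': 'en', 'backbone': 'hmbert', 'batch_size': '8',
    #       'warmup': 'False', 'epochs': '10', 'lr': '5e-05', 'pooling': 'first',
    #       'layers': '-1', 'crf': 'False', 'seed': '4'}
    print(config)
```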
stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:13:59Z
5
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:32:18Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
hmbert/flair-hipe-2022-ajmc-en
hmbert
2023-10-17T23:13:54Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:40:29Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
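Each card states that the tagger was fine-tuned with hmBERT as the backbone, first-subtoken pooling, only the last transformer layer, and no CRF, but the training code itself lives in the linked hmBench repository. The following is a rough sketch of how such a setup could look with Flair's standard fine-tuning API; the corpus path, column format, and output directory are placeholders, and the exact hmBench configuration may differ in detail.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder: a CoNLL-style dump of the AjMC English split (token + NER tag per line).
corpus = ColumnCorpus(
    "data/ajmc-en", {0: "text", 1: "ner"},
    train_file="train.tsv", dev_file="dev.tsv", test_file="test.tsv",
)
label_dict = corpus.make_label_dictionary(label_type="ner")

# hmBERT backbone with first-subtoken pooling and only the last layer, as encoded in the model names.
embeddings = TransformerWordEmbeddings(
    "dbmdz/bert-base-historic-multilingual-cased",
    layers="-1", subtoken_pooling="first", fine_tune=True,
)

# No CRF and no RNN on top: a plain linear tag head over the transformer.
tagger = SequenceTagger(
    hidden_size=256, embeddings=embeddings,
    tag_dictionary=label_dict, tag_type="ner",
    use_crf=False, use_rnn=False, reproject_embeddings=False,
)

# One of the searched configurations: batch size 8, 10 epochs, learning rate 3e-05.
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune("resources/taggers/ajmc-en", learning_rate=3e-05, mini_batch_size=8, max_epochs=10)
```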
stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:13:53Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T09:30:09Z
--- language: en license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι . --- # Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs8-e10-lr3e-05 | [0.8473][1] | [0.8494][2] | [0.8558][3] | [0.8578][4] | [0.8541][5] | 85.29 ± 0.39 | | bs8-e10-lr5e-05 | [0.8504][6] | [0.8474][7] | [0.8501][8] | [0.8486][9] | [0.8491][10] | 84.91 ± 0.11 | | bs4-e10-lr3e-05 | [0.8376][11] | [0.8302][12] | [0.8487][13] | [0.8615][14] | [0.8517][15] | 84.59 ± 1.1 | | bs4-e10-lr5e-05 | [0.8498][16] | [0.8341][17] | [0.8405][18] | [0.8528][19] | [0.8359][20] | 84.26 ± 0.74 | [1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [18]: 
https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:12:30Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:55:01Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:12:26Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:25:43Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
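The AjMC cards above describe how these hmBERT checkpoints were fine-tuned, but not how to query them. Below is a minimal usage sketch (not part of the original model card), assuming a recent Flair release and that the tagger was trained with the label type `ner`; the model ID is one of the repositories listed in the card.

```python
# Minimal sketch: load one of the AjMC German hmBERT checkpoints listed above
# and tag a sentence. Assumes a recent Flair release and the label type "ner".
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

sentence = Sentence("Dramatisch war der Stoff vor Sophokles von Äschylos behandelt worden .")
tagger.predict(sentence)

# Print the annotated sentence and the predicted entity spans.
print(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```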
stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:12:24Z
9
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:52:20Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
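As a sanity check on the results table repeated in these cards, the "Avg." column appears to be the mean of the five dev scores with the population standard deviation, scaled to percent. The short sketch below is not from the original card; it reproduces the 88.98 ± 0.5 row (bs4-e10-lr5e-05) under that assumption.

```python
# Reproduce the "Avg." column of the results table from the five dev F1-scores.
# Assumption: the ± value is the population standard deviation (statistics.pstdev).
from statistics import mean, pstdev

runs_bs4_lr5e05 = [0.8937, 0.8849, 0.8977, 0.8867, 0.886]

avg = 100 * mean(runs_bs4_lr5e05)
std = 100 * pstdev(runs_bs4_lr5e05)
print(f"{avg:.2f} ± {round(std, 2)}")  # -> 88.98 ± 0.5
```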
stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:12:22Z
11
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:32:32Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:12:21Z
8
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:23:01Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:12:16Z
7
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:39:18Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:12:13Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:20:14Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:12:12Z
14
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:57:03Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:12:11Z
12
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "base_model:dbmdz/bert-base-historic-multilingual-cased", "base_model:finetune:dbmdz/bert-base-historic-multilingual-cased", "license:mit", "region:us" ]
token-classification
2023-10-13T08:47:26Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: dbmdz/bert-base-historic-multilingual-cased widget: - text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging , das Stück Σαλαμίνιαι folgte . --- # Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022) This Flair model was fine-tuned on the [AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) NER Dataset using hmBERT as backbone LM. The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics, and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project. The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`. # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[3e-05, 5e-05]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-----------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr5e-05 | [0.8937][1] | [0.8849][2] | [0.8977][3] | [0.8867][4] | [0.886][5] | 88.98 ± 0.5 | | bs8-e10-lr5e-05 | [0.8816][6] | [0.8952][7] | [0.8766][8] | [0.8934][9] | [0.8875][10] | 88.69 ± 0.7 | | bs4-e10-lr3e-05 | [0.8738][11] | [0.879][12] | [0.8951][13] | [0.8889][14] | [0.8772][15] | 88.28 ± 0.79 | | bs8-e10-lr3e-05 | [0.8743][16] | [0.8741][17] | [0.8722][18] | [0.8932][19] | [0.8809][20] | 87.89 ± 0.77 | [1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 [11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 [17]: 
https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
hmbyt5-preliminary/flair-hipe-2022-hipe2020-de
hmbyt5-preliminary
2023-10-17T23:09:20Z
9
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T12:04:53Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax inference: false widget: - text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt . --- # Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022) This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM. The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21). The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`. # ⚠️ Inference Widget ⚠️ Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget on the Model Hub does not work with hmByT5 at the moment and is disabled. This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair. [0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[0.00015, 0.00016]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-------------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr0.00016 | [0.7596][1] | [0.7466][2] | [0.7771][3] | [0.7894][4] | [0.7717][5] | 76.89 ± 1.47 | | bs8-e10-lr0.00015 | [0.7593][6] | [0.7663][7] | [0.7611][8] | [0.7647][9] | [0.7667][10] | 76.36 ± 0.29 | | bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14] | [0.746][15] | 75.86 ± 0.89 | | bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 | [1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 [11]: 
https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
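As the inference-widget note in this card explains, the checkpoint can only be loaded when the custom `ByT5Embedding` class is available. A hedged sketch of the intended loading pattern follows (not from the original card); it assumes that `byt5_embeddings.py` from the hmBench repository is on the Python path and that importing it is sufficient for Flair to restore the pickled embedding.

```python
# Hedged sketch: load the hmByT5-based HIPE-2020 tagger outside the inference widget.
# Assumption: byt5_embeddings.py from https://github.com/stefan-it/hmBench is on the
# Python path, so the pickled ByT5Embedding class can be resolved when the model loads.
import byt5_embeddings  # noqa: F401  (must be importable before loading the checkpoint)

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("hmbyt5-preliminary/flair-hipe-2022-hipe2020-de")

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin verließ .")
tagger.predict(sentence)
print(sentence)
```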
stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:09:20Z
5
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T12:10:47Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax inference: false widget: - text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt . --- # Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022) This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM. The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21). The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`. # ⚠️ Inference Widget ⚠️ Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget on the Model Hub does not work with hmByT5 at the moment and is disabled. This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair. [0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[0.00015, 0.00016]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-------------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr0.00016 | [0.7596][1] | [0.7466][2] | [0.7771][3] | [0.7894][4] | [0.7717][5] | 76.89 ± 1.47 | | bs8-e10-lr0.00015 | [0.7593][6] | [0.7663][7] | [0.7611][8] | [0.7647][9] | [0.7667][10] | 76.36 ± 0.29 | | bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14] | [0.746][15] | 75.86 ± 0.89 | | bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 | [1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 [11]: 
https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:09:18Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T11:56:32Z
--- language: de license: mit tags: - flair - token-classification - sequence-tagger-model base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax inference: false widget: - text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt . --- # Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022) This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM. The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21). The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`. # ⚠️ Inference Widget ⚠️ Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget on the Model Hub does not work with hmByT5 at the moment and is disabled. This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair. [0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py # Results We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration: * Batch Sizes: `[8, 4]` * Learning Rates: `[0.00015, 0.00016]` And report micro F1-score on development set: | Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. | |-------------------|--------------|--------------|--------------|--------------|--------------|--------------| | bs4-e10-lr0.00016 | [0.7596][1] | [0.7466][2] | [0.7771][3] | [0.7894][4] | [0.7717][5] | 76.89 ± 1.47 | | bs8-e10-lr0.00015 | [0.7593][6] | [0.7663][7] | [0.7611][8] | [0.7647][9] | [0.7667][10] | 76.36 ± 0.29 | | bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14] | [0.746][15] | 75.86 ± 0.89 | | bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 | [1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 [2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 [3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 [4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 [5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 [6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 [7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 [8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 [9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 [10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 [11]: 
https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 [12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 [13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 [14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 [15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 [16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 [17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 [18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 [19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 [20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub. More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench). # Acknowledgements We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models. Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many Thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:09:17Z
8
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T12:09:32Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:09:16Z
5
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T11:59:18Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:09:15Z
4
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T11:55:41Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:09:13Z
5
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T12:03:25Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:09:11Z
4
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T12:06:30Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
stefan-it
2023-10-17T23:09:10Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T12:02:36Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:09:09Z
8
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T11:38:28Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:09:08Z
3
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-15T11:35:36Z
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the [German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md) NER Dataset using hmByT5 as backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found [here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).

The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.7596][1]  | [0.7466][2]  | [0.7771][3]  | [0.7894][4]  | [0.7717][5]  | 76.89 ± 1.47 |
| bs8-e10-lr0.00015 | [0.7593][6]  | [0.7663][7]  | [0.7611][8]  | [0.7647][9]  | [0.7667][10] | 76.36 ± 0.29 |
| bs8-e10-lr0.00016 | [0.7607][11] | [0.7736][12] | [0.7567][13] | [0.756][14]  | [0.746][15]  | 75.86 ± 0.89 |
| bs4-e10-lr0.00015 | [0.7541][16] | [0.7466][17] | [0.7575][18] | [0.7579][19] | [0.7599][20] | 75.52 ± 0.47 |

[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:08:40Z
7
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-14T08:15:59Z
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of Leads , to the Conservative members of the Leeds Town Council , in the Music Hall , Albion-street , which was very numerously attended .
---

# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)

This Flair model was fine-tuned on the [TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md) NER Dataset using hmByT5 as backbone LM.

The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.

The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00015 | [0.7992][1]  | [0.8226][2]  | [0.8205][3]  | [0.8364][4]  | [0.809][5]   | 81.75 ± 1.26 |
| bs8-e10-lr0.00015 | [0.8095][6]  | [0.83][7]    | [0.8024][8]  | [0.8112][9]  | [0.8189][10] | 81.44 ± 0.94 |
| bs8-e10-lr0.00016 | [0.8144][11] | [0.8209][12] | [0.8065][13] | [0.8056][14] | [0.82][15]   | 81.35 ± 0.65 |
| bs4-e10-lr0.00016 | [0.8056][16] | [0.8105][17] | [0.809][18]  | [0.808][19]  | [0.8056][20] | 80.77 ± 0.19 |

[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
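The following sketch is not part of the original card; it only illustrates how the "Avg." column above appears to be computed from the five per-seed dev F1-scores. The use of the population standard deviation (`ddof=0`) is an assumption inferred from the reported numbers, not a documented choice.

```python
# Aggregation sketch: mean ± std (in percent) for the best configuration above.
import numpy as np

# Dev F1-scores of bs4-e10-lr0.00015, runs 1-5 (taken from the table above).
dev_f1 = np.array([0.7992, 0.8226, 0.8205, 0.8364, 0.809])

mean = 100 * dev_f1.mean()
std = 100 * dev_f1.std(ddof=0)  # population std reproduces the reported ± value

print(f"{mean:.2f} ± {std:.2f}")  # -> 81.75 ± 1.26
```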
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:08:39Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-13T22:15:13Z
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of Leads , to the Conservative members of the Leeds Town Council , in the Music Hall , Albion-street , which was very numerously attended .
---

# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)

This Flair model was fine-tuned on the [TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md) NER Dataset using hmByT5 as backbone LM.

The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.

The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class. This class needs to be present when running the model with Flair. Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled. This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py

# Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`

and report the micro F1-score on the development set:

| Configuration     | Run 1        | Run 2        | Run 3        | Run 4        | Run 5        | Avg.         |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00015 | [0.7992][1]  | [0.8226][2]  | [0.8205][3]  | [0.8364][4]  | [0.809][5]   | 81.75 ± 1.26 |
| bs8-e10-lr0.00015 | [0.8095][6]  | [0.83][7]    | [0.8024][8]  | [0.8112][9]  | [0.8189][10] | 81.44 ± 0.94 |
| bs8-e10-lr0.00016 | [0.8144][11] | [0.8209][12] | [0.8065][13] | [0.8056][14] | [0.82][15]   | 81.35 ± 0.65 |
| bs4-e10-lr0.00016 | [0.8056][16] | [0.8105][17] | [0.809][18]  | [0.808][19]  | [0.8056][20] | 80.77 ± 0.19 |

[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).

# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and [Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). Many thanks for providing access to the TPUs ❤️
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
stefan-it
2023-10-17T23:08:38Z
11
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-13T17:14:40Z
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:08:37Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-13T12:11:07Z
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:08:37Z
7
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-14T06:58:16Z
hmbyt5-preliminary/flair-hipe-2022-topres19th-en
hmbyt5-preliminary
2023-10-17T23:08:36Z
10
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-14T01:57:55Z
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:08:35Z
11
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-13T20:57:21Z
stefan-it/hmbench-topres19th-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
stefan-it
2023-10-17T23:08:34Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-13T10:50:25Z
stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
stefan-it
2023-10-17T23:08:34Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-14T05:40:10Z
stefan-it/hmbench-topres19th-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
stefan-it
2023-10-17T23:08:32Z
6
0
flair
[ "flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us" ]
token-classification
2023-10-13T19:39:42Z