modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-16 00:42:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 522 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-16 00:42:16) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Jochoi/sd-class-butterflies-Jochoi-32 | Jochoi | 2023-10-18T05:45:07Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-10-18T05:44:46Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Jochoi/sd-class-butterflies-Jochoi-32')
image = pipeline().images[0]
image
```
|
vgral/cornetto-creams-dreams | vgral | 2023-10-18T05:35:32Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-10-18T05:35:29Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A cctic55yui888xcv ice cream
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
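A minimal inference sketch, assuming the repository stores DreamBooth LoRA weights in the layout AutoTrain typically produces; the instance prompt comes from the metadata above, and the output filename is a placeholder:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then apply the DreamBooth weights from this repository (LoRA format assumed)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("vgral/cornetto-creams-dreams")

# Use the instance prompt the model was trained with
image = pipe(prompt="A cctic55yui888xcv ice cream").images[0]
image.save("cornetto.png")  # placeholder output path
```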
|
hung200504/electra-finetuned-cpgqa | hung200504 | 2023-10-18T05:24:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:deepset/electra-base-squad2",
"base_model:finetune:deepset/electra-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T05:24:37Z | ---
license: cc-by-4.0
base_model: deepset/electra-base-squad2
tags:
- generated_from_trainer
model-index:
- name: electra-finetuned-cpgqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-finetuned-cpgqa
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset.
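A minimal usage sketch (the question and context below are illustrative placeholders, not from the training data):
```python
from transformers import pipeline

# Extractive question answering with this checkpoint
qa = pipeline("question-answering", model="hung200504/electra-finetuned-cpgqa")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model was fine-tuned from deepset/electra-base-squad2 for extractive question answering.",
)
print(result["answer"], result["score"])
```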
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
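For reference, these settings map roughly onto `transformers` `TrainingArguments` as in the sketch below (assuming a single device, so the listed train_batch_size equals the per-device batch size; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# The hyperparameters listed above, expressed as TrainingArguments
args = TrainingArguments(
    output_dir="electra-finetuned-cpgqa",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```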
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
haesun/bert-base-uncased-issues-128 | haesun | 2023-10-18T05:15:34Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-31T04:25:02Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1725
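A minimal fill-mask usage sketch (the prompt is a placeholder, not taken from the training data):
```python
from transformers import pipeline

# Masked-token prediction with the fine-tuned checkpoint
mask_filler = pipeline("fill-mask", model="haesun/bert-base-uncased-issues-128")
for pred in mask_filler("This issue is about [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```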
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1083 | 1.0 | 291 | 1.6830 |
| 1.6339 | 2.0 | 582 | 1.3684 |
| 1.4793 | 3.0 | 873 | 1.4214 |
| 1.3969 | 4.0 | 1164 | 1.4244 |
| 1.3425 | 5.0 | 1455 | 1.2755 |
| 1.2837 | 6.0 | 1746 | 1.3189 |
| 1.2316 | 7.0 | 2037 | 1.2051 |
| 1.2106 | 8.0 | 2328 | 1.2933 |
| 1.1779 | 9.0 | 2619 | 1.2659 |
| 1.1491 | 10.0 | 2910 | 1.2092 |
| 1.1254 | 11.0 | 3201 | 1.2649 |
| 1.0971 | 12.0 | 3492 | 1.1623 |
| 1.0925 | 13.0 | 3783 | 1.1459 |
| 1.0772 | 14.0 | 4074 | 1.0685 |
| 1.0648 | 15.0 | 4365 | 1.2594 |
| 1.0651 | 16.0 | 4656 | 1.1725 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv1_new_pretrain_w_init_48_ver2_qnli | gokuls | 2023-10-18T05:08:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T00:54:53Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init_48_ver2_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5053999633900788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_ver2_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7034 | 1.0 | 1637 | 0.6952 | 0.5054 |
| 0.6953 | 2.0 | 3274 | 0.6950 | 0.4946 |
| 0.694 | 3.0 | 4911 | 0.6932 | 0.4946 |
| 0.6934 | 4.0 | 6548 | 0.6931 | 0.5054 |
| 0.6936 | 5.0 | 8185 | 0.6936 | 0.4946 |
| 0.6933 | 6.0 | 9822 | 0.6931 | 0.5054 |
| 0.6933 | 7.0 | 11459 | 0.6931 | 0.4946 |
| 0.6932 | 8.0 | 13096 | 0.6932 | 0.4946 |
| 0.6933 | 9.0 | 14733 | 0.6932 | 0.4946 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
EzraWilliam/ezra-wav2vec2-large-xls-r-300m-indonesian-Fleurs-CL-colab | EzraWilliam | 2023-10-18T05:05:06Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-10-11T14:30:31Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ezra-wav2vec2-large-xls-r-300m-indonesian-Fleurs-CL-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ezra-wav2vec2-large-xls-r-300m-indonesian-Fleurs-CL-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4675
- Wer: 0.4520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.8777 | 2.02 | 200 | 2.9207 | 1.0 |
| 2.913 | 4.03 | 400 | 2.8439 | 1.0 |
| 2.23 | 6.05 | 600 | 0.9957 | 0.8677 |
| 0.5692 | 8.06 | 800 | 0.5358 | 0.5346 |
| 0.2491 | 10.08 | 1000 | 0.4675 | 0.4520 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
srjn/Reinforce-Pixelcopter-PLE-v0 | srjn | 2023-10-18T05:03:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T05:02:59Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 15.50 +/- 10.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hung200504/bert-uncased-finetuned-cpgqa | hung200504 | 2023-10-18T04:57:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:twmkn9/bert-base-uncased-squad2",
"base_model:finetune:twmkn9/bert-base-uncased-squad2",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T04:57:18Z | ---
base_model: twmkn9/bert-base-uncased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-uncased-finetuned-cpgqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-finetuned-cpgqa
This model is a fine-tuned version of [twmkn9/bert-base-uncased-squad2](https://huggingface.co/twmkn9/bert-base-uncased-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ijik-loker/RVC_Gumball_Watterson_Season_3 | ijik-loker | 2023-10-18T04:45:08Z | 0 | 0 | null | [
"rvc",
"voice cloning",
"The Amazing World of Gumball",
"Gumball Watterson",
"Jacob Hopkins",
"en",
"region:us"
]
| null | 2023-10-18T04:06:42Z | ---
language:
- en
tags:
- rvc
- voice cloning
- The Amazing World of Gumball
- Gumball Watterson
- Jacob Hopkins
---
## Model Details
Voice of Jacob Hopkins as Gumball Watterson in Season 3 of the cartoon The Amazing World of Gumball.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [ijik-loker](https://huggingface.co/ijik-loker)
- **Model type:** [Retrieval-based Voice Conversion (RVC)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
- **Language(s):** English
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Used in the popular Retrieval-based Voice Conversion WebUI for inference, or in real time with [Voice Changer](https://github.com/w-okada/voice-changer).
The index file should be used alongside the model.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
#### Voice clips dataset total duration
v1 model: 11min 18s
v2 model: 26min 50s
Trained using these episodes from Season 3:
1. The Boss
2. The Move
3. The Burden
4. The Bros
5. The Countdown
6. The Nobody
7. The Fraud
8. The Void
9. The Name
10. The Extras (1 line)
11. The Oracle
12. The Safety
13. The Procrastinators
14. The Puppy
15. The Recipe
16. The Society
17. The Spoiler
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
1. Remove noise using [Ultimate Vocal Remover 5](https://github.com/Anjok07/ultimatevocalremovergui) UVR-DeNoise.
2. Extract vocals using RVC Web UI [HP5-主旋律人声vocals+其他instrumentals.pth](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/uvr5_weights/HP5-%E4%B8%BB%E6%97%8B%E5%BE%8B%E4%BA%BA%E5%A3%B0vocals%2B%E5%85%B6%E4%BB%96instrumentals.pth).
3. Remove echo and reverb using Ultimate Vocal Remover 5 UVR-DeEcho-DeReverb.
4. Manually diarise voices in [Audacity](https://www.audacityteam.org/) using labels.
5. Export multiple to .wav by labels.
6. Train using RVC
* Target Sample Rate: 48k
* Version: v2
* Total training epochs: 200
* Base model G: f0G48k.pth
* Base model D: f0D48k.pth
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
#### Summary
v1 seems to perform just fine. The v2 voice sounds coarse at times. |
ijik-loker/RVC_Darwin_Watterson_Season_3 | ijik-loker | 2023-10-18T04:43:54Z | 0 | 0 | null | [
"rvc",
"voice cloning",
"The Amazing World of Gumball",
"Darwin Watterson",
"Terrell Ransom, Jr.",
"en",
"region:us"
]
| null | 2023-10-18T04:20:48Z | ---
language:
- en
tags:
- rvc
- voice cloning
- The Amazing World of Gumball
- Darwin Watterson
- Terrell Ransom, Jr.
---
## Model Details
Voice of Terrell Ransom, Jr. as Darwin Watterson in Season 3 of the cartoon The Amazing World of Gumball.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [ijik-loker](https://huggingface.co/ijik-loker)
- **Model type:** [Retrieval-based Voice Conversion (RVC)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
- **Language(s):** English
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Used in the popular Retrieval-based Voice Conversion WebUI for inference, or in real time with [Voice Changer](https://github.com/w-okada/voice-changer).
The index file should be used alongside the model.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
#### Voice clips dataset total duration
v1 model: 16min 14s
Trained using these episodes from Season 3:
1. The Boss
2. The Move
3. The Burden
4. The Bros
5. The Countdown
6. The Nobody
7. The Fraud
8. The Void
9. The Name
10. The Oracle
11. The Safety
12. The Procrastinators
13. The Puppy
14. The Recipe
15. The Society
16. The Spoiler
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
1. Remove noise using [Ultimate Vocal Remover 5](https://github.com/Anjok07/ultimatevocalremovergui) UVR-DeNoise.
2. Extract vocals using RVC Web UI [HP5-主旋律人声vocals+其他instrumentals.pth](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/uvr5_weights/HP5-%E4%B8%BB%E6%97%8B%E5%BE%8B%E4%BA%BA%E5%A3%B0vocals%2B%E5%85%B6%E4%BB%96instrumentals.pth).
3. Remove echo and reverb using Ultimate Vocal Remover 5 UVR-DeEcho-DeReverb.
4. Manually diarise voices in [Audacity](https://www.audacityteam.org/) using labels.
5. Export multiple to .wav by labels.
6. Train using RVC
* Target Sample Rate: 48k
* Version: v2
* Total training epochs: 200
* Base model G: f0G48k.pth
* Base model D: f0D48k.pth |
gokuls/hBERTv2_new_pretrain_48_ver2_qnli | gokuls | 2023-10-18T04:25:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T00:41:54Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_ver2_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5839282445542742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_ver2_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6731
- Accuracy: 0.5839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6905 | 1.0 | 1637 | 0.6830 | 0.5724 |
| 0.6788 | 2.0 | 3274 | 0.6778 | 0.5812 |
| 0.6699 | 3.0 | 4911 | 0.6731 | 0.5839 |
| 0.6711 | 4.0 | 6548 | 0.6835 | 0.5678 |
| 0.6723 | 5.0 | 8185 | 0.6852 | 0.5546 |
| 0.674 | 6.0 | 9822 | 0.6818 | 0.5678 |
| 0.6679 | 7.0 | 11459 | 0.6759 | 0.5737 |
| 0.6681 | 8.0 | 13096 | 0.6932 | 0.5329 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
allenai/specter | allenai | 2023-10-18T04:19:07Z | 62,408 | 60 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"en",
"dataset:SciDocs",
"arxiv:2004.07180",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: "https://camo.githubusercontent.com/7d080b7a769f7fdf64ac0ebeb47b039cb50be35287e3071f9d633f0fe33e7596/68747470733a2f2f692e6962622e636f2f33544331576d472f737065637465722d6c6f676f2d63726f707065642e706e67"
license: apache-2.0
datasets:
- SciDocs
metrics:
- F1
- accuracy
- map
- ndcg
---
## SPECTER
SPECTER is a pre-trained language model for generating document-level embeddings of scientific documents. It is pre-trained on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning.
If you're coming here because you want to embed papers, SPECTER has now been superseded by [SPECTER2](https://huggingface.co/allenai/specter2_proximity). Use that instead.
Paper: [SPECTER: Document-level Representation Learning using Citation-informed Transformers](https://arxiv.org/pdf/2004.07180.pdf)
Original Repo: [Github](https://github.com/allenai/specter)
Evaluation Benchmark: [SciDocs](https://github.com/allenai/scidocs)
Authors: *Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld*
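For reference, a minimal embedding sketch following the usage shown in the original repository (the example paper metadata is a placeholder):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

# Concatenate title and abstract with the tokenizer's separator token
papers = [{"title": "SPECTER", "abstract": "Representation learning of scientific documents using citations."}]
text = [p["title"] + tokenizer.sep_token + (p.get("abstract") or "") for p in papers]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt", max_length=512)
result = model(**inputs)

# The first token ([CLS]) of the last hidden state is used as the document embedding
embeddings = result.last_hidden_state[:, 0, :]
```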
|
openaccess-ai-collective/neft-exp1 | openaccess-ai-collective | 2023-10-18T04:16:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-18T03:54:48Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# out
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3731
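A minimal text-generation sketch (the card does not document a prompt template, so the plain-text prompt below is a placeholder; `device_map="auto"` assumes `accelerate` is installed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openaccess-ai-collective/neft-exp1")
model = AutoModelForCausalLM.from_pretrained(
    "openaccess-ai-collective/neft-exp1", torch_dtype=torch.bfloat16, device_map="auto"
)

# Plain-text prompt used only as an illustration
inputs = tokenizer("Explain what fine-tuning a language model means.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```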
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9422 | 0.02 | 1 | 1.0091 |
| 1.0215 | 0.2 | 13 | 1.0004 |
| 0.9933 | 0.41 | 26 | 1.0071 |
| 0.9197 | 0.61 | 39 | 1.0136 |
| 0.9285 | 0.81 | 52 | 1.0075 |
| 0.5858 | 1.02 | 65 | 1.0082 |
| 0.5522 | 1.22 | 78 | 1.0546 |
| 0.4992 | 1.42 | 91 | 1.0683 |
| 0.6085 | 1.62 | 104 | 1.0638 |
| 0.5118 | 1.83 | 117 | 1.0654 |
| 0.3243 | 2.03 | 130 | 1.1113 |
| 0.3196 | 2.23 | 143 | 1.1957 |
| 0.2582 | 2.44 | 156 | 1.2038 |
| 0.273 | 2.64 | 169 | 1.1949 |
| 0.2818 | 2.84 | 182 | 1.2000 |
| 0.1427 | 3.05 | 195 | 1.2817 |
| 0.1246 | 3.25 | 208 | 1.3245 |
| 0.1394 | 3.45 | 221 | 1.3561 |
| 0.1088 | 3.66 | 234 | 1.3770 |
| 0.0985 | 3.86 | 247 | 1.3731 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.14.0
|
hpandana/Q-Learning-Taxi-v3 | hpandana | 2023-10-18T04:12:30Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T04:12:25Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Learning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is assumed to be the Deep RL Course helper that downloads and
# unpickles q-learning.pkl; `gym` refers to Gym/Gymnasium, imported beforehand.
model = load_from_hub(repo_id="hpandana/Q-Learning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
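A short follow-up sketch for a greedy rollout with the downloaded Q-table (the `qtable` key and the Gymnasium-style `reset`/`step` API are assumptions based on the Deep RL Course conventions):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Pick the action with the highest Q-value for the current state (key name assumed)
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```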
|
hung200504/bert-finetuned-cpgqa | hung200504 | 2023-10-18T04:07:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:deepset/bert-base-cased-squad2",
"base_model:finetune:deepset/bert-base-cased-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T04:07:22Z | ---
license: cc-by-4.0
base_model: deepset/bert-base-cased-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-cpgqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cpgqa
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Potatoonz/my_awesome_eli5_mlm_model | Potatoonz | 2023-10-18T04:03:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:47Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9878
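A minimal fill-mask usage sketch (the prompt is a placeholder, not taken from the training data):
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="Potatoonz/my_awesome_eli5_mlm_model")
# distilroberta-based checkpoints use "<mask>" as the mask token
prompt = f"The Milky Way is a {mask_filler.tokenizer.mask_token} galaxy."
for pred in mask_filler(prompt):
    print(pred["token_str"], round(pred["score"], 3))
```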
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7249 | 1.0 | 1147 | 2.0644 |
| 1.8276 | 2.0 | 2294 | 2.0437 |
| 1.9742 | 3.0 | 3441 | 1.9878 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Rabul/Le | Rabul | 2023-10-18T04:02:28Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"finance",
"text-classification",
"ae",
"dataset:lmsys/lmsys-chat-1m",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-10-18T04:01:32Z | ---
license: apache-2.0
datasets:
- lmsys/lmsys-chat-1m
language:
- ae
metrics:
- bertscore
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- finance
--- |
ryanlcf/my_awesome_eli5_mlm_model | ryanlcf | 2023-10-18T03:56:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:39:04Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2466 | 1.0 | 1159 | 2.0600 |
| 2.1492 | 2.0 | 2318 | 2.0183 |
| 2.1107 | 3.0 | 3477 | 1.9931 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Afishally/my_awesome_eli5_mlm_model | Afishally | 2023-10-18T03:54:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:38:29Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2574 | 1.0 | 1141 | 2.0525 |
| 2.1639 | 2.0 | 2282 | 2.0132 |
| 2.118 | 3.0 | 3423 | 1.9563 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Calibae/my_awesome_eli5_mlm_model | Calibae | 2023-10-18T03:53:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:34:43Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2403 | 1.0 | 1126 | 2.0550 |
| 2.1623 | 2.0 | 2252 | 2.0164 |
| 2.1091 | 3.0 | 3378 | 2.0004 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Pssssss/my_awesome_eli5_mlm_model | Pssssss | 2023-10-18T03:52:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:34:46Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2677 | 1.0 | 1132 | 2.0885 |
| 2.1499 | 2.0 | 2264 | 2.0546 |
| 2.1333 | 3.0 | 3396 | 2.0309 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ghzc/my_awesome_eli5_mlm_model | ghzc | 2023-10-18T03:51:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:34:04Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2446 | 1.0 | 1143 | 2.0583 |
| 2.1637 | 2.0 | 2286 | 2.0377 |
| 2.1135 | 3.0 | 3429 | 2.0078 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
macarius/my_awesome_eli5_mlm_model | macarius | 2023-10-18T03:51:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:08Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.247 | 1.0 | 1142 | 2.0409 |
| 2.1588 | 2.0 | 2284 | 1.9868 |
| 2.1256 | 3.0 | 3426 | 1.9938 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
wfelizabeth/my_awesome_eli5_mlm_model | wfelizabeth | 2023-10-18T03:50:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:46Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2449 | 1.0 | 1136 | 2.0600 |
| 2.1747 | 2.0 | 2272 | 2.0294 |
| 2.1262 | 3.0 | 3408 | 1.9973 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
PHILANDER/my_awesome_eli5_mlm_model | PHILANDER | 2023-10-18T03:50:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:44Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2415 | 1.0 | 1146 | 2.0722 |
| 2.159 | 2.0 | 2292 | 2.0261 |
| 2.1127 | 3.0 | 3438 | 2.0136 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yeyintaung29/my_awesome_eli5_mlm_model | yeyintaung29 | 2023-10-18T03:50:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:28:39Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2487 | 1.0 | 1138 | 2.0710 |
| 2.1434 | 2.0 | 2276 | 2.0345 |
| 2.129 | 3.0 | 3414 | 2.0011 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
frenchcries/my_awesome_eli5_mlm_model | frenchcries | 2023-10-18T03:50:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:33:29Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.238 | 1.0 | 1127 | 2.0800 |
| 2.1592 | 2.0 | 2254 | 2.0530 |
| 2.1143 | 3.0 | 3381 | 1.9991 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
GhostX38/my_awesome_eli5_mlm_model | GhostX38 | 2023-10-18T03:49:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-10-18T03:30:02Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2451 | 1.0 | 1134 | 2.0896 |
| 2.151 | 2.0 | 2268 | 2.0393 |
| 2.118 | 3.0 | 3402 | 2.0124 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Globaly/lora-familias-130k | Globaly | 2023-10-18T03:35:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-10-18T03:23:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
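For reference, the quantization settings above expressed as a `transformers` `BitsAndBytesConfig` (a sketch; the original training script is not part of this repository):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute, matching the config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```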
### Framework versions
- PEFT 0.4.0
|
SaiedAlshahrani/bloom_3B_4bit_qlora_flores | SaiedAlshahrani | 2023-10-18T03:35:10Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
]
| null | 2023-10-18T02:33:51Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_4bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_4bit_qlora_flores
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
guocheng66/a2c-PandaReachDense-v3 | guocheng66 | 2023-10-18T03:11:08Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-18T03:05:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load the A2C agent
checkpoint = load_from_hub(repo_id="guocheng66/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
asas-ai/bloom_3B_8bit_qlora_flores | asas-ai | 2023-10-18T02:30:34Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
]
| null | 2023-10-18T02:29:50Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_8bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_8bit_qlora_flores
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
SaiedAlshahrani/bloom_3B_8bit_qlora_flores | SaiedAlshahrani | 2023-10-18T02:29:53Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
]
| null | 2023-10-18T01:28:26Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_8bit_qlora_flores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_8bit_qlora_flores
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.4.0
- Tokenizers 0.14.1
|
vatipham/layoutlm-funsd | vatipham | 2023-10-18T02:27:18Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-10-17T02:08:55Z | ---
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8174
- Answer: {'precision': 0.7233333333333334, 'recall': 0.8046971569839307, 'f1': 0.7618490345231129, 'number': 809}
- Header: {'precision': 0.35766423357664234, 'recall': 0.4117647058823529, 'f1': 0.3828125, 'number': 119}
- Question: {'precision': 0.7904085257548845, 'recall': 0.8356807511737089, 'f1': 0.8124144226380648, 'number': 1065}
- Overall Precision: 0.7351
- Overall Recall: 0.7978
- Overall F1: 0.7652
- Overall Accuracy: 0.8019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.3435 | 1.0 | 10 | 1.1455 | {'precision': 0.29554655870445345, 'recall': 0.27070457354758964, 'f1': 0.2825806451612903, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.43828125, 'recall': 0.5267605633802817, 'f1': 0.47846481876332625, 'number': 1065} | 0.3858 | 0.3914 | 0.3885 | 0.6180 |
| 0.9706 | 2.0 | 20 | 0.8933 | {'precision': 0.5545454545454546, 'recall': 0.6786155747836835, 'f1': 0.6103390772651472, 'number': 809} | {'precision': 0.08695652173913043, 'recall': 0.03361344537815126, 'f1': 0.048484848484848485, 'number': 119} | {'precision': 0.6115916955017301, 'recall': 0.6638497652582159, 'f1': 0.6366501575866726, 'number': 1065} | 0.5748 | 0.6322 | 0.6022 | 0.7308 |
| 0.7426 | 3.0 | 30 | 0.7478 | {'precision': 0.6294058408862034, 'recall': 0.7725587144622992, 'f1': 0.6936736958934517, 'number': 809} | {'precision': 0.1891891891891892, 'recall': 0.11764705882352941, 'f1': 0.1450777202072539, 'number': 119} | {'precision': 0.6858333333333333, 'recall': 0.7727699530516432, 'f1': 0.726710816777042, 'number': 1065} | 0.6449 | 0.7336 | 0.6864 | 0.7770 |
| 0.6123 | 4.0 | 40 | 0.6950 | {'precision': 0.6286266924564797, 'recall': 0.8034610630407911, 'f1': 0.705371676614216, 'number': 809} | {'precision': 0.19387755102040816, 'recall': 0.15966386554621848, 'f1': 0.17511520737327188, 'number': 119} | {'precision': 0.6943268416596104, 'recall': 0.7699530516431925, 'f1': 0.730186999109528, 'number': 1065} | 0.6438 | 0.7471 | 0.6916 | 0.7886 |
| 0.5267 | 5.0 | 50 | 0.6804 | {'precision': 0.6574172892209178, 'recall': 0.761433868974042, 'f1': 0.7056128293241695, 'number': 809} | {'precision': 0.21818181818181817, 'recall': 0.20168067226890757, 'f1': 0.2096069868995633, 'number': 119} | {'precision': 0.7246496290189612, 'recall': 0.8253521126760563, 'f1': 0.771729587357331, 'number': 1065} | 0.6721 | 0.7622 | 0.7143 | 0.8013 |
| 0.4587 | 6.0 | 60 | 0.6701 | {'precision': 0.670490093847758, 'recall': 0.7948084054388134, 'f1': 0.7273755656108597, 'number': 809} | {'precision': 0.2108843537414966, 'recall': 0.2605042016806723, 'f1': 0.2330827067669173, 'number': 119} | {'precision': 0.7309602649006622, 'recall': 0.8291079812206573, 'f1': 0.7769467663880335, 'number': 1065} | 0.6729 | 0.7812 | 0.7230 | 0.7977 |
| 0.3981 | 7.0 | 70 | 0.6637 | {'precision': 0.7029063509149623, 'recall': 0.8071693448702101, 'f1': 0.7514384349827388, 'number': 809} | {'precision': 0.2698412698412698, 'recall': 0.2857142857142857, 'f1': 0.27755102040816326, 'number': 119} | {'precision': 0.7621483375959079, 'recall': 0.8394366197183099, 'f1': 0.7989276139410187, 'number': 1065} | 0.7096 | 0.7933 | 0.7491 | 0.8062 |
| 0.3608 | 8.0 | 80 | 0.6778 | {'precision': 0.7083333333333334, 'recall': 0.7985166872682324, 'f1': 0.7507263219058687, 'number': 809} | {'precision': 0.25874125874125875, 'recall': 0.31092436974789917, 'f1': 0.2824427480916031, 'number': 119} | {'precision': 0.7633851468048359, 'recall': 0.8300469483568075, 'f1': 0.7953216374269007, 'number': 1065} | 0.7081 | 0.7863 | 0.7451 | 0.8003 |
| 0.311 | 9.0 | 90 | 0.6931 | {'precision': 0.6991247264770241, 'recall': 0.7898640296662547, 'f1': 0.7417295414973882, 'number': 809} | {'precision': 0.2835820895522388, 'recall': 0.31932773109243695, 'f1': 0.30039525691699603, 'number': 119} | {'precision': 0.7606244579358196, 'recall': 0.8234741784037559, 'f1': 0.7908025247971145, 'number': 1065} | 0.7060 | 0.7797 | 0.7411 | 0.8055 |
| 0.276 | 10.0 | 100 | 0.7144 | {'precision': 0.7298787210584344, 'recall': 0.8182941903584673, 'f1': 0.7715617715617716, 'number': 809} | {'precision': 0.3103448275862069, 'recall': 0.37815126050420167, 'f1': 0.34090909090909094, 'number': 119} | {'precision': 0.7814159292035399, 'recall': 0.8291079812206573, 'f1': 0.8045558086560365, 'number': 1065} | 0.7287 | 0.7978 | 0.7617 | 0.8062 |
| 0.2393 | 11.0 | 110 | 0.7342 | {'precision': 0.7155555555555555, 'recall': 0.796044499381953, 'f1': 0.7536571094207138, 'number': 809} | {'precision': 0.296551724137931, 'recall': 0.36134453781512604, 'f1': 0.32575757575757575, 'number': 119} | {'precision': 0.774869109947644, 'recall': 0.8338028169014085, 'f1': 0.8032564450474899, 'number': 1065} | 0.7188 | 0.7903 | 0.7529 | 0.8042 |
| 0.2227 | 12.0 | 120 | 0.7539 | {'precision': 0.7054945054945055, 'recall': 0.7935723114956736, 'f1': 0.7469458987783596, 'number': 809} | {'precision': 0.33884297520661155, 'recall': 0.3445378151260504, 'f1': 0.3416666666666667, 'number': 119} | {'precision': 0.7686440677966102, 'recall': 0.8516431924882629, 'f1': 0.8080178173719377, 'number': 1065} | 0.7191 | 0.7978 | 0.7564 | 0.8006 |
| 0.2119 | 13.0 | 130 | 0.7774 | {'precision': 0.7263736263736263, 'recall': 0.8170580964153276, 'f1': 0.7690517742873763, 'number': 809} | {'precision': 0.28125, 'recall': 0.37815126050420167, 'f1': 0.3225806451612903, 'number': 119} | {'precision': 0.7714033539276258, 'recall': 0.8206572769953052, 'f1': 0.7952684258416743, 'number': 1065} | 0.7172 | 0.7928 | 0.7531 | 0.7952 |
| 0.1882 | 14.0 | 140 | 0.7688 | {'precision': 0.7270668176670442, 'recall': 0.7935723114956736, 'f1': 0.7588652482269503, 'number': 809} | {'precision': 0.3384615384615385, 'recall': 0.3697478991596639, 'f1': 0.35341365461847385, 'number': 119} | {'precision': 0.7883597883597884, 'recall': 0.8394366197183099, 'f1': 0.8130968622100955, 'number': 1065} | 0.7359 | 0.7928 | 0.7633 | 0.8024 |
| 0.1767 | 15.0 | 150 | 0.7717 | {'precision': 0.7244785949506037, 'recall': 0.8158220024721878, 'f1': 0.7674418604651163, 'number': 809} | {'precision': 0.3548387096774194, 'recall': 0.3697478991596639, 'f1': 0.36213991769547327, 'number': 119} | {'precision': 0.789612676056338, 'recall': 0.8422535211267606, 'f1': 0.8150840527033166, 'number': 1065} | 0.7374 | 0.8033 | 0.7690 | 0.8020 |
| 0.1703 | 16.0 | 160 | 0.7943 | {'precision': 0.7231638418079096, 'recall': 0.7911001236093943, 'f1': 0.755608028335301, 'number': 809} | {'precision': 0.36231884057971014, 'recall': 0.42016806722689076, 'f1': 0.38910505836575876, 'number': 119} | {'precision': 0.79185119574845, 'recall': 0.8394366197183099, 'f1': 0.8149498632634458, 'number': 1065} | 0.7361 | 0.7948 | 0.7643 | 0.8017 |
| 0.1643 | 17.0 | 170 | 0.8087 | {'precision': 0.7207207207207207, 'recall': 0.7911001236093943, 'f1': 0.7542722451384797, 'number': 809} | {'precision': 0.33098591549295775, 'recall': 0.3949579831932773, 'f1': 0.3601532567049809, 'number': 119} | {'precision': 0.7932263814616756, 'recall': 0.8356807511737089, 'f1': 0.8139003200731596, 'number': 1065} | 0.7328 | 0.7913 | 0.7609 | 0.7990 |
| 0.1443 | 18.0 | 180 | 0.8170 | {'precision': 0.7230419977298524, 'recall': 0.7873918417799752, 'f1': 0.7538461538461538, 'number': 809} | {'precision': 0.36231884057971014, 'recall': 0.42016806722689076, 'f1': 0.38910505836575876, 'number': 119} | {'precision': 0.7898936170212766, 'recall': 0.8366197183098592, 'f1': 0.8125854993160054, 'number': 1065} | 0.7350 | 0.7918 | 0.7623 | 0.7994 |
| 0.148 | 19.0 | 190 | 0.8169 | {'precision': 0.7245240761478163, 'recall': 0.799752781211372, 'f1': 0.7602820211515863, 'number': 809} | {'precision': 0.35766423357664234, 'recall': 0.4117647058823529, 'f1': 0.3828125, 'number': 119} | {'precision': 0.792149866190901, 'recall': 0.8338028169014085, 'f1': 0.8124428179322964, 'number': 1065} | 0.7364 | 0.7948 | 0.7645 | 0.8015 |
| 0.1441 | 20.0 | 200 | 0.8174 | {'precision': 0.7233333333333334, 'recall': 0.8046971569839307, 'f1': 0.7618490345231129, 'number': 809} | {'precision': 0.35766423357664234, 'recall': 0.4117647058823529, 'f1': 0.3828125, 'number': 119} | {'precision': 0.7904085257548845, 'recall': 0.8356807511737089, 'f1': 0.8124144226380648, 'number': 1065} | 0.7351 | 0.7978 | 0.7652 | 0.8019 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_qnli | gokuls | 2023-10-18T02:24:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:48:19Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5053999633900788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7008 | 1.0 | 1637 | 0.6943 | 0.5054 |
| 0.6946 | 2.0 | 3274 | 0.6931 | 0.5054 |
| 0.6938 | 3.0 | 4911 | 0.6932 | 0.4946 |
| 0.6943 | 4.0 | 6548 | 0.6934 | 0.5054 |
| 0.694 | 5.0 | 8185 | 0.6933 | 0.4946 |
| 0.6932 | 6.0 | 9822 | 0.6931 | 0.5054 |
| 0.6934 | 7.0 | 11459 | 0.6931 | 0.5054 |
| 0.6932 | 8.0 | 13096 | 0.6931 | 0.5054 |
| 0.6932 | 9.0 | 14733 | 0.6932 | 0.4946 |
| 0.6932 | 10.0 | 16370 | 0.6933 | 0.4946 |
| 0.6932 | 11.0 | 18007 | 0.6931 | 0.5054 |
| 0.6932 | 12.0 | 19644 | 0.6931 | 0.5054 |
| 0.6932 | 13.0 | 21281 | 0.6931 | 0.4946 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mangoxb/tangled3 | mangoxb | 2023-10-18T02:23:28Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-10-18T02:18:17Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of qzx rapunzel or vmn flynn
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mangoxb/tangled3
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of qzx rapunzel or vmn flynn using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
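Below is a minimal inference sketch with diffusers (not part of the original card; the fp16 dtype, step count, and output filename are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the DreamBooth LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mangoxb/tangled3")

# Prompt with the instance tokens the LoRA was trained on.
image = pipe("a photo of qzx rapunzel or vmn flynn", num_inference_steps=30).images[0]
image.save("tangled3_sample.png")
```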
|
gokuls/hBERTv1_new_pretrain_48_ver2_qnli | gokuls | 2023-10-18T02:08:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:35:26Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_ver2_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5053999633900788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_ver2_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6982 | 1.0 | 1637 | 0.6940 | 0.5054 |
| 0.6941 | 2.0 | 3274 | 0.6932 | 0.4946 |
| 0.6938 | 3.0 | 4911 | 0.6933 | 0.4946 |
| 0.6936 | 4.0 | 6548 | 0.6931 | 0.5054 |
| 0.6934 | 5.0 | 8185 | 0.6936 | 0.4946 |
| 0.6934 | 6.0 | 9822 | 0.6936 | 0.4946 |
| 0.6934 | 7.0 | 11459 | 0.6931 | 0.5054 |
| 0.6932 | 8.0 | 13096 | 0.6931 | 0.4946 |
| 0.6932 | 9.0 | 14733 | 0.6935 | 0.5054 |
| 0.6932 | 10.0 | 16370 | 0.6932 | 0.4946 |
| 0.6932 | 11.0 | 18007 | 0.6931 | 0.5054 |
| 0.6932 | 12.0 | 19644 | 0.6932 | 0.4946 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
codys12/MergeLlama-7b | codys12 | 2023-10-18T02:04:35Z | 13 | 2 | peft | [
"peft",
"pytorch",
"llama",
"text-generation",
"dataset:codys12/MergeLlama",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| text-generation | 2023-10-11T20:49:26Z | ---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
license: llama2
datasets:
- codys12/MergeLlama
pipeline_tag: text-generation
---
# Model Card for MergeLlama-7b
Automated merge conflict resolution
## Model Details
PEFT fine-tune of codellama/CodeLlama-7b-hf
### Model Description
- **Developed by:** DreamcatcherAI
- **License:** llama2
- **Finetuned from model [optional]:** CodeLlama-7b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** codys12/MergeLlama-7b
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
Input should be formatted as
```
<<<<<<<
Current change
=======
Incoming change
>>>>>>>
```
MergeLlama will provide the resolution.
You can chain multiple conflicts/resolutions for improved context.
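A minimal inference sketch with Transformers and PEFT (not from the original card; the example conflict and generation settings are illustrative assumptions):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the CodeLlama base model and attach the MergeLlama LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "codys12/MergeLlama-7b")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# A hypothetical merge conflict, formatted as described above.
conflict = """<<<<<<<
total = sum(items)
=======
total = sum(items) + tax
>>>>>>>
"""

inputs = tokenizer(conflict, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```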
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0 |
foreverip/poca-SoccerTwos | foreverip | 2023-10-18T01:40:21Z | 65 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-10-18T01:25:53Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that walks you through training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: foreverip/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Tatvajsh/Lllama_AHS_V_7.1 | Tatvajsh | 2023-10-18T01:30:21Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:finetune:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-17T22:10:06Z | ---
license: apache-2.0
base_model: openlm-research/open_llama_3b_v2
tags:
- generated_from_trainer
model-index:
- name: Lllama_AHS_V_7.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Lllama_AHS_V_7.1
This model is a fine-tuned version of [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-09
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
pyf98/DPHuBERT | pyf98 | 2023-10-18T01:15:55Z | 0 | 4 | null | [
"region:us"
]
| null | 2023-05-29T04:52:12Z | ---
{}
---
This repo contains model checkpoints for [DPHuBERT](https://github.com/pyf98/DPHuBERT), a task-agnostic compression method for speech SSL based on joint distillation and structured pruning.
Yifan Peng, Yui Sudo, Shakeel Muhammad, and Shinji Watanabe, “DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models,” in Proc. INTERSPEECH, 2023. |
teslanando/ChatterBotQA | teslanando | 2023-10-18T00:57:16Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T00:56:49Z | ---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: ChatterBotQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ChatterBotQA
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5899556108621122e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1445 | 1.0 | 750 | 1.9222 |
| 1.6222 | 2.0 | 1500 | 1.6359 |
| 1.1724 | 3.0 | 2250 | 1.6205 |
| 0.9271 | 4.0 | 3000 | 1.6871 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yesj1234/mbart-mmt_mid3_ko-ja | yesj1234 | 2023-10-18T00:52:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"ko",
"ja",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-18T00:45:21Z | ---
language:
- ko
- ja
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-mmt_mid3_ko-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-mmt_mid3_ko-ja
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8652
- Bleu: 10.1883
- Gen Len: 17.2057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.6216 | 0.23 | 1500 | 1.5229 | 2.686 | 17.599 |
| 1.3587 | 0.46 | 3000 | 1.3061 | 4.0749 | 17.3772 |
| 1.2279 | 0.68 | 4500 | 1.1881 | 5.2878 | 17.3642 |
| 1.1408 | 0.91 | 6000 | 1.0994 | 5.4783 | 17.4093 |
| 0.9977 | 1.14 | 7500 | 1.0313 | 7.6015 | 17.36 |
| 0.9582 | 1.37 | 9000 | 0.9918 | 8.2303 | 17.3526 |
| 0.9525 | 1.59 | 10500 | 0.9811 | 8.2837 | 17.2597 |
| 0.9415 | 1.82 | 12000 | 0.9589 | 8.1592 | 17.2241 |
| 0.856 | 2.05 | 13500 | 0.9462 | 7.8401 | 17.4066 |
| 0.8273 | 2.28 | 15000 | 0.9336 | 8.6082 | 17.1918 |
| 0.8066 | 2.5 | 16500 | 0.9220 | 9.7751 | 17.5198 |
| 0.784 | 2.73 | 18000 | 0.8949 | 10.292 | 17.4097 |
| 0.8016 | 2.96 | 19500 | 0.8958 | 9.0262 | 17.4097 |
| 0.6872 | 3.19 | 21000 | 0.9043 | 9.7549 | 17.2672 |
| 0.7107 | 3.42 | 22500 | 0.8994 | 10.3016 | 17.0973 |
| 0.6726 | 3.64 | 24000 | 0.8747 | 10.5183 | 17.2871 |
| 0.6699 | 3.87 | 25500 | 0.8652 | 10.1883 | 17.2057 |
| 0.612 | 4.1 | 27000 | 0.8949 | 9.5697 | 17.2443 |
| 0.621 | 4.33 | 28500 | 0.8904 | 10.8592 | 17.329 |
| 0.6219 | 4.55 | 30000 | 0.8772 | 10.925 | 17.482 |
| 0.6164 | 4.78 | 31500 | 0.8694 | 11.8749 | 17.1624 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv1_new_pretrain_w_init_48_ver2_cola | gokuls | 2023-10-18T00:43:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T00:12:53Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init_48_ver2_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_ver2_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6179
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6277 | 1.0 | 134 | 0.6457 | 0.0 | 0.6913 |
| 0.6178 | 2.0 | 268 | 0.6240 | 0.0 | 0.6913 |
| 0.6152 | 3.0 | 402 | 0.6201 | 0.0 | 0.6913 |
| 0.6138 | 4.0 | 536 | 0.6181 | 0.0 | 0.6913 |
| 0.6111 | 5.0 | 670 | 0.6181 | 0.0 | 0.6913 |
| 0.6122 | 6.0 | 804 | 0.6232 | 0.0 | 0.6913 |
| 0.611 | 7.0 | 938 | 0.6179 | 0.0 | 0.6913 |
| 0.6087 | 8.0 | 1072 | 0.6182 | 0.0 | 0.6913 |
| 0.6112 | 9.0 | 1206 | 0.6226 | 0.0 | 0.6913 |
| 0.6116 | 10.0 | 1340 | 0.6210 | 0.0 | 0.6913 |
| 0.6091 | 11.0 | 1474 | 0.6194 | 0.0 | 0.6913 |
| 0.6087 | 12.0 | 1608 | 0.6184 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
yesj1234/xlsr_mid1_ko-zh | yesj1234 | 2023-10-18T00:42:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"./sample_speech.py",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-10-18T00:39:11Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- ./sample_speech.py
- generated_from_trainer
model-index:
- name: ko-xlsr5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ko-xlsr5
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the ./SAMPLE_SPEECH.PY - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4067
- Cer: 0.1077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4452 | 0.97 | 1800 | 0.9257 | 0.2337 |
| 1.0684 | 1.95 | 3600 | 0.6873 | 0.1834 |
| 0.9269 | 2.92 | 5400 | 0.5857 | 0.1599 |
| 0.8264 | 3.9 | 7200 | 0.5357 | 0.1442 |
| 0.7637 | 4.87 | 9000 | 0.5069 | 0.1365 |
| 0.7033 | 5.85 | 10800 | 0.4744 | 0.1277 |
| 0.652 | 6.82 | 12600 | 0.4477 | 0.1210 |
| 0.5999 | 7.8 | 14400 | 0.4310 | 0.1144 |
| 0.5606 | 8.77 | 16200 | 0.4150 | 0.1114 |
| 0.5371 | 9.75 | 18000 | 0.4068 | 0.1085 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
chargoddard/rpguild-chatml-13b | chargoddard | 2023-10-18T00:41:37Z | 1,416 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"roleplay",
"en",
"dataset:chargoddard/rpguild",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-17T23:57:26Z | ---
datasets:
- chargoddard/rpguild
language:
- en
tags:
- llama
- roleplay
license: cc-by-nc-4.0
---
It's a roleplay model! Trained on ~50k examples of human-written roleplay sessions. Not a hint of GPT-flavor writing. Very creative and very able to roll with the punches.
It does tend to speak for you more than it should and has a very, very strong bias towards speech in quotes.
Uses ChatML prompt format, with optional length suffixes:
```
<|im_start|>system
Enter roleplay mode. You are Steve.
Steve is a nasty little man and solves all his problems by punching people in the face.<|im_end|>
<|im_start|>Gundalf, Gandalf's cousin with a gun
Gundalf runs frantically up to Steve and begins yelling. "We have a big problem, Steve!"<|im_end|>
<|im_start|>Steve (Length: short)
"What is it now Gundalf? Do you want me to punch you in the face?" Steve asked, looking down at him. He was pretty sure that this guy wasn't worth the trouble of punching him in the face, but he didn't like to leave loose ends hanging either.
"There's been an incident," Gundalf said quickly, "A dwarf has stolen some gold from Smaug."
"Well I don't care about any of those things," Steve responded, turning away. <|im_end|>
```
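A minimal generation sketch with Transformers (not part of the original card; the sampling settings are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chargoddard/rpguild-chatml-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a ChatML-style prompt, ending with the character header the model should continue.
prompt = (
    "<|im_start|>system\n"
    "Enter roleplay mode. You are Steve.\n"
    "Steve is a nasty little man and solves all his problems by punching people in the face.<|im_end|>\n"
    "<|im_start|>Gundalf, Gandalf's cousin with a gun\n"
    'Gundalf runs frantically up to Steve and begins yelling. "We have a big problem, Steve!"<|im_end|>\n'
    "<|im_start|>Steve (Length: short)\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```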
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
hung200504/2 | hung200504 | 2023-10-18T00:29:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T00:29:26Z | ---
tags:
- generated_from_trainer
model-index:
- name: '2'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv2_new_pretrain_48_ver2_cola | gokuls | 2023-10-18T00:26:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-18T00:07:33Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_ver2_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_ver2_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6316 | 1.0 | 134 | 0.6287 | 0.0 | 0.6913 |
| 0.6171 | 2.0 | 268 | 0.6182 | 0.0 | 0.6913 |
| 0.6141 | 3.0 | 402 | 0.6182 | 0.0 | 0.6913 |
| 0.613 | 4.0 | 536 | 0.6184 | 0.0 | 0.6913 |
| 0.6112 | 5.0 | 670 | 0.6185 | 0.0 | 0.6913 |
| 0.6127 | 6.0 | 804 | 0.6248 | 0.0 | 0.6913 |
| 0.6109 | 7.0 | 938 | 0.6182 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
kdw2136/llama2-category-fine-tuned-0.2 | kdw2136 | 2023-10-18T00:23:08Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-10-17T07:25:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
hung200504/bert-base-cased | hung200504 | 2023-10-18T00:20:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-18T00:20:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
alessiodm/ppo-LunarLander-v2 | alessiodm | 2023-10-18T00:05:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-17T23:07:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.32 +/- 14.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption; check the repo's file list.
checkpoint = load_from_hub(repo_id="alessiodm/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dtorres-zAgile/opt-zc-misti-ft | dtorres-zAgile | 2023-10-17T23:57:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-17T05:18:33Z | ---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: opt-zc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-zc
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
OckerGui/videomae-base-finetuned-SSBD | OckerGui | 2023-10-17T23:40:16Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-10-17T23:23:53Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-SSBD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-SSBD
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9692
- Accuracy: 0.5714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1628 | 0.14 | 21 | 1.0386 | 0.4 |
| 1.1777 | 1.14 | 42 | 1.0652 | 0.4 |
| 1.0752 | 2.14 | 63 | 1.4299 | 0.12 |
| 0.6149 | 3.14 | 84 | 1.1845 | 0.32 |
| 0.4486 | 4.14 | 105 | 1.6551 | 0.16 |
| 0.2998 | 5.14 | 126 | 1.6550 | 0.16 |
| 0.138 | 6.14 | 147 | 2.1708 | 0.2 |
| 0.097 | 7.02 | 150 | 2.1794 | 0.2 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_ver2_mrpc | gokuls | 2023-10-17T23:35:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:27:35Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_48_ver2_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.7968503937007875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_ver2_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5902
- Accuracy: 0.6838
- F1: 0.7969
- Combined Score: 0.7403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6435 | 1.0 | 58 | 0.6367 | 0.6544 | 0.7531 | 0.7037 |
| 0.6154 | 2.0 | 116 | 0.5902 | 0.6838 | 0.7969 | 0.7403 |
| 0.5494 | 3.0 | 174 | 0.6056 | 0.6985 | 0.7953 | 0.7469 |
| 0.4115 | 4.0 | 232 | 0.7328 | 0.625 | 0.7119 | 0.6684 |
| 0.2774 | 5.0 | 290 | 0.9700 | 0.6740 | 0.7654 | 0.7197 |
| 0.1932 | 6.0 | 348 | 1.1100 | 0.6667 | 0.7563 | 0.7115 |
| 0.1464 | 7.0 | 406 | 1.1999 | 0.6642 | 0.7540 | 0.7091 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
gokuls/hBERTv1_new_pretrain_48_ver2_cola | gokuls | 2023-10-17T23:27:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-17T23:16:25Z | ---
language:
- en
base_model: gokuls/bert_12_layer_model_v1_complete_training_new_48
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_ver2_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_ver2_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6454 | 1.0 | 134 | 0.6330 | 0.0 | 0.6913 |
| 0.6173 | 2.0 | 268 | 0.6188 | 0.0 | 0.6913 |
| 0.6141 | 3.0 | 402 | 0.6181 | 0.0 | 0.6913 |
| 0.6147 | 4.0 | 536 | 0.6181 | 0.0 | 0.6913 |
| 0.6134 | 5.0 | 670 | 0.6191 | 0.0 | 0.6913 |
| 0.6112 | 6.0 | 804 | 0.6335 | 0.0 | 0.6913 |
| 0.6114 | 7.0 | 938 | 0.6183 | 0.0 | 0.6913 |
| 0.6095 | 8.0 | 1072 | 0.6181 | 0.0 | 0.6913 |
| 0.6113 | 9.0 | 1206 | 0.6206 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:42Z | 5 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T18:25:26Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
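A minimal tagging sketch with Flair (not part of the original card; the label type `ner` is an assumption, and the example is a shortened version of the widget sentence):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Nous recevons le premier numéro d'un nouveau journal, le Radical-Libéral.")
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```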
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:41Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:35:44Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:23:38Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T19:01:00Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
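A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```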
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:37Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T18:11:22Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
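A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```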
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:36Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:21:42Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
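A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```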
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
hmbert/flair-hipe-2022-hipe2020-fr | hmbert | 2023-10-17T23:23:35Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T16:31:58Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
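A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("hmbert/flair-hipe-2022-hipe2020-fr")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```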
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:23:34Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T15:41:41Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
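A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```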
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:23:33Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T18:46:58Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
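A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```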
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:31Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:07:37Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
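A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```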
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:23:29Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T16:17:52Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
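A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```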
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:26Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T17:46:35Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
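A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```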
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:23:24Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T16:56:55Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
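A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```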
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:23:22Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T15:16:30Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as the backbone language model.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
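A minimal usage sketch with [Flair](https://github.com/flairNLP/flair) is shown below; it assumes the tagger loads directly from the Hugging Face Hub via `SequenceTagger.load` and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1")

# Tag an example sentence of historic French newspaper text.
sentence = Sentence("Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral .")
tagger.predict(sentence)

# Print the detected entity spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```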
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8314][1] | [0.8377][2] | [0.8359][3] | [0.8214][4] | [0.8364][5] | 83.26 ± 0.6 |
| bs8-e10-lr3e-05 | [0.83][6] | [0.8274][7] | [0.8358][8] | [0.8234][9] | [0.8327][10] | 82.99 ± 0.43 |
| bs8-e10-lr5e-05 | [0.8301][11] | [0.8321][12] | [0.8267][13] | [0.8266][14] | [0.8308][15] | 82.93 ± 0.22 |
| bs4-e10-lr5e-05 | [0.8181][16] | [0.8087][17] | [0.8239][18] | [0.8219][19] | [0.8224][20] | 81.9 ± 0.55 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
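The training log linked above can also be fetched programmatically; a short sketch assuming the `huggingface_hub` client is installed:
```python
from huggingface_hub import hf_hub_download

# download the training.log file that accompanies this model repository
log_path = hf_hub_download(
    repo_id="stefan-it/hmbench-hipe2020-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1",
    filename="training.log",
)

# print the last lines, where the final evaluation scores usually appear
with open(log_path, encoding="utf-8") as f:
    print("".join(f.readlines()[-20:]))
```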
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:23:02Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T14:16:51Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
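As with the other checkpoints of this series, the model can be used with Flair for tagging historic German text; a sketch for batched prediction (the `mini_batch_size` value is an arbitrary choice):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

# several (pre-tokenized) historic German sentences can be tagged in one call
sentences = [
    Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ ."),
    Sentence("Der Feind dürfte bald in die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt ."),
]

# batched prediction; the results are written into the Sentence objects
tagger.predict(sentences, mini_batch_size=8)

for sentence in sentences:
    print(sentence)
```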
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:23:01Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:44:48Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
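The Avg. column above appears to be the mean and population standard deviation of the five runs, scaled to percent; a quick check for the bs4-e10-lr3e-05 row:
```python
import numpy as np

# development-set F1 scores of the five seeds for bs4-e10-lr3e-05 (see table above)
scores = np.array([0.7876, 0.7978, 0.7803, 0.7859, 0.7907])

mean = scores.mean() * 100      # 78.85
std = scores.std(ddof=0) * 100  # ~0.58 (population standard deviation)

print(f"{mean:.2f} ± {std:.2f}")
```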
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:58Z | 9 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:09:09Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:22:57Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T14:07:42Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:22:56Z | 9 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:35:40Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:52Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:00:01Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:22:51Z | 9 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:58:40Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:22:48Z | 9 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:54:48Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:46Z | 11 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T11:50:36Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:22:45Z | 10 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:51:51Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:22:44Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T13:19:40Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
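The Avg. column appears to be the mean and (population) standard deviation of the five runs, reported in percent; a short sketch of how such a summary can be reproduced, using the bs4-e10-lr3e-05 row from the table above as an example:

```python
import numpy as np

# development-set F1-scores of the five bs4-e10-lr3e-05 runs from the table above
runs = np.array([0.7876, 0.7978, 0.7803, 0.7859, 0.7907])

mean = runs.mean() * 100  # 78.85
std = runs.std() * 100    # 0.58 -- np.std uses the population standard deviation by default

print(f"{mean:.2f} ± {std:.2f}")  # -> 78.85 ± 0.58
```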
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:22:42Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:47:55Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:22:41Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T12:15:54Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:40Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T11:43:50Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 ± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 ± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 ± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 ± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:08Z | 5 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-16T21:31:09Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
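As with the other hmBench models, the tagger can be used with Flair's standard inference API; the snippet below is a minimal usage sketch, assuming the checkpoint can be loaded from the Model Hub by its repository id:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# tag an example sentence taken from the widget text above
sentence = Sentence("On Wednesday , a public dinner was given by the Conservative Burgesses of Leads , in the Music Hall , Albion-street .")
tagger.predict(sentence)

# print all detected toponym entities (BUILDING, LOC, STREET)
for entity in sentence.get_spans("ner"):
    print(entity)
```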
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.8024][1] | [0.7936][2] | [0.8083][3] | [0.8042][4] | [0.8122][5] | 80.41 ± 0.63 |
| bs4-e10-lr3e-05 | [0.791][6] | [0.8143][7] | [0.8017][8] | [0.8065][9] | [0.8065][10] | 80.4 ± 0.77 |
| bs8-e10-lr5e-05 | [0.7974][11] | [0.7983][12] | [0.8092][13] | [0.8094][14] | [0.7828][15] | 79.94 ± 0.98 |
| bs4-e10-lr5e-05 | [0.8058][16] | [0.7966][17] | [0.8033][18] | [0.7889][19] | [0.786][20] | 79.61 ± 0.77 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:22:03Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-16T22:04:37Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.8024][1] | [0.7936][2] | [0.8083][3] | [0.8042][4] | [0.8122][5] | 80.41 ± 0.63 |
| bs4-e10-lr3e-05 | [0.791][6] | [0.8143][7] | [0.8017][8] | [0.8065][9] | [0.8065][10] | 80.4 ± 0.77 |
| bs8-e10-lr5e-05 | [0.7974][11] | [0.7983][12] | [0.8092][13] | [0.8094][14] | [0.7828][15] | 79.94 ± 0.98 |
| bs4-e10-lr5e-05 | [0.8058][16] | [0.7966][17] | [0.8033][18] | [0.7889][19] | [0.786][20] | 79.61 ± 0.77 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:22:02Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-16T21:22:38Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.8024][1] | [0.7936][2] | [0.8083][3] | [0.8042][4] | [0.8122][5] | 80.41 ± 0.63 |
| bs4-e10-lr3e-05 | [0.791][6] | [0.8143][7] | [0.8017][8] | [0.8065][9] | [0.8065][10] | 80.4 ± 0.77 |
| bs8-e10-lr5e-05 | [0.7974][11] | [0.7983][12] | [0.8092][13] | [0.8094][14] | [0.7828][15] | 79.94 ± 0.98 |
| bs4-e10-lr5e-05 | [0.8058][16] | [0.7966][17] | [0.8033][18] | [0.7889][19] | [0.786][20] | 79.61 ± 0.77 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:21:36Z | 11 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-15T00:33:30Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
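The tagger follows Flair's standard inference API; the snippet below is a minimal usage sketch, assuming the checkpoint can be loaded from the Model Hub by its repository id:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# tag an example sentence taken from the widget text above
sentence = Sentence("Parmi les remèdes recommandés par la Société , il faut mentionner celui que M . Schatzmann , de Lausanne , a proposé :")
tagger.predict(sentence)

# print all detected named entities (loc, org, pers)
for entity in sentence.get_spans("ner"):
    print(entity)
```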
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:21:35Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T22:52:50Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:21:33Z | 5 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T19:33:44Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:21:31Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-15T00:03:19Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:21:29Z | 3 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T20:44:20Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:21:26Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-15T02:54:56Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
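# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

# Widget sentence from this card, used as example input
sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner "
    "celui que M . Schatzmann , de Lausanne , a proposé :"
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```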
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:21:25Z | 1 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-15T01:14:16Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
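# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# Widget sentence from this card, used as example input
sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner "
    "celui que M . Schatzmann , de Lausanne , a proposé :"
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```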
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:21:23Z | 2 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T21:54:25Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
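# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

# Widget sentence from this card, used as example input
sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner "
    "celui que M . Schatzmann , de Lausanne , a proposé :"
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```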
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:21:22Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T20:14:43Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
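# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# Widget sentence from this card, used as example input
sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner "
    "celui que M . Schatzmann , de Lausanne , a proposé :"
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```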
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:21:21Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-15T02:34:25Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
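# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5"
)

# Widget sentence from this card, used as example input
sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner "
    "celui que M . Schatzmann , de Lausanne , a proposé :"
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```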
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:21:16Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T19:54:03Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: 'Parmi les remèdes recommandés par la Société , il faut mentionner celui que
M . Schatzmann , de Lausanne , a proposé :'
---
# Fine-tuned Flair Model on LeTemps French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[LeTemps French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md)
NER Dataset using hmBERT as backbone LM.
The LeTemps dataset consists of NE-annotated historical French newspaper articles from the mid-19th to the mid-20th century.
The following NEs were annotated: `loc`, `org` and `pers`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.6585][1] | [0.6464][2] | [0.6532][3] | [0.6589][4] | [0.6493][5] | 65.33 ± 0.49 |
| bs4-e10-lr3e-05 | [0.6482][6] | [0.6361][7] | [0.653][8] | [0.6624][9] | [0.65][10] | 64.99 ± 0.85 |
| bs8-e10-lr5e-05 | [0.6525][11] | [0.6448][12] | [0.6492][13] | [0.6467][14] | [0.6438][15] | 64.74 ± 0.31 |
| bs4-e10-lr5e-05 | [0.6403][16] | [0.6292][17] | [0.642][18] | [0.6432][19] | [0.6386][20] | 63.87 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-letemps-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
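# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-letemps-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1"
)

# Widget sentence from this card, used as example input
sentence = Sentence(
    "Parmi les remèdes recommandés par la Société , il faut mentionner "
    "celui que M . Schatzmann , de Lausanne , a proposé :"
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```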
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:20:59Z | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T00:11:04Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
. 31 décembre .
---
# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmBERT as backbone LM.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following NEs were annotated: `PER`, `LOC` and `ORG`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 |
| bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 |
| bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 |
| bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 |
[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
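# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; a clean part of the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# Part of the widget sentence from this card, used as example input
sentence = Sentence(
    "Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre ."
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```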
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:20:55Z | 6 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T22:21:06Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
. 31 décembre .
---
# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmBERT as backbone LM.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following NEs were annotated: `PER`, `LOC` and `ORG`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 |
| bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 |
| bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 |
| bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 |
[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
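# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; a clean part of the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

# Part of the widget sentence from this card, used as example input
sentence = Sentence(
    "Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre ."
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```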
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:20:53Z | 7 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-14T00:49:21Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
. 31 décembre .
---
# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmBERT as backbone LM.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following NEs were annotated: `PER`, `LOC` and `ORG`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 |
| bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 |
| bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 |
| bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 |
[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
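# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; a clean part of the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5"
)

# Part of the widget sentence from this card, used as example input
sentence = Sentence(
    "Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre ."
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```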
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
hmbert/flair-icdar-fr | hmbert | 2023-10-17T23:20:52Z | 4 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-13T23:54:47Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
. 31 décembre .
---
# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER Dataset using hmBERT as backbone LM.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following NEs were annotated: `PER`, `LOC` and `ORG`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 |
| bs4-e10-lr5e-05 | [0.774][6] | [0.7571][7] | [0.7685][8] | [0.7694][9] | [0.7704][10] | 76.79 ± 0.57 |
| bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 |
| bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 |
[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
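# Usage
A minimal usage sketch, assuming the checkpoint loads directly from the Hugging Face Hub via Flair's `SequenceTagger.load` and that its tag type is `ner`; a clean part of the widget sentence above is reused as example input:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load("hmbert/flair-icdar-fr")

# Part of the widget sentence from this card, used as example input
sentence = Sentence(
    "Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES . 31 décembre ."
)

# Predict NER tags and print the detected spans (tag type assumed to be "ner")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```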
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|