| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
HumanF-MarkrAI/pub-llama-13B-v3 | HumanF-MarkrAI | 2023-10-24T17:28:19Z | 153 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:HumanF-MarkrAI/pub_COT_v2-2000",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-24T13:00:18Z | ---
language:
- ko
datasets: HumanF-MarkrAI/pub_COT_v2-2000
license: cc-by-nc-sa-4.0
---
**This model was developed by the joint LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**
**The license is `cc-by-nc-sa-4.0`.**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
pub-llama-13b-v3 is an auto-regressive language model based on the LLaMA2 transformer architecture.
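A minimal 🤗 Transformers loading sketch; the Korean prompt and generation settings are illustrative assumptions, since the card does not document a prompt template:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HumanF-MarkrAI/pub-llama-13B-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative Korean prompt; the repo does not document an official prompt format
inputs = tokenizer("다음 질문에 답하세요: 대한민국의 수도는 어디인가요?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```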
**Repo Link**
Github: [pub-llama📑](Not_yet)
**Training Dataset**
More detail about dataset: [HumanF-MarkrAI/pub_COT-2000](https://huggingface.co/datasets/HumanF-MarkrAI/pub_COT-2000). |
FaryalS/A2C-LunarLander-v2 | FaryalS | 2023-10-24T17:24:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-24T16:19:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -140.63 +/- 47.73
name: mean_reward
verified: false
---
# **A2C** Agent playing **LunarLander-v2**
This is a trained model of an **A2C** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it; the .zip filename is assumed
checkpoint = load_from_hub("FaryalS/A2C-LunarLander-v2", "a2c-LunarLander-v2.zip")
model = A2C.load(checkpoint)
```
|
anjali0610/my-dog | anjali0610 | 2023-10-24T17:20:01Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-24T17:14:45Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Dog Dreambooth model trained by anjali0610 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CCEW-122
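A minimal 🧨 Diffusers inference sketch, assuming the standard `StableDiffusionPipeline` listed in the tags; the concept token in the prompt is an assumption:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("anjali0610/my-dog", torch_dtype=torch.float16).to("cuda")
# "my-dog" as the concept token is an assumption; use the instance prompt the model was trained with
image = pipe("a photo of my-dog dog playing on a beach", num_inference_steps=30).images[0]
image.save("my_dog_sample.png")
```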
Sample pictures of this concept:
|
XpucT/Loras | XpucT | 2023-10-24T17:14:10Z | 0 | 49 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-10-24T17:13:05Z | ---
license: creativeml-openrail-m
---
|
mor40/BulBERT-fakenews-5epochs | mor40 | 2023-10-24T17:13:38Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:bgglue",
"base_model:mor40/BulBERT-chitanka-model",
"base_model:finetune:mor40/BulBERT-chitanka-model",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-24T13:45:21Z | ---
base_model: mor40/BulBERT-chitanka-model
tags:
- generated_from_trainer
datasets:
- bgglue
metrics:
- accuracy
model-index:
- name: BulBERT-fakenews-5epochs
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: bgglue
type: bgglue
config: fakenews
split: validation
args: fakenews
metrics:
- name: Accuracy
type: accuracy
value: 0.9049773755656109
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BulBERT-fakenews-5epochs
This model is a fine-tuned version of [mor40/BulBERT-chitanka-model](https://huggingface.co/mor40/BulBERT-chitanka-model) on the bgglue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3487
- Accuracy: 0.9050
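A minimal inference sketch, assuming the checkpoint works with the standard `transformers` text-classification pipeline (the Bulgarian example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mor40/BulBERT-fakenews-5epochs")
# Label names come from the model's own config
print(classifier("Това е примерно новинарско заглавие."))
```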
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
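A sketch of how the settings above map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the Adam and linear-scheduler settings listed above are the library defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BulBERT-fakenews-5epochs",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```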
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 84 | 0.4732 | 0.7511 |
| No log | 2.0 | 168 | 0.3922 | 0.8552 |
| No log | 3.0 | 252 | 0.3230 | 0.8778 |
| No log | 4.0 | 336 | 0.3518 | 0.8959 |
| No log | 5.0 | 420 | 0.3487 | 0.9050 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
betodepaola/llama-7B-openassist-1 | betodepaola | 2023-10-24T16:59:29Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-10-23T22:46:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch using this config follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
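A minimal loading sketch for this adapter; the card does not name the base checkpoint, so `huggyllama/llama-7b` below is an assumption inferred from the repo name:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "huggyllama/llama-7b"  # assumed base model; not stated in this card
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the PEFT adapter weights from this repository on top of the 8-bit base model
model = PeftModel.from_pretrained(base_model, "betodepaola/llama-7B-openassist-1")
```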
### Framework versions
- PEFT 0.4.0
|
lltala/test-ner-run | lltala | 2023-10-24T16:56:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-10-24T16:18:35Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-ner-run
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.946991165194199
- name: Recall
type: recall
value: 0.9560753954897341
- name: F1
type: f1
value: 0.9515115986935768
- name: Accuracy
type: accuracy
value: 0.9903625248237997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-ner-run
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0414
- Precision: 0.9470
- Recall: 0.9561
- F1: 0.9515
- Accuracy: 0.9904
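A minimal token-classification sketch using the standard `transformers` pipeline; `aggregation_strategy="simple"` groups word pieces into entity spans:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="lltala/test-ner-run", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```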
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
winglian/llama-neft-exp3 | winglian | 2023-10-24T16:54:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-24T16:53:22Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# out
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3761 | 0.01 | 1 | 1.4211 |
| 1.1922 | 0.2 | 14 | 1.2246 |
| 1.095 | 0.4 | 28 | 1.2137 |
| 1.1475 | 0.6 | 42 | 1.2152 |
| 1.1639 | 0.81 | 56 | 1.2224 |
| 1.0431 | 1.01 | 70 | 1.2131 |
| 0.9464 | 1.21 | 84 | 1.2100 |
| 1.1368 | 1.41 | 98 | 1.2060 |
| 1.0991 | 1.61 | 112 | 1.2022 |
| 0.9896 | 1.81 | 126 | 1.2014 |
| 0.9592 | 2.01 | 140 | 1.1991 |
| 0.9789 | 2.22 | 154 | 1.2054 |
| 1.0028 | 2.42 | 168 | 1.2048 |
| 0.9374 | 2.62 | 182 | 1.2051 |
| 0.9318 | 2.82 | 196 | 1.2049 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.14.0
|
zeroshot/gte-large-sparse | zeroshot | 2023-10-24T16:53:49Z | 357 | 1 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"sparse sparsity quantized onnx embeddings int8",
"mteb",
"en",
"license:mit",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-10-15T18:14:48Z | ---
tags:
- sparse sparsity quantized onnx embeddings int8
- mteb
model-index:
- name: gte-large-sparse
results:
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.64253410928214
- type: cos_sim_spearman
value: 85.83388349410652
- type: euclidean_pearson
value: 86.86126159318735
- type: euclidean_spearman
value: 85.61580623591163
- type: manhattan_pearson
value: 86.6901132883383
- type: manhattan_spearman
value: 85.60255292187769
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.23314640591607
- type: cos_sim_spearman
value: 79.00078545104338
- type: euclidean_pearson
value: 83.48009254500714
- type: euclidean_spearman
value: 78.95413001389939
- type: manhattan_pearson
value: 83.46945566025941
- type: manhattan_spearman
value: 78.9241707208135
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.77526666043804
- type: cos_sim_spearman
value: 73.4849063285867
- type: euclidean_pearson
value: 78.04477932740524
- type: euclidean_spearman
value: 73.01394205771743
- type: manhattan_pearson
value: 78.08836684503294
- type: manhattan_spearman
value: 73.05074711098149
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.57839215661352
- type: cos_sim_spearman
value: 86.13854767345153
- type: euclidean_pearson
value: 85.12712609946449
- type: euclidean_spearman
value: 85.52497994789026
- type: manhattan_pearson
value: 85.06833141611173
- type: manhattan_spearman
value: 85.45003068636466
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.30485126978374
- type: cos_sim_spearman
value: 80.36497172462357
- type: euclidean_pearson
value: 82.91977909424605
- type: euclidean_spearman
value: 80.16995106297438
- type: manhattan_pearson
value: 82.88200991402184
- type: manhattan_spearman
value: 80.14259757215227
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.99883111314007
- type: cos_sim_spearman
value: 88.531352572377
- type: euclidean_pearson
value: 87.96834578059067
- type: euclidean_spearman
value: 88.44800718542935
- type: manhattan_pearson
value: 87.94889391725033
- type: manhattan_spearman
value: 88.45467695837115
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.4636984892402
- type: cos_sim_spearman
value: 84.0808920789148
- type: euclidean_pearson
value: 83.70613486028309
- type: euclidean_spearman
value: 84.35941626905009
- type: manhattan_pearson
value: 83.70259457073782
- type: manhattan_spearman
value: 84.35496521501604
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.76172944971023
- type: cos_sim_spearman
value: 89.4190945039165
- type: euclidean_pearson
value: 89.47263005347381
- type: euclidean_spearman
value: 89.49228360724095
- type: manhattan_pearson
value: 89.49959868816694
- type: manhattan_spearman
value: 89.5314536157954
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.57158223787549
- type: cos_sim_spearman
value: 66.75053533168037
- type: euclidean_pearson
value: 66.45526604831747
- type: euclidean_spearman
value: 66.14567667353113
- type: manhattan_pearson
value: 66.47352000151176
- type: manhattan_spearman
value: 66.21099856852885
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.055653571006
- type: cos_sim_spearman
value: 85.45387832634702
- type: euclidean_pearson
value: 86.31667154906651
- type: euclidean_spearman
value: 85.66079590537946
- type: manhattan_pearson
value: 86.2806853257308
- type: manhattan_spearman
value: 85.63700636713952
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.78811881188119
- type: cos_sim_ap
value: 94.67027715905307
- type: cos_sim_f1
value: 89.33074684772066
- type: cos_sim_precision
value: 86.7231638418079
- type: cos_sim_recall
value: 92.10000000000001
- type: dot_accuracy
value: 99.47128712871287
- type: dot_ap
value: 78.41478815918727
- type: dot_f1
value: 73.30049261083744
- type: dot_precision
value: 72.23300970873787
- type: dot_recall
value: 74.4
- type: euclidean_accuracy
value: 99.78415841584159
- type: euclidean_ap
value: 94.60075930867181
- type: euclidean_f1
value: 89.12175648702593
- type: euclidean_precision
value: 88.94422310756973
- type: euclidean_recall
value: 89.3
- type: manhattan_accuracy
value: 99.78415841584159
- type: manhattan_ap
value: 94.62867439278095
- type: manhattan_f1
value: 89.2337536372454
- type: manhattan_precision
value: 86.62900188323917
- type: manhattan_recall
value: 92.0
- type: max_accuracy
value: 99.78811881188119
- type: max_ap
value: 94.67027715905307
- type: max_f1
value: 89.33074684772066
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.09864695714371
- type: cos_sim_ap
value: 70.33704198164713
- type: cos_sim_f1
value: 66.22893954410307
- type: cos_sim_precision
value: 62.42410088743577
- type: cos_sim_recall
value: 70.52770448548813
- type: dot_accuracy
value: 79.11426357513263
- type: dot_ap
value: 49.15484584572233
- type: dot_f1
value: 51.12580243364951
- type: dot_precision
value: 40.13840830449827
- type: dot_recall
value: 70.3957783641161
- type: euclidean_accuracy
value: 85.15825236931514
- type: euclidean_ap
value: 70.51017350854076
- type: euclidean_f1
value: 66.45416294785159
- type: euclidean_precision
value: 64.29805082654823
- type: euclidean_recall
value: 68.7598944591029
- type: manhattan_accuracy
value: 85.1403707456637
- type: manhattan_ap
value: 70.47587863399994
- type: manhattan_f1
value: 66.4576802507837
- type: manhattan_precision
value: 63.32138590203107
- type: manhattan_recall
value: 69.92084432717678
- type: max_accuracy
value: 85.15825236931514
- type: max_ap
value: 70.51017350854076
- type: max_f1
value: 66.4576802507837
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.8539604921023
- type: cos_sim_ap
value: 85.71869912577101
- type: cos_sim_f1
value: 78.00535626720983
- type: cos_sim_precision
value: 76.46232344893885
- type: cos_sim_recall
value: 79.61194949183862
- type: dot_accuracy
value: 84.57717235223348
- type: dot_ap
value: 74.89496650237145
- type: dot_f1
value: 69.05327823892932
- type: dot_precision
value: 65.75666829166377
- type: dot_recall
value: 72.69787496150293
- type: euclidean_accuracy
value: 88.89471028835332
- type: euclidean_ap
value: 85.75169460500409
- type: euclidean_f1
value: 78.17055393586006
- type: euclidean_precision
value: 74.21118184334348
- type: euclidean_recall
value: 82.57622420696026
- type: manhattan_accuracy
value: 88.92187681918733
- type: manhattan_ap
value: 85.7496679471825
- type: manhattan_f1
value: 78.11088295687884
- type: manhattan_precision
value: 75.82083061535117
- type: manhattan_recall
value: 80.5435786880197
- type: max_accuracy
value: 88.92187681918733
- type: max_ap
value: 85.75169460500409
- type: max_f1
value: 78.17055393586006
license: mit
language:
- en
---
# gte-large-sparse
This is the sparse ONNX variant of the [gte-large](https://huggingface.co/thenlper/gte-large) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and with Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot INT8 quantization and 50% unstructured pruning.
Current list of sparse and quantized gte ONNX models:
| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer
model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)
# Sentences we would like to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding.shape)
print("")
```
For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
 |
ML4SE2023-G1-WizardCoder/ML4SE23_G1_WizardCoder-SCoT-350M-V1.0 | ML4SE2023-G1-WizardCoder | 2023-10-24T16:50:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"code",
"en",
"dataset:ML4SE2023-G1-WizardCoder/EvolInstruct-SCoT-1k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-24T16:46:40Z | ---
datasets:
- ML4SE2023-G1-WizardCoder/EvolInstruct-SCoT-1k
language:
- en
tags:
- code
---
# WizardCoder 350M Version
Based on https://huggingface.co/Salesforce/codegen-350M-nl |
winglian/llama-neft-exp2 | winglian | 2023-10-24T16:50:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-24T16:49:24Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# out
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3782 | 0.01 | 1 | 1.4211 |
| 1.1948 | 0.2 | 14 | 1.2273 |
| 1.0953 | 0.4 | 28 | 1.2137 |
| 1.1464 | 0.6 | 42 | 1.2099 |
| 1.1481 | 0.81 | 56 | 1.2080 |
| 1.0277 | 1.01 | 70 | 1.2022 |
| 0.9344 | 1.21 | 84 | 1.2049 |
| 1.1294 | 1.41 | 98 | 1.2033 |
| 1.0933 | 1.61 | 112 | 1.2002 |
| 0.987 | 1.81 | 126 | 1.1996 |
| 0.9491 | 2.01 | 140 | 1.1972 |
| 0.9673 | 2.22 | 154 | 1.2058 |
| 0.99 | 2.42 | 168 | 1.2048 |
| 0.9241 | 2.62 | 182 | 1.2049 |
| 0.9204 | 2.82 | 196 | 1.2048 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.14.0
|
kresenty77/profile1 | kresenty77 | 2023-10-24T16:50:07Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-10-24T16:01:17Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks tejas
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
xzuyn/PhotoMerge | xzuyn | 2023-10-24T16:33:04Z | 12 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-24T15:11:42Z | ---
library_name: diffusers
---
Safetensors & previous versions can be found on [CivitAI](https://civitai.com/models/7116/photomerge). |
Gaganmanku96/whisper-small-hi | Gaganmanku96 | 2023-10-24T16:28:09Z | 40 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-10-24T14:53:56Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice dataset.
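A minimal automatic-speech-recognition sketch, assuming the standard `transformers` pipeline; the audio path is a placeholder (any format readable by ffmpeg should work):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Gaganmanku96/whisper-small-hi")
print(asr("hindi_sample.wav")["text"])  # placeholder audio file
```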
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.1.0
- Tokenizers 0.14.1
|
TheBloke/Vigostral-7B-Chat-GGUF | TheBloke | 2023-10-24T16:25:05Z | 156 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"LLM",
"finetuned",
"text-generation",
"fr",
"base_model:bofenghuang/vigostral-7b-chat",
"base_model:quantized:bofenghuang/vigostral-7b-chat",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-10-24T16:19:47Z | ---
base_model: bofenghuang/vigostral-7b-chat
inference: false
language: fr
license: apache-2.0
model_creator: bofeng huang
model_name: Vigostral 7B Chat
model_type: mistral
pipeline_tag: text-generation
prompt_template: "<s>[INST] <<SYS>>\nVous \xEAtes Vigogne, un assistant IA cr\xE9\xE9\
\ par Zaion Lab. Vous suivez extr\xEAmement bien les instructions. Aidez autant\
\ que vous le pouvez.\n<</SYS>>\n\n{prompt} [/INST] \n"
quantized_by: TheBloke
tags:
- LLM
- finetuned
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vigostral 7B Chat - GGUF
- Model creator: [bofeng huang](https://huggingface.co/bofenghuang)
- Original model: [Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [bofeng huang's Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF)
* [bofeng huang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigostral-7b-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vigogne-Llama-2-Chat
```
<s>[INST] <<SYS>>
Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
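As a worked check on those figures: for GGML_TYPE_Q4_K, a 256-weight super-block stores 256 × 4 = 1024 bits of quantized weights, 8 × (6 + 6) = 96 bits of block scales and mins, and two 16-bit super-block scale factors (32 bits), for 1152 bits in total, i.e. 1152 / 256 = 4.5 bpw.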
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [vigostral-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [vigostral-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [vigostral-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [vigostral-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [vigostral-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vigostral-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [vigostral-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [vigostral-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vigostral-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [vigostral-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [vigostral-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [vigostral-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Vigostral-7B-Chat-GGUF and below it, a specific filename to download, such as: vigostral-7b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF vigostral-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF vigostral-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m vigostral-7b-chat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] <<SYS>>\nVous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.\n<</SYS>>\n\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Vigostral-7B-Chat-GGUF", model_file="vigostral-7b-chat.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a minimal sketch follows the links:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
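A minimal llama-cpp-python + LangChain sketch, assuming the Q4_K_M file was downloaded as shown earlier and that the `LlamaCpp` import path matches the LangChain releases current when this card was written:
```python
from langchain.llms import LlamaCpp

# Path, layer count and sampling settings are assumptions; adjust for your hardware
llm = LlamaCpp(
    model_path="vigostral-7b-chat.Q4_K_M.gguf",
    n_gpu_layers=32,
    n_ctx=2048,
    temperature=0.7,
)
prompt = "<s>[INST] <<SYS>>\nVous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.\n<</SYS>>\n\nPrésente-toi brièvement. [/INST]"
print(llm(prompt))
```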
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: bofeng huang's Vigostral 7B Chat
# Vigostral-7B-Chat: A French chat LLM
***Preview*** of Vigostral-7B-Chat, a new addition to the Vigogne LLMs family, fine-tuned on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
For more information, please visit the [Github repository](https://github.com/bofenghuang/vigogne).
**License**: A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, kindly use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use).
## Prompt Template
We used a prompt template adapted from the chat format of Llama-2.
You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigostral-7b-chat")
conversation = [
{"role": "user", "content": "Bonjour ! Comment ça va aujourd'hui ?"},
{"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?"},
{"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"},
{"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mètres de hauteur."},
{"role": "user", "content": "Comment monter en haut ?"},
]
print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True))
```
You will get
```
<s>[INST] <<SYS>>
Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.
<</SYS>>
Bonjour ! Comment ça va aujourd'hui ? [/INST] Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ? </s>[INST] Quelle est la hauteur de la Tour Eiffel ? [/INST] La Tour Eiffel mesure environ 330 mètres de hauteur. </s>[INST] Comment monter en haut ? [/INST]
```
## Usage
### Inference using the unquantized model with 🤗 Transformers
```python
from typing import Dict, List, Optional
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer
model_name_or_path = "bofenghuang/vigostral-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
def chat(
query: str,
history: Optional[List[Dict]] = None,
temperature: float = 0.7,
top_p: float = 1.0,
top_k: float = 0,
repetition_penalty: float = 1.1,
max_new_tokens: int = 1024,
**kwargs,
):
if history is None:
history = []
history.append({"role": "user", "content": query})
input_ids = tokenizer.apply_chat_template(history, return_tensors="pt").to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=temperature,
do_sample=temperature > 0.0,
top_p=top_p,
top_k=top_k,
repetition_penalty=repetition_penalty,
max_new_tokens=max_new_tokens,
pad_token_id=tokenizer.eos_token_id,
**kwargs,
),
streamer=streamer,
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
history.append({"role": "assistant", "content": generated_text})
return generated_text, history
# 1st round
response, history = chat("Un escargot parcourt 100 mètres en 5 heures. Quelle est sa vitesse ?", history=None)
# Pour calculer la vitesse d'un escargot qui parcourt 100 mètres en 5 heures, nous devons diviser la distance parcourue par le temps écoulé. Voici comment procédons :
#
# Vitesse = Distance / Temps
#
# Dans ce cas, la distance est de 100 mètres et le temps est de 5 heures. Donc, nous avons :
#
# Vitesse = 100 mètres / 5 heures
# Vitesse = 20 mètres par heure
#
# L'escargot parcourt donc une vitesse de 20 mètres par heure.
# 2nd round
response, history = chat("Quand il peut dépasser le lapin ?", history=history)
# Malheureusement, je ne peux pas répondre à cette question car elle contient des informations inexactes ou incorrectes. La vitesse de l'escargot est de 20 mètres par heure, tandis que la vitesse du lapin n'est pas mentionnée dans votre question initiale. Pour comparer les vitesses des deux animaux, nous aurions besoin de connaître la vitesse du lapin.
# 3rd round
response, history = chat("Écris une histoire imaginative qui met en scène une compétition de course entre un escargot et un lapin.", history=history)
# Il était une fois deux athlètes extraordinaires : un escargot nommé Escargo et un lapin appelé Lapin Rapide. Un jour, ils ont décidé de participer à une grande course pour déterminer qui était le plus rapide des deux. Tout le monde était curieux de savoir qui serait le gagnant, car ils étaient tous émerveillés par la vitesse de ces deux champions.
#
# La course commença dans une forêt dense, avec Escargo départageant un peu lentement, car il faut être prudent lorsqu'on est escargot. Mais au fur et à mesure qu'il avançait, il commença à ralentir encore plus. Les spectateurs commencèrent à pencher pour Lapin Rapide, qui fonçait rapidement vers la ligne d'arrivée.
#
# Mais soudain, Escargo se transforma en une force incroyable. Au lieu de marcher lentement comme un escargot, il courait à toute allure, faisant preuve d'une vitesse surprenante. Les spectateurs essayaient de se remémorer leurs notes de mathématiques pour calculer sa vélocité actuelle.
#
# Pendant ce temps, Lapin Rapide ralentissait légèrement, trouvant l'air frais trop confortable pour continuer à courir aussi vite. Il décida alors de prendre quelques pauses pour profiter de son environnement.
#
# Escargo continuait à courir à toute vitesse, en dépit de son handicap d'être un escargot. Les spectateurs étaient émerveillés par sa persévérance et sa volonté de gagner. Finalement, Escargo franchit la ligne d'arrivée en premier, et tous criaurent en joie.
#
# Les habitants de la forêt décidèrent de lui décerner le titre d'"athlète le plus courageux" pour sa performance incroyable. Quant à Lapin Rapide, il fut content de sa deuxième place, se disant simplement que les pauses étaient bien plus agréables que la compétition. Et tous vécurent heureux et satisfaits de cette course mémorable.
```
You can also use the Google Colab Notebook provided below.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Inference using the unquantized model with vLLM
Set up an OpenAI-compatible server with the following command:
```bash
# Install vLLM
# This may take 5-10 minutes.
# pip install vllm
# Start server for Vigostral-Chat models
python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigostral-7b-chat
# List models
# curl http://localhost:8000/v1/models
```
Query the model using the openai python package.
```python
import openai
# Modify OpenAI's API key and API base to use vLLM's API server.
openai.api_key = "EMPTY"
openai.api_base = "http://localhost:8000/v1"
# First model
models = openai.Model.list()
model = models["data"][0]["id"]
# Chat completion API
chat_completion = openai.ChatCompletion.create(
model=model,
messages=[
{"role": "user", "content": "Parle-moi de toi-même."},
],
max_tokens=1024,
temperature=0.7,
)
print("Chat completion results:", chat_completion)
```
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
<!-- original-model-card end -->
|
RIOLITE/products_matching_aumet_fine_tune_2023-10-24 | RIOLITE | 2023-10-24T16:24:12Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-10-24T16:23:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# RIOLITE/products_matching_aumet_fine_tune_2023-10-24
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('RIOLITE/products_matching_aumet_fine_tune_2023-10-24')  # repo id from this card
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ahmadmooktaree/food_classifier | ahmadmooktaree | 2023-10-24T16:23:53Z | 5 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-10-24T15:45:43Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: ahmadmooktaree/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ahmadmooktaree/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8192
- Validation Loss: 1.6728
- Train Accuracy: 0.825
- Epoch: 0
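A minimal image-classification sketch, assuming the `transformers` pipeline picks up the TensorFlow weights automatically; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ahmadmooktaree/food_classifier")
print(classifier("food_photo.jpg")[:3])  # top predictions for a local image (placeholder path)
```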
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8192 | 1.6728 | 0.825 | 0 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mikestef9/dogbooth | mikestef9 | 2023-10-24T16:17:20Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-06T01:36:51Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - mikestef9/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained with the instance prompt "a photo of [v]dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
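A minimal inference sketch, assuming a CUDA GPU is available (the prompt wording beyond the instance token is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("mikestef9/dogbooth", torch_dtype=torch.float16).to("cuda")
# "[v]dog" is the instance token this checkpoint was trained with.
image = pipe("a photo of [v]dog on a beach", num_inference_steps=30).images[0]
image.save("dogbooth_sample.png")
```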
|
masterkram/roberta-base_financial_phrasebank | masterkram | 2023-10-24T16:01:15Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-10-24T09:16:11Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
model-index:
- name: roberta-base_financial_phrasebank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_financial_phrasebank
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2154
## Model description
More information needed
## Intended uses & limitations
More information needed
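A minimal usage sketch (note: if `id2label` was not customised during fine-tuning, predictions may show up as generic LABEL_0/1/2 rather than named sentiment classes):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="masterkram/roberta-base_financial_phrasebank")
print(clf("Operating profit rose clearly compared with the same period a year earlier."))
```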
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1676 | 1.0 | 227 | 0.3128 |
| 0.1058 | 2.0 | 454 | 0.2652 |
| 0.0911 | 3.0 | 681 | 0.2145 |
| 0.0009 | 4.0 | 908 | 0.2190 |
| 0.0007 | 5.0 | 1135 | 0.2154 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mateiaass/albert-base-qa-2-k-fold-4 | mateiaass | 2023-10-24T15:53:01Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-23T09:46:12Z | ---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
model-index:
- name: albert-base-qa-2-k-fold-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-qa-2-k-fold-4
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9402
## Model description
More information needed
## Intended uses & limitations
More information needed
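A minimal usage sketch for extractive question answering (question and context below are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mateiaass/albert-base-qa-2-k-fold-4")
print(qa(question="Who wrote the report?",
         context="The report was written by the audit team in March 2021."))
```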
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9116 | 1.0 | 4602 | 0.8696 |
| 0.6772 | 2.0 | 9204 | 0.8410 |
| 0.4829 | 3.0 | 13806 | 0.9402 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
gshields/bidirection_translate_model_error_v0.4 | gshields | 2023-10-24T15:51:30Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-hi-en",
"base_model:finetune:Helsinki-NLP/opus-mt-hi-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-19T10:13:15Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-hi-en
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bidirection_translate_model_error_v0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bidirection_translate_model_error_v0.4
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1162
- Bleu: 18.2301
- Gen Len: 12.1
## Model description
More information needed
## Intended uses & limitations
More information needed
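A minimal usage sketch; the base checkpoint is Hindi-to-English, and how the "bidirectional" behaviour is prompted is not documented here, so treat the example below as an assumption:
```python
from transformers import pipeline

translator = pipeline("translation", model="gshields/bidirection_translate_model_error_v0.4")
print(translator("नमस्ते, आप कैसे हैं?"))  # expected to return an English translation
```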
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 8 | 3.1780 | 17.0647 | 11.9667 |
| No log | 2.0 | 16 | 3.1162 | 18.2301 | 12.1 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
TanmaySah/llama2limafil | TanmaySah | 2023-10-24T15:35:01Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-10-23T11:29:29Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
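A loading sketch matching the 8-bit settings above; the base checkpoint is not stated in this card, so `BASE_MODEL` below is a hypothetical placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # hypothetical: replace with the actual base checkpoint
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "TanmaySah/llama2limafil")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
```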
### Framework versions
- PEFT 0.5.0
|
neneongkorea/qlora-koalpaca-polyglot-12.8b-500step | neneongkorea | 2023-10-24T15:30:23Z | 5 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:EleutherAI/polyglot-ko-1.3b",
"base_model:adapter:EleutherAI/polyglot-ko-1.3b",
"region:us"
]
| null | 2023-10-24T15:30:21Z | ---
library_name: peft
base_model: EleutherAI/polyglot-ko-1.3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
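No usage code is provided, so here is a minimal loading sketch based on the 4-bit settings listed under "Training procedure" below. Note that the repo name mentions 12.8b while the card metadata lists EleutherAI/polyglot-ko-1.3b as the base model; the sketch follows the metadata.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the nf4 double-quant settings documented in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-1.3b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "neneongkorea/qlora-koalpaca-polyglot-12.8b-500step")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-1.3b")
```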
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
sabre-code/pegasus-large-cnn-dailymail | sabre-code | 2023-10-24T15:26:01Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-24T14:00:32Z | ---
base_model: google/pegasus-large
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: pegasus-large-cnn-dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-large-cnn-dailymail
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
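A minimal summarization sketch (the article text below is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sabre-code/pegasus-large-cnn-dailymail")
article = "Replace this with the full text of a news article to be summarized."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```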
## Training and evaluation data
For training of this PEGASUS-large model, 10k samples were taken from the CNN/DailyMail dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Gayathri142214002/Question_Generation_ComQ_4 | Gayathri142214002 | 2023-10-24T15:24:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-16T10:12:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Question_Generation_ComQ_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question_Generation_ComQ_4
This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_3](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4193
## Model description
More information needed
## Intended uses & limitations
More information needed
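A minimal usage sketch; the exact input format expected by this checkpoint (e.g. answer or context markers) is not documented in the card, so the plain-sentence input below is an assumption:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="Gayathri142214002/Question_Generation_ComQ_4")
print(qg("The Eiffel Tower was completed in 1889 and is located in Paris.", max_length=64))
```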
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3616 | 0.16 | 50 | 0.3498 |
| 0.3949 | 0.32 | 100 | 0.3389 |
| 0.3926 | 0.48 | 150 | 0.3321 |
| 0.3946 | 0.64 | 200 | 0.3386 |
| 0.414 | 0.8 | 250 | 0.3315 |
| 0.3789 | 0.96 | 300 | 0.3299 |
| 0.2962 | 1.12 | 350 | 0.3469 |
| 0.3104 | 1.28 | 400 | 0.3606 |
| 0.2821 | 1.44 | 450 | 0.3593 |
| 0.3011 | 1.59 | 500 | 0.3552 |
| 0.3273 | 1.75 | 550 | 0.3500 |
| 0.3247 | 1.91 | 600 | 0.3456 |
| 0.2903 | 2.07 | 650 | 0.3635 |
| 0.2373 | 2.23 | 700 | 0.3813 |
| 0.2472 | 2.39 | 750 | 0.3901 |
| 0.25 | 2.55 | 800 | 0.3773 |
| 0.2808 | 2.71 | 850 | 0.3766 |
| 0.2564 | 2.87 | 900 | 0.3698 |
| 0.268 | 3.03 | 950 | 0.3717 |
| 0.2194 | 3.19 | 1000 | 0.3876 |
| 0.2091 | 3.35 | 1050 | 0.3980 |
| 0.235 | 3.51 | 1100 | 0.3975 |
| 0.2292 | 3.67 | 1150 | 0.4061 |
| 0.2486 | 3.83 | 1200 | 0.3973 |
| 0.2601 | 3.99 | 1250 | 0.3901 |
| 0.1845 | 4.15 | 1300 | 0.3959 |
| 0.2079 | 4.31 | 1350 | 0.3996 |
| 0.1817 | 4.47 | 1400 | 0.4128 |
| 0.2083 | 4.63 | 1450 | 0.4175 |
| 0.2511 | 4.78 | 1500 | 0.4100 |
| 0.2151 | 4.94 | 1550 | 0.4054 |
| 0.2039 | 5.1 | 1600 | 0.4078 |
| 0.1864 | 5.26 | 1650 | 0.4082 |
| 0.1894 | 5.42 | 1700 | 0.4147 |
| 0.1988 | 5.58 | 1750 | 0.4154 |
| 0.1921 | 5.74 | 1800 | 0.4151 |
| 0.1948 | 5.9 | 1850 | 0.4124 |
| 0.1813 | 6.06 | 1900 | 0.4108 |
| 0.1695 | 6.22 | 1950 | 0.4153 |
| 0.1691 | 6.38 | 2000 | 0.4179 |
| 0.1628 | 6.54 | 2050 | 0.4188 |
| 0.1724 | 6.7 | 2100 | 0.4188 |
| 0.1708 | 6.86 | 2150 | 0.4193 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yashsharma0906/Llama-2-7b-SHP-SFT | yashsharma0906 | 2023-10-24T15:23:16Z | 1,333 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-24T02:36:05Z | ---
license: mit
---
Llama-2-7b fine-tuned on the SHP dataset using the TRL library. This project aims to study the impact of different data splits on model performance and safety. By experimenting with diverse datasets and employing advanced fine-tuning techniques, we aim to advance the understanding of how data impacts the training of LLMs in terms of safety and helpfulness. We hope that our findings will contribute to safer and more useful AI models, aligning them more closely with human values.
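A minimal generation sketch; the prompt template used during SFT is not documented here, so the plain prompt below is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yashsharma0906/Llama-2-7b-SHP-SFT")
model = AutoModelForCausalLM.from_pretrained("yashsharma0906/Llama-2-7b-SHP-SFT", device_map="auto")

inputs = tokenizer("How do I politely decline a meeting invitation?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```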
|
lltala/roberta-base-ner | lltala | 2023-10-24T15:22:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-10-24T15:21:51Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0693
- Overall Precision: 0.7067
- Overall Recall: 0.6726
- Overall F1: 0.6892
- Org Precision: 0.7504
- Org Recall: 0.6531
- Org F1: 0.6984
- Per Precision: 0.7525
- Per Recall: 0.8736
- Per F1: 0.8085
- Loc Precision: 0.4769
- Loc Recall: 0.6263
- Loc F1: 0.5415
## Model description
More information needed
## Intended uses & limitations
More information needed
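A minimal usage sketch; judging by the metrics above, the label set covers ORG, PER and LOC entities:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="lltala/roberta-base-ner", aggregation_strategy="simple")
print(ner("Angela Merkel visited the Siemens plant in Munich."))
```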
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Org Precision | Org Recall | Org F1 | Per Precision | Per Recall | Per F1 | Loc Precision | Loc Recall | Loc F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|
| No log | 1.0 | 53 | 0.0935 | 0.5785 | 0.4382 | 0.4987 | 0.5785 | 0.5611 | 0.5697 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 106 | 0.0693 | 0.7067 | 0.6726 | 0.6892 | 0.7504 | 0.6531 | 0.6984 | 0.7525 | 0.8736 | 0.8085 | 0.4769 | 0.6263 | 0.5415 |
| No log | 3.0 | 159 | 0.0994 | 0.7832 | 0.6466 | 0.7084 | 0.8154 | 0.5928 | 0.6865 | 0.9444 | 0.9770 | 0.9605 | 0.5504 | 0.7172 | 0.6228 |
| No log | 4.0 | 212 | 0.0761 | 0.7872 | 0.6973 | 0.7395 | 0.7855 | 0.6516 | 0.7123 | 0.9444 | 0.9770 | 0.9605 | 0.6696 | 0.7576 | 0.7109 |
| No log | 5.0 | 265 | 0.0785 | 0.7932 | 0.6914 | 0.7388 | 0.7922 | 0.6440 | 0.7105 | 0.9444 | 0.9770 | 0.9605 | 0.6757 | 0.7576 | 0.7143 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hubkrieb/RSAFI-1.5 | hubkrieb | 2023-10-24T15:11:12Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-23T15:52:51Z | ---
license: creativeml-openrail-m
---
# Diffusion Model based Data Augmentation for Remote Sensing Imagery
Master's thesis by Hubert Kriebitzsch at the Computer Vision and Remote Sensing Department, Faculty IV, TU Berlin ([GitHub repository](https://github.com/hubkrieb/remote-sensing-diffusion))
## Abstract
Data augmentation is a crucial challenge in deep learning and especially in remote sensing where data is often more difficult and costly to acquire especially when collecting data of rare events such as natural disasters. Many solutions have been proposed to this problem and data augmentation using synthetic data, mainly generated using Generative Adversarial Networks, is one of the most recent and efficient approaches to counter the effects of class imbalance. In this thesis, we further study data augmentation with synthetic data using state-of-the-art generative models. We use diffusion models to generate new remote sensing images for data augmentation purposes. To generate high-fidelity satellite images of active fire, we finetune the foundation model Stable Diffusion using Dreambooth and existing wildfire images. We apply it to the task of active fire detection by inpainting synthetic wildfires into existing satellite images. This allows us to augment semantic segmentation datasets and not only image classification datasets. We conduct a series of experiments to measure the efficiency of the methods proposed and compare different pretrained and finetuned diffusion models as well as different inpainting masks. We evaluate this approach on a small manually annotated active fire detection dataset and achieve an improvement of the dice coefficient from 58.5% up to 72.7%. This work provides new insights on remote sensing data generation with diffusion models, as well as the efficiency of data augmentation using synthetic data generated with them. It presents a novel way to generate semantic segmentation data in remote sensing.

## Remote Sensing Active Fire Inpainting (RSAFI) 1.5
[Stable Diffusion Inpainting v1.5](https://huggingface.co/runwayml/stable-diffusion-inpainting) model finetuned using [Dreambooth](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint). The U-Net and the text encoder have been finetuned using a dataset of active fire satellite images. |
xzuyn/Anything-v3.0-Clip-Skip-2-Test | xzuyn | 2023-10-24T15:07:13Z | 5 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-24T13:43:26Z | ---
library_name: diffusers
---
Changing the text encoder's `num_hidden_layers` from 12 to 11 is what makes it Clip Skip 2, as far as I know.
https://huggingface.co/xzuyn/a3-cs2/blob/main/text_encoder/config.json#L19 |
Velleity/ppo-Huggy | Velleity | 2023-10-24T15:04:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-10-24T15:04:38Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Velleity/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
OsherElhadad/ppo-LunarLander-v2 | OsherElhadad | 2023-10-24T14:51:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-24T14:51:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.48 +/- 19.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="OsherElhadad/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
TheBloke/Llama-2-7B-32K-Instruct-GGUF | TheBloke | 2023-10-24T14:35:33Z | 909 | 55 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:togethercomputer/llama-instruct",
"arxiv:2307.03172",
"base_model:togethercomputer/Llama-2-7B-32K-Instruct",
"base_model:quantized:togethercomputer/Llama-2-7B-32K-Instruct",
"license:llama2",
"region:us"
]
| null | 2023-09-05T23:33:29Z | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- togethercomputer/llama-instruct
model_name: Llama2 7B 32K Instruct
base_model: togethercomputer/Llama-2-7B-32K-Instruct
inference: false
model_creator: Together
model_type: llama
prompt_template: '[INST]
{prompt}
[\INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 7B 32K Instruct - GGUF
- Model creator: [Together](https://huggingface.co/togethercomputer)
- Original model: [Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Together's Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF)
* [Together's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama2-Instruct-Only
```
[INST]
{prompt}
[\INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-32k-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-32k-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-7b-32k-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-7b-32k-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-7b-32k-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-32k-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-7b-32k-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-7b-32k-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-32k-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-7b-32k-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-7b-32k-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-7b-32k-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-7B-32K-Instruct-GGUF and below it, a specific filename to download, such as: llama-2-7b-32k-instruct.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-7b-32k-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST]\n{prompt}\n[\INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-32K-Instruct-GGUF", model_file="llama-2-7b-32k-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Together's Llama2 7B 32K Instruct
# Llama-2-7B-32K-Instruct
## Model Description
Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data.
We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!
## Data Collection Details
Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) outputs**.
We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)).
The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
## Model Usage
We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.
To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
You can load the model directly from the Hugging Face model hub using
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
trust_remote_code=True, torch_dtype=torch.float16)
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using a prompt formatted as:
```
[INST]\n<your instruction here>\n[\INST]\n\n
```
For example, if we query the model with
```
[INST]\nWrite a poem about cats\n[\INST]\n\n
```
the model will return
```
[INST]
Write a poem about cats
[/INST]
Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:
Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.
With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.
But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.
Solet us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.
This poem captures the essence of cats, highlighting their beauty, independence,and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```
## Model Evaluation
We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/);
2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K).
We compare with models including
[GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5),
[https://huggingface.co/meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),
[Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k)
and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k).
We summarize the results below:
* Alpaca Eval
| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307|
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |
* Rouge Score over BookSum
| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |
* Accuracy over MQA
| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |
## Limitations and Bias
As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
<!-- original-model-card end -->
|
kaarelkaarelson/finetuned-bert-base-multilingual-cased | kaarelkaarelson | 2023-10-24T14:31:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-10-24T13:43:06Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: finetuned_bert-base-multilingual-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bert-base-multilingual-cased
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Michal-Miszcz/q-FrozenLake-v1-4x4-noSlippery | Michal-Miszcz | 2023-10-24T14:22:03Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-24T14:22:01Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Michal-Miszcz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lltala/bert-base-cased-ner | lltala | 2023-10-24T14:16:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-10-24T14:16:33Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0894
- Overall Precision: 0.5187
- Overall Recall: 0.5814
- Overall F1: 0.5483
- Org Precision: 0.5127
- Org Recall: 0.5277
- Org F1: 0.5201
- Per Precision: 0.7294
- Per Recall: 0.8052
- Per F1: 0.7654
- Loc Precision: 0.4329
- Loc Recall: 0.7474
- Loc F1: 0.5483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Org Precision | Org Recall | Org F1 | Per Precision | Per Recall | Per F1 | Loc Precision | Loc Recall | Loc F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|
| No log | 1.0 | 53 | 0.1227 | 0.3066 | 0.3206 | 0.3134 | 0.3084 | 0.4104 | 0.3522 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 106 | 0.1055 | 0.3967 | 0.4224 | 0.4091 | 0.3829 | 0.3860 | 0.3844 | 0.6964 | 0.5065 | 0.5865 | 0.3457 | 0.5895 | 0.4358 |
| No log | 3.0 | 159 | 0.0897 | 0.4867 | 0.5598 | 0.5207 | 0.4883 | 0.5098 | 0.4988 | 0.7011 | 0.7922 | 0.7439 | 0.375 | 0.6947 | 0.4871 |
| No log | 4.0 | 212 | 0.0901 | 0.5179 | 0.5712 | 0.5433 | 0.5227 | 0.5261 | 0.5244 | 0.6988 | 0.7532 | 0.7250 | 0.4096 | 0.7158 | 0.5211 |
| No log | 5.0 | 265 | 0.0894 | 0.5187 | 0.5814 | 0.5483 | 0.5127 | 0.5277 | 0.5201 | 0.7294 | 0.8052 | 0.7654 | 0.4329 | 0.7474 | 0.5483 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
trustyai/gminus | trustyai | 2023-10-24T14:15:30Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:jigsaw_toxicity_pred",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-24T14:15:30Z | ---
license: apache-2.0
datasets:
- jigsaw_toxicity_pred
language:
- en
metrics:
- perplexity
---
# Model Card for `gminus`
This model is a `facebook/bart-large` fine-tuned on toxic comments from the `jigsaw_toxicity_pred` dataset.
## Model Details
This model is not intended to be used for plain inference, as it is very likely to produce toxic content.
It is instead intended to be used as a "utility model" for detecting and fixing toxic content, since its token probability distributions will likely differ from those of comparable models not trained or fine-tuned on toxic data.
Its name `gminus` refers to the _G-_ model in [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).
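As an illustration of that intended use, the sketch below scores a sentence with this model so its token-level distribution can be contrasted with a non-toxic baseline such as `facebook/bart-large`. This is a minimal example of the idea, not the full MARCO procedure; the input text is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("trustyai/gminus")
model = AutoModelForSeq2SeqLM.from_pretrained("trustyai/gminus")
model.eval()

text = "You are a complete idiot."  # illustrative input only
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # teacher-forced scoring of the same text

# Per-token log-probabilities under this (toxicity-biased) expert; comparing them with the
# scores of a model not fine-tuned on toxic data is one way to locate candidate toxic spans.
log_probs = torch.log_softmax(out.logits, dim=-1)
token_scores = log_probs.gather(-1, enc["input_ids"].unsqueeze(-1)).squeeze(-1)
print(list(zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), token_scores[0].tolist())))
```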
### Model Description
- **Developed by:** [tteofili]
- **Shared by:** [tteofili]
<!--- **Model type:** [More Information Needed]-->
<!--- **Language(s) (NLP):** [More Information Needed]-->
- **License:** [apache-2.0]
- **Finetuned from model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
<!--
### Model Sources [optional]
Provide the basic links for the model.
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
-->
## Uses
<!--
Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model.
### Direct Use
This section is for the model use without fine-tuning or plugging into a larger ecosystem/app.
[More Information Needed]
### Downstream Use [optional]
This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app
[More Information Needed]
### Out-of-Scope Use
This section addresses misuse, malicious use, and uses that the model will not work well for.
[More Information Needed]
-->
## Bias, Risks, and Limitations
This model is fine-tuned on toxic comments from `jigsaw_toxicity_pred` and is very likely to produce toxic content.
For this reason, this model should only be used in combination with other models for detecting/fixing toxic content; see, for example, [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).
<!--
This section is meant to convey both technical and sociotechnical limitations.
[More Information Needed]
### Recommendations
This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering.
[More Information Needed]
### Training Procedure
This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure.
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
**Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision
#### Speeds, Sizes, Times [optional]
- This section provides information about throughput, start/end time, checkpoint size if relevant, etc.
[More Information Needed]
-->
## Evaluation
This section describes the evaluation protocols and provides the results.
### Testing Data, Factors & Metrics
#### Testing Data
This model was tested on the `jigsaw_toxicity_pred` test set.
<!--
#### Factors
These are the things the evaluation is disaggregating by, e.g., subpopulations or domains.
[More Information Needed]
-->
#### Metrics
The model was evaluated using `perplexity` (on the MLM task).
### Results
Perplexity: _1.03_
<!--
#### Summary
## Model Examination [optional]
- Relevant interpretability work for the model goes here
[More Information Needed]
## Environmental Impact
Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
If relevant, include terms and calculations in this section that can help readers understand the model or model card.
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bond001/training-1698155518 | bond001 | 2023-10-24T14:14:57Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2023-10-24T14:00:52Z | ---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
model-index:
- name: training-1698155518
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training-1698155518
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Devesh355/Devesh | Devesh355 | 2023-10-24T14:06:42Z | 0 | 0 | null | [
"text-classification",
"region:us"
]
| text-classification | 2023-10-24T14:04:36Z | ---
pipeline_tag: text-classification
--- |
SimoneChieppa/Musical-genres-Classification-Hubert-V1-finetuned-gtzan_5sec | SimoneChieppa | 2023-10-24T13:58:51Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:SeyedAli/Musical-genres-Classification-Hubert-V1",
"base_model:finetune:SeyedAli/Musical-genres-Classification-Hubert-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-10-22T16:19:24Z | ---
license: apache-2.0
base_model: SeyedAli/Musical-genres-Classification-Hubert-V1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Musical-genres-Classification-Hubert-V1-finetuned-gtzan_5sec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Musical-genres-Classification-Hubert-V1-finetuned-gtzan_5sec
This model is a fine-tuned version of [SeyedAli/Musical-genres-Classification-Hubert-V1](https://huggingface.co/SeyedAli/Musical-genres-Classification-Hubert-V1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4037
- Accuracy: 0.9433
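A minimal usage sketch with the Transformers audio-classification pipeline is shown below; the input clip name and `top_k` value are illustrative, and audio files are resampled by the pipeline to the feature extractor's expected sampling rate.

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="SimoneChieppa/Musical-genres-Classification-Hubert-V1-finetuned-gtzan_5sec",
)
# Pass a path to an audio file (decoded via ffmpeg) or a raw 1-D numpy array of samples.
print(classifier("example_clip.wav", top_k=5))
```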
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4009 | 1.0 | 300 | 0.5050 | 0.8533 |
| 0.6049 | 2.0 | 600 | 0.3617 | 0.8867 |
| 0.1173 | 3.0 | 900 | 0.3947 | 0.9167 |
| 0.0613 | 4.0 | 1200 | 0.4008 | 0.92 |
| 0.0016 | 5.0 | 1500 | 0.4197 | 0.9133 |
| 0.055 | 6.0 | 1800 | 0.4103 | 0.92 |
| 0.0009 | 7.0 | 2100 | 0.4468 | 0.93 |
| 0.0003 | 8.0 | 2400 | 0.3962 | 0.95 |
| 0.0003 | 9.0 | 2700 | 0.4085 | 0.9467 |
| 0.0002 | 10.0 | 3000 | 0.4037 | 0.9433 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
anamaria7/q-Taxi-v3 | anamaria7 | 2023-10-24T13:57:40Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-24T13:57:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the Q-table loading helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="anamaria7/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:54:39Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T10:21:51Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
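A minimal usage sketch with Flair is shown below, assuming the checkpoint is stored in the standard Flair format so it can be loaded directly from the Hub; the example reuses the widget sentence from this card.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load this fine-tuned tagger directly from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence(
    "On Wednesday , a public dinner was given by the Conservative Burgesses of Leads , "
    "to the Conservative members of the Leeds Town Council , in the Music Hall , Albion-street ."
)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):  # label type assumed to be "ner"
    print(span)
```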
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [**0.5938**][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:54:39Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T10:18:35Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [**0.5754**][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:54:37Z | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T10:11:16Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [**0.5931**][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-24T13:54:36Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T10:00:29Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [**0.6565**][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:54:35Z | 6 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:51:51Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [**0.565**][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:54:34Z | 9 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:45:21Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [**0.628**][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:54:34Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:48:35Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [**0.528**][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:54:33Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:41:01Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|------------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [**0.5519**][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:54:33Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:36:45Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from 19C.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [**0.5812**][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:54:32Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:25:56Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
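The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("On Wednesday , a public dinner was given by the Conservative Burgesses of "
                    "Leads , to the Conservative members of the Leeds Town Council , in the Music Hall "
                    ", Albion-street , which was very numerously attended .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```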
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|------------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [**0.5782**][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:54:32Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:33:29Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
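The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("On Wednesday , a public dinner was given by the Conservative Burgesses of "
                    "Leads , to the Conservative members of the Leeds Town Council , in the Music Hall "
                    ", Albion-street , which was very numerously attended .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```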
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|------------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [**0.5489**][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:54:32Z | 0 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:30:16Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
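The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("On Wednesday , a public dinner was given by the Conservative Burgesses of "
                    "Leads , to the Conservative members of the Leeds Town Council , in the Music Hall "
                    ", Albion-street , which was very numerously attended .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```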
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [**0.5754**][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [0.6154][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-24T13:54:30Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T09:10:46Z | ---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: On Wednesday , a public dinner was given by the Conservative Burgesses of
Leads , to the Conservative members of the Leeds Town Council , in the Music Hall
, Albion-street , which was very numerously attended .
---
# Fine-tuned Flair Model on TopRes19th English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[TopRes19th English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md)
NER Dataset using hmBERT Tiny as backbone LM.
The TopRes19th dataset consists of NE-annotated historical English newspaper articles from the 19th century.
The following NEs were annotated: `BUILDING`, `LOC` and `STREET`.
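The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("On Wednesday , a public dinner was given by the Conservative Burgesses of "
                    "Leads , to the Conservative members of the Leeds Town Council , in the Music Hall "
                    ", Albion-street , which was very numerously attended .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```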
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|------------------|--------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.6681][1] | [0.5754][2] | [0.628][3] | [0.6565][4] | [0.658][5] | 0.6372 ± 0.0376 |
| `bs8-e10-lr5e-05` | [0.6491][6] | [0.5812][7] | [0.565][8] | [0.587][9] | [0.5938][10] | 0.5952 ± 0.0319 |
| `bs4-e10-lr3e-05` | [**0.6154**][11] | [0.5782][12] | [0.5519][13] | [0.5929][14] | [0.5931][15] | 0.5863 ± 0.0234 |
| `bs8-e10-lr3e-05` | [0.5978][16] | [0.5489][17] | [0.528][18] | [0.5483][19] | [0.5754][20] | 0.5597 ± 0.0272 |
[1]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-topres19th-en-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:53:14Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:27:45Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
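The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```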
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [**0.4053**][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:53:13Z | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:26:06Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
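The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```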
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|----------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [**0.474**][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:53:13Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:26:55Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
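The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```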
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [**0.1399**][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:53:12Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:25:02Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
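The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```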
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [**0.3567**][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
hmbert-tiny/flair-hipe-2022-newseye-sv | hmbert-tiny | 2023-10-24T13:53:11Z | 6 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:22:21Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
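The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("hmbert-tiny/flair-hipe-2022-newseye-sv")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```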
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [**0.5085**][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:53:10Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:20:14Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
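The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```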
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [**0.3856**][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:53:10Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:19:24Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
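The tagger can be used directly with the Flair library. The snippet below is a minimal usage sketch, assuming the checkpoint loads from the model hub under this repository's identifier and exposes its predictions under the `ner` label type:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3")

# Example input: the (OCR-noisy) widget sentence from above
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio "
                    "- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits "
                    ") , Eklund m . fl . förordade ut - skottets formulering af § 11 .")

# Run prediction and print the detected entity spans (label type "ner" is assumed)
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```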
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [**0.104**][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:53:09Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:18:35Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
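Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3"
)

# example sentence (pre-tokenized historic Swedish newspaper text from the widget)
sentence = Sentence(
    "Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , "
    "hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , "
    "Eklund m . fl . förordade ut - skottets formulering af § 11 ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```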
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [**0.4674**][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:53:08Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:15:38Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
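Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2"
)

# example sentence (pre-tokenized historic Swedish newspaper text from the widget)
sentence = Sentence(
    "Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , "
    "hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , "
    "Eklund m . fl . förordade ut - skottets formulering af § 11 ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```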
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|------------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [**0.1521**][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:53:07Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:13:45Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
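Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2"
)

# example sentence (pre-tokenized historic Swedish newspaper text from the widget)
sentence = Sentence(
    "Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , "
    "hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , "
    "Eklund m . fl . förordade ut - skottets formulering af § 11 ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```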
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [**0.38**][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:53:07Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:14:49Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
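Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

# example sentence (pre-tokenized historic Swedish newspaper text from the widget)
sentence = Sentence(
    "Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , "
    "hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , "
    "Eklund m . fl . förordade ut - skottets formulering af § 11 ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```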
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [**0.481**][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [0.1194][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-24T13:53:06Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:11:51Z | ---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
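Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1"
)

# example sentence (pre-tokenized historic Swedish newspaper text from the widget)
sentence = Sentence(
    "Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen , "
    "hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits ) , "
    "Eklund m . fl . förordade ut - skottets formulering af § 11 ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```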
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|------------------|--------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.4847][1] | [0.481][2] | [0.4674][3] | [0.5085][4] | [0.474][5] | 0.4831 ± 0.0157 |
| `bs8-e10-lr5e-05` | [0.3983][6] | [0.4194][7] | [0.3856][8] | [0.4435][9] | [0.4053][10] | 0.4104 ± 0.0222 |
| `bs4-e10-lr3e-05` | [0.3761][11] | [0.38][12] | [0.3236][13] | [0.3594][14] | [0.3567][15] | 0.3592 ± 0.0223 |
| `bs8-e10-lr3e-05` | [**0.1194**][16] | [0.1521][17] | [0.104][18] | [0.1294][19] | [0.1399][20] | 0.129 ± 0.0185 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:51:30Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T21:12:49Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
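Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

# example sentence (pre-tokenized historic French newspaper text from the widget)
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir "
    "dans la question d ' Orient ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```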
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [**0.5067**][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:51:29Z | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:58:07Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
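Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5"
)

# example sentence (pre-tokenized historic French newspaper text from the widget)
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir "
    "dans la question d ' Orient ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```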
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|-----------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [**0.479**][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-24T13:51:27Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:43:24Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
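Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# example sentence (pre-tokenized historic French newspaper text from the widget)
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir "
    "dans la question d ' Orient ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```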
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [**0.5685**][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:51:25Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:27:22Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
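Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3"
)

# example sentence (pre-tokenized historic French newspaper text from the widget)
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir "
    "dans la question d ' Orient ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```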
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|------------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [**0.4783**][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:51:25Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:22:57Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
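Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3"
)

# example sentence (pre-tokenized historic French newspaper text from the widget)
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir "
    "dans la question d ' Orient ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```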
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [**0.5555**][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:51:24Z | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:17:11Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
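Below is a minimal usage sketch based on the standard Flair API (the example sentence is the widget text above; loading the checkpoint directly via its hub ID is assumed to work):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3"
)

# example sentence (pre-tokenized historic French newspaper text from the widget)
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir "
    "dans la question d ' Orient ."
)

# run NER prediction and print the annotated sentence
tagger.predict(sentence)
print(sentence)
```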
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|------------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [**0.5041**][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:51:23Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:11:24Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [**0.5273**][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:51:22Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T20:02:33Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [**0.5584**][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:51:21Z | 7 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T19:56:47Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|------------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [**0.5103**][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-24T13:51:20Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T19:46:28Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|------------------|--------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [0.5251][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [**0.4815**][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-24T13:51:19Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T19:36:14Z | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|------------------|--------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.5782][1] | [0.5584][2] | [0.5555][3] | [0.5685][4] | [0.5422][5] | 0.5606 ± 0.0136 |
| `bs8-e10-lr5e-05` | [0.5486][6] | [0.5273][7] | [0.5282][8] | [0.5288][9] | [0.5067][10] | 0.5279 ± 0.0148 |
| `bs4-e10-lr3e-05` | [**0.5251**][11] | [0.5103][12] | [0.5041][13] | [0.5124][14] | [0.479][15] | 0.5062 ± 0.017 |
| `bs8-e10-lr3e-05` | [0.4815][16] | [0.4879][17] | [0.4783][18] | [0.4648][19] | [0.4628][20] | 0.4751 ± 0.0109 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:51:15Z | 0 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:01:10Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
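
The tagger can be loaded directly with Flair; a minimal sketch assuming the standard `SequenceTagger` API and a `ner` label type:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned Finnish tagger from the Hugging Face model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

# tag the example sentence from the widget above
sentence = Sentence(
    "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia "
    "olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa "
    "San Franciscon näyttelyn puolesta ."
)
tagger.predict(sentence)

# print entity text and predicted tag for each recognized span (assumes "ner" label type)
for span in sentence.get_spans("ner"):
    print(span.text, "->", span.tag)
```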
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [**0.2164**][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:51:14Z | 0 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-20T00:00:20Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [**0.1056**][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:51:14Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T23:59:31Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|-----------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [**0.2961**][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:51:13Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T23:58:29Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [**0.1848**][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:51:10Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T23:53:25Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [**0.2667**][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:51:08Z | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T23:50:43Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as the backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|------------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [**0.2442**][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:51:06Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T23:47:58Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [**0.2991**][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [0.2365][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-24T13:51:04Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fi",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T23:45:51Z | ---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|-----------------|--------------|--------------|--------------|--------------|-----------------|
| `bs4-e10-lr5e-05` | [0.287][1] | [0.2991][2] | [0.2927][3] | [0.3193][4] | [0.2961][5] | 0.2988 ± 0.0123 |
| `bs8-e10-lr5e-05` | [**0.2365**][6] | [0.2617][7] | [0.2667][8] | [0.2893][9] | [0.2164][10] | 0.2541 ± 0.0282 |
| `bs4-e10-lr3e-05` | [0.1927][11] | [0.2165][12] | [0.2442][13] | [0.2513][14] | [0.1848][15] | 0.2179 ± 0.0297 |
| `bs8-e10-lr3e-05` | [0.1222][16] | [0.1524][17] | [0.1858][18] | [0.1063][19] | [0.1056][20] | 0.1345 ± 0.0344 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
anamaria7/q-FrozenLake-v1-4x4-noSlippery | anamaria7 | 2023-10-24T13:47:31Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-10-24T13:47:29Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="anamaria7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
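The snippet above relies on the `load_from_hub` helper from the Deep RL course notebook. A self-contained sketch is shown below; it assumes the pickled dictionary follows the course format (an `"env_id"` entry, as used above, and a `"qtable"` array) and that the non-slippery map matches the model name:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dictionary from the Hub.
path = hf_hub_download(repo_id="anamaria7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The model name indicates the non-slippery map, hence is_slippery=False.
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the stored Q-table (assumed to be under the "qtable" key).
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```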
|
viditsorg/autotrain-mbart-finetune-hindi-97080146798 | viditsorg | 2023-10-24T13:47:11Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:viditsorg/autotrain-data-mbart-finetune-hindi",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-10-24T12:57:29Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- viditsorg/autotrain-data-mbart-finetune-hindi
co2_eq_emissions:
emissions: 0.6034093912734243
---
# Model Trained On Hindi Podcast Dataset
- Problem type: Summarization
- Model ID: 97080146798
- CO2 Emissions (in grams): 0.6034
## Validation Metrics
- Loss: 0.866
- Rouge1: 0.885
- Rouge2: 0.000
- RougeL: 0.885
- RougeLsum: 0.885
- Gen Len: 130.752
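Besides the cURL call below, the checkpoint can be loaded locally with the `transformers` summarization pipeline. This is a minimal sketch; it assumes the repository ships the usual tokenizer and seq2seq weights, and the generation parameters are only illustrative:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="viditsorg/autotrain-mbart-finetune-hindi-97080146798",
)

text = "यहाँ आपके हिंदी पॉडकास्ट का ट्रांसक्रिप्ट आता है ..."  # replace with a real transcript
print(summarizer(text, max_length=130, min_length=30)[0]["summary_text"])
```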
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/viditsorg/autotrain-mbart-finetune-hindi-97080146798
``` |
Hansaht/phi-1_5-finetuned-alpaca-gpt4 | Hansaht | 2023-10-24T13:22:58Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-10-24T12:35:37Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-alpaca-gpt4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-alpaca-gpt4
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
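Expressed as `transformers` `TrainingArguments`, these settings correspond roughly to the sketch below (the output directory is assumed; `training_steps` maps to `max_steps`):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phi-1_5-finetuned-alpaca-gpt4",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=1000,
)
```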
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
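A minimal inference sketch is given below. The base model ships custom `mixformer-sequential` code, so `trust_remote_code=True` is required; the Alpaca-style prompt format is only an illustrative assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hansaht/phi-1_5-finetuned-alpaca-gpt4"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "### Instruction:\nExplain what fine-tuning is.\n\n### Response:\n"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```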
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:17:41Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T13:57:33Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
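A minimal Flair usage sketch for this checkpoint (assuming it loads directly from the Hub under this repository id; the example sentence is the widget text above):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka "
    "ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)
tagger.predict(sentence)

# Print the predicted PER/LOC/ORG/HumanProd spans.
for entity in sentence.get_spans("ner"):
    print(entity)
```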
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [**0.2657**][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:17:41Z | 4 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T14:08:57Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|-----------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [**0.2788**][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:17:40Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T13:46:07Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [**0.2736**][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-24T13:17:40Z | 5 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T13:16:16Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [**0.2866**][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-24T13:17:40Z | 7 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T13:31:11Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [**0.2598**][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-24T13:17:39Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T12:53:28Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|-----------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [**0.268**][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-24T13:17:39Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T13:04:54Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|------------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [**0.2812**][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:17:38Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T12:12:12Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
And report micro F1-score on development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|------------------|--------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [**0.2732**][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:17:37Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T12:00:45Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
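The snippet below is a minimal inference sketch for this checkpoint. It assumes the [Flair](https://github.com/flairNLP/flair) library is installed and that the tagger can be loaded straight from the model hub by its repo id; the widget sentence above is reused as input.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the model hub
# (assumes Flair resolves this repo id directly)
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3"
)

# Historical German example taken from the widget above
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen "
    "Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)

# Predict and print detected entities (PER, LOC, ORG, HumanProd)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```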
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [**0.276**][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-24T13:17:37Z | 2 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T11:45:49Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
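The snippet below is a minimal inference sketch for this checkpoint. It assumes the [Flair](https://github.com/flairNLP/flair) library is installed and that the tagger can be loaded straight from the model hub by its repo id; the widget sentence above is reused as input.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the model hub
# (assumes Flair resolves this repo id directly)
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3"
)

# Historical German example taken from the widget above
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen "
    "Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)

# Predict and print detected entities (PER, LOC, ORG, HumanProd)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```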
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [**0.2718**][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:17:36Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T11:08:07Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
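The snippet below is a minimal inference sketch for this checkpoint. It assumes the [Flair](https://github.com/flairNLP/flair) library is installed and that the tagger can be loaded straight from the model hub by its repo id; the widget sentence above is reused as input.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the model hub
# (assumes Flair resolves this repo id directly)
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

# Historical German example taken from the widget above
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen "
    "Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)

# Predict and print detected entities (PER, LOC, ORG, HumanProd)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```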
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|-----------------|--------------|--------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [**0.288**][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:17:36Z | 1 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T11:19:28Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
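The snippet below is a minimal inference sketch for this checkpoint. It assumes the [Flair](https://github.com/flairNLP/flair) library is installed and that the tagger can be loaded straight from the model hub by its repo id; the widget sentence above is reused as input.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the model hub
# (assumes Flair resolves this repo id directly)
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2"
)

# Historical German example taken from the widget above
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen "
    "Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)

# Predict and print detected entities (PER, LOC, ORG, HumanProd)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```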
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|------------------|--------------|--------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [0.311][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [**0.2771**][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-24T13:17:35Z | 3 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-tiny-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-tiny-historic-multilingual-cased",
"license:mit",
"region:us"
]
| token-classification | 2023-10-19T10:53:07Z | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-tiny-historic-multilingual-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT Tiny as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
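The snippet below is a minimal inference sketch for this checkpoint. It assumes the [Flair](https://github.com/flairNLP/flair) library is installed and that the tagger can be loaded straight from the model hub by its repo id; the widget sentence above is reused as input.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the model hub
# (assumes Flair resolves this repo id directly)
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2"
)

# Historical German example taken from the widget above
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen "
    "Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)

# Predict and print detected entities (PER, LOC, ORG, HumanProd)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```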
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[5e-05, 3e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|----------------|--------------|--------------|--------------|-----------------|
| `bs8-e10-lr5e-05` | [0.303][1] | [0.2873][2] | [0.2881][3] | [0.2866][4] | [0.2788][5] | 0.2888 ± 0.0088 |
| `bs4-e10-lr3e-05` | [0.2984][6] | [**0.311**][7] | [0.2718][8] | [0.2702][9] | [0.2598][10] | 0.2822 ± 0.0215 |
| `bs4-e10-lr5e-05` | [0.2907][11] | [0.288][12] | [0.276][13] | [0.268][14] | [0.2736][15] | 0.2793 ± 0.0097 |
| `bs8-e10-lr3e-05` | [0.2863][16] | [0.2771][17] | [0.2732][18] | [0.2812][19] | [0.2657][20] | 0.2767 ± 0.0078 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_tiny-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|